Tag: algorithms

Entries for tag "algorithms", ordered from most recent. Entry count: 65.

Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.


# Bit Tricks with Modulo

Sat 26 Feb 2011

I like to emphasize that programming is not as similar to maths as some people say. We are used to thinking about numbers in the decimal numeral system, and the operations that seem most natural to us are arithmetic computations like addition, multiplication and division. A computer, on the other hand, works with numbers stored in the binary system, and bitwise operations are the most efficient and natural for it. Another difference is the modulo operation (the remainder after division) - a function not used very often in math classes, but very important in programming. That's what I'd like to talk about here.

Modulo in C++ is done using the % operator, but just like division, it is quite slow. So for the common operation of cycling through subsequent numbers from a limited range:

x2 = (x1 + 1) % n;

a few alternatives have been developed for some special cases, using bitwise operations.

First, when the divisor is a power of 2, we can use bitwise AND (operator &) to mask only the least significant bits of the addition result. But this optimization can be done automatically by the compiler (at least by my Visual C++ 2010).

// 7 = 0b111, so it masks 3 least significant bits.
// Equal to (x1 + 1) % 8
x2 = (x1 + 1) & 7;

When we only cycle between two values - 0 and 1 - we can use XOR (operator ^):

// Equal to (x1 + 1) % 2, for values 0, 1.
x2 = x1 ^ 1;

Finally, when we cycle between three values - 0, 1, 2 (like when indexing x, y, z components of a 3D vector), we can use the trick invented by Christer Ericson (at the end of the linked page). I came across it recently on Twitter and that's what inspired me to write this article.

// Equal to (x1 + 1) % 3, for values 0, 1, 2.
x2 = (1 << x1) & 3;
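
Here is a tiny program (my own sanity check, not from the linked page) that prints a few steps of each cycle, so you can confirm the tricks behave exactly like their modulo versions:

#include <cstdio>

int main()
{
  unsigned x;
  // (x + 1) & 7 == (x + 1) % 8
  for (x = 0; x < 8; ++x) printf("%u ", (x + 1) & 7);  // 1 2 3 4 5 6 7 0
  printf("\n");
  // x ^ 1 == (x + 1) % 2, for x in {0, 1}
  for (x = 0; x < 2; ++x) printf("%u ", x ^ 1);        // 1 0
  printf("\n");
  // (1 << x) & 3 == (x + 1) % 3, for x in {0, 1, 2}
  for (x = 0; x < 3; ++x) printf("%u ", (1 << x) & 3); // 1 2 0
  printf("\n");
  return 0;
}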

Comments | #math #c++ #algorithms Share

# My Implementation of Diff Algorithm

Tue 16 Nov 2010

An algorithm for comparing two sequences of data is very useful. For example, version control systems like SVN or Git use it to highlight differences between two versions of a text file. But what if you need to code such an algorithm from scratch? Today it was my task and it turned out to be not so simple, so I want to share my sample implementation in C++, which I came up with based on some readings from the Internet and my previous experience with calculating the Levenshtein distance.

Sometimes the compared elements are just lines of a text file, either equal or not, but I needed something more complex, so let's say an item is defined as follows:

struct Item
{
  unsigned ID;
  const char *Text;
};

For items from opposite sides to be comparable, they must have the same ID value. Text may differ, but I'd prefer to match items with the same Text (treated as a NULL-terminated string), or at least the same Text length.

Here are sample data:

const Item items1[] = {
  { 1, "Foo" },
  { 1, "Foo" },
  { 2, "Bar" },
  { 4, "Foobar" },
};
const Item items2[] = {
  { 1, "Foo" },
  { 1, "Firefox" },
  { 1, "Another one" },
  { 2, "Boo" },
  { 5, "Last one" },
};
const unsigned count1 = _countof(items1);
const unsigned count2 = _countof(items2);

The final result may look like this:

1              Foo   <->   1              Foo
--                   <->   1          Firefox
1              Foo   <->   1      Another one
2              Bar   <->   2              Boo
4           Foobar   <->   --
--                   <->   5         Last one
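
The full entry contains the complete implementation. As a rough illustration of the underlying idea, here is a minimal LCS-style dynamic programming sketch of the alignment step (my simplification: the hypothetical ItemsMatch below compares only ID, while as described above I'd also prefer matching equal Text):

#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical helper: items are comparable only when their IDs are equal.
bool ItemsMatch(const Item &a, const Item &b) { return a.ID == b.ID; }

// Returns the alignment as pairs of indices into items1/items2;
// -1 on either side marks a gap (the "--" in the output above).
std::vector<std::pair<int, int> > Diff(
  const Item *items1, unsigned count1,
  const Item *items2, unsigned count2)
{
  // d[i][j] = minimal number of gaps needed to align the first i items
  // of items1 with the first j items of items2.
  std::vector<std::vector<unsigned> > d(count1 + 1,
    std::vector<unsigned>(count2 + 1, 0));
  for (unsigned i = 0; i <= count1; ++i) d[i][0] = i;
  for (unsigned j = 0; j <= count2; ++j) d[0][j] = j;
  for (unsigned i = 1; i <= count1; ++i)
    for (unsigned j = 1; j <= count2; ++j)
    {
      d[i][j] = std::min(d[i-1][j], d[i][j-1]) + 1;
      if (ItemsMatch(items1[i-1], items2[j-1]))
        d[i][j] = std::min(d[i][j], d[i-1][j-1]);
    }
  // Walk back from the bottom-right corner to recover the alignment.
  std::vector<std::pair<int, int> > result;
  unsigned i = count1, j = count2;
  while (i > 0 || j > 0)
  {
    if (i > 0 && j > 0 && ItemsMatch(items1[i-1], items2[j-1]) &&
        d[i][j] == d[i-1][j-1])
      { --i; --j; result.push_back(std::make_pair((int)i, (int)j)); }
    else if (j > 0 && d[i][j] == d[i][j-1] + 1)
      { --j; result.push_back(std::make_pair(-1, (int)j)); }
    else
      { --i; result.push_back(std::make_pair((int)i, -1)); }
  }
  std::reverse(result.begin(), result.end());
  return result;
}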

Read full entry > | Comments | #algorithms Share

# Music Analysis - Spectrogram

Wed 14 Jul 2010

I've started learning about sound analysis. I have some gaps in my education when it comes to digital signal processing (greetings to the professor who taught this subject at our university ;) but Wikipedia comes to the rescue. As a starting point, here is a spectrogram I've made from one of my recent favourite songs: Sarge Devant feat. Emma Hewitt - Take Me With You.

Now I'm going to explain in detail how I've done this by showing some C++ code. First I had to figure out how to decode an MP3, OGG or other compressed sound format. FMOD is my favourite sound library and I knew it can play many file formats. It took me some time though to find functions for quickly decoding uncompressed PCM data from a song without actually playing it for the whole 3 minutes. I found on the FMOD forum that Sound::seekData and Sound::readData can do the job. Finally I ended up with this code (all code shown here is stripped of error checking, which I actually do everywhere):
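
The code itself is in the full entry; below is a sketch of what that decoding step can look like using the FMOD Ex C++ API (createSound, getLength, seekData, readData). I'm assuming here that the file decodes to 16-bit PCM, which in real code should be queried with Sound::getFormat:

#include <fmod.hpp>
#include <vector>

std::vector<short> DecodeSamples(FMOD::System *system, const char *path)
{
  // FMOD_OPENONLY opens the file without pre-buffering it for playback,
  // FMOD_ACCURATETIME gives an exact length even for VBR files.
  FMOD::Sound *sound = NULL;
  system->createSound(path, FMOD_OPENONLY | FMOD_ACCURATETIME, NULL, &sound);

  unsigned lengthBytes = 0;
  sound->getLength(&lengthBytes, FMOD_TIMEUNIT_PCMBYTES);

  std::vector<short> samples(lengthBytes / sizeof(short));
  unsigned bytesRead = 0;
  sound->seekData(0); // Rewind to the first PCM sample.
  sound->readData(&samples[0], lengthBytes, &bytesRead);
  samples.resize(bytesRead / sizeof(short));

  sound->release();
  return samples;
}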

Read full entry > | Comments | #math #libraries #music #rendering #dsp #algorithms Share

# Timespan from/to String Conversion

Tue 29 Jun 2010

I'd like to show a piece of code I've written today. These are functions for converting between a timespan, stored in an int64 as a number of milliseconds, and a string in the form "HH:MM:SS.MMM" (hours, minutes, seconds, milliseconds). That's IMHO a convenient way of presenting a timespan to the user. I'll include these functions in the next version of my CommonLib library, to work with a raw millisecond number as well as the TIMESPAN and/or GameTime types.

The code is in C++ and uses an STL string and my own StrToUint and UintToStr functions. All suggestions on how these algorithms could be optimized are welcome.
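
The full entry has my CommonLib version. A minimal sketch of the same two conversions using only the standard library (so without StrToUint/UintToStr; it also assumes a non-negative timespan) could look like this:

#include <cstdio>
#include <string>

typedef long long int64; // Milliseconds are stored in an int64, as above.

// Convert a number of milliseconds to "HH:MM:SS.MMM".
std::string TimespanToStr(int64 ms)
{
  char buf[32];
  sprintf(buf, "%02d:%02d:%02d.%03d",
    (int)(ms / 3600000),    // hours (may take more than 2 digits)
    (int)(ms / 60000 % 60), // minutes
    (int)(ms / 1000 % 60),  // seconds
    (int)(ms % 1000));      // milliseconds
  return buf;
}

// Parse "HH:MM:SS.MMM" back to milliseconds. Returns false on bad input.
bool StrToTimespan(const char *s, int64 &outMs)
{
  int h, m, sec, msec;
  if (sscanf(s, "%d:%d:%d.%d", &h, &m, &sec, &msec) != 4)
    return false;
  outMs = ((int64)h * 3600 + m * 60 + sec) * 1000 + msec;
  return true;
}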

Read full entry > | Comments | #algorithms #c++ Share

# LZMA SDK - How to Use

Sat 08 May 2010

What do you think about when I say the word "compression"? If you currently study computer science, you probably think about the details of algorithms like RLE, Huffman coding or the Burrows-Wheeler transform. If not, then you surely associate compression with archive file formats such as ZIP and RAR. But there is something in between - libraries that let you compress some data - that implement a compression algorithm but do not provide a ready file format to pack multiple files and directories into one archive. Such a library is useful in gamedev for creating a VFS (Virtual File System). Probably the most popular one is zlib - a free C library that implements the Deflate algorithm. I've recently discovered another one - LZMA. Its SDK is also free (public domain) and the basic version is a small C library (C++, C# and Java APIs are also available, as well as some additional tools). The library uses the LZMA algorithm (Lempel–Ziv–Markov chain algorithm, the same as in the 7z archive format), which AFAIK has a better compression ratio than Deflate. So I've decided to start using it. Here is what I've learned:

If you decide to use only the C API, it's enough to add some C and H files to your project - the ones from the LZMASDK\C directory (without subdirectories). Alternatively, you can compile them as a static library.

There is a little bit of theory behind the LZMA SDK API. First, the term props means a 5-byte header where the library stores some settings. It must be saved along with the compressed data and given back to the library before decompression.

Next, the dictionary size. It is the size of a temporary memory block used during compression and decompression. The dictionary size can be set during compression and is then saved inside props. The library uses a dictionary of the same size during decompression. The default dictionary size is 16 MB, so IMHO it's worth changing, especially as I haven't noticed any meaningful drop in compression ratio after setting it to 64 KB.

And finally, an end mark can be saved at the end of the compressed data. You can use it so the decompression code can determine the end of the data. Alternatively you can decide not to use the end mark, but then you must remember the exact size of the uncompressed data somewhere. I prefer the second method, because remembering the data size takes only 4 bytes (up to the 4 GB limit) and can be useful anyway, while compressed data finished with an end mark is about 6 bytes longer than without it.

Compressing a full block of data with a single call is simple. You can find the appropriate functions in the LzmaLib.h header. Here is how you can compress a vector of bytes using the LzmaCompress function:
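
(The full entry shows my actual code; the sketch below is a reconstruction based on the LzmaCompress signature from LzmaLib.h - the output buffer sizing heuristic in particular is my own guess, not something the SDK mandates.)

#include <vector>
#include "LzmaLib.h"

bool Compress(const std::vector<unsigned char> &src,
  std::vector<unsigned char> &dst,
  unsigned char props[LZMA_PROPS_SIZE]) // LZMA_PROPS_SIZE == 5
{
  // Reserve some slack: incompressible data can grow slightly.
  size_t destLen = src.size() + src.size() / 3 + 128;
  dst.resize(destLen);
  size_t propsSize = LZMA_PROPS_SIZE;
  int res = LzmaCompress(
    &dst[0], &destLen, &src[0], src.size(),
    props, &propsSize,
    5,              // level: 0..9
    64 * 1024,      // dictionary size: 64 KB, as discussed above
    -1, -1, -1, -1, // lc, lp, pb, fb: -1 means library defaults
    1);             // number of threads
  if (res != SZ_OK)
    return false;
  dst.resize(destLen); // destLen now holds the actual compressed size.
  return true;
}

Decompression with the "remember the uncompressed size" method from the previous paragraph is symmetrical - LzmaUncompress also comes from LzmaLib.h:

bool Decompress(const std::vector<unsigned char> &src,
  std::vector<unsigned char> &dst, // Resized by the caller to the known uncompressed size.
  const unsigned char props[LZMA_PROPS_SIZE])
{
  size_t destLen = dst.size(), srcLen = src.size();
  return LzmaUncompress(&dst[0], &destLen, &src[0], &srcLen,
    props, LZMA_PROPS_SIZE) == SZ_OK;
}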

Read full entry > | Comments | #commonlib #libraries #algorithms Share

# My Algorithms for Sorted Arrays

Sun 21 Mar 2010

Yesterday I wrote about The Beauty of Sorted Arrays. I believe the C++ STL is not perfect, so I've written some additional code for handling such a data structure. It can be found in my CommonLib library.

I once wanted to have a sorting algorithm, just like std::sort, but using insertion sort. std::sort is of course great, has O(n log n) average complexity and AFAIK internally uses the introsort algorithm (an improved version of quicksort). But if we know that the collection is already almost sorted, insertion sort will work faster. So here is my code:

// Basic version that uses operator <
template<class Iterator>
void InsertionSort(Iterator b, Iterator e)
{
  if (b == e) return;
  for (Iterator j = b + 1; j < e; j++)
  {
    typename std::iterator_traits<Iterator>::value_type key;
    key = *j;
    Iterator i = j;
    while (i > b && key < *(i-1))
    {
      *i = *(i-1);
      i--;
    } 
    *i = key;
  }
}

// Overloaded version with explicit comparator
template<class Iterator, typename Compare>
void InsertionSort(Iterator b, Iterator e, Compare Comp)
{
  if (b == e) return;
  for (Iterator j = b + 1; j < e; j++)
  {
    typename std::iterator_traits<Iterator>::value_type key;
    key = *j;
    Iterator i = j;
    while (i > b && Comp(key, *(i-1)))
    {
      *i = *(i-1);
      i--;
    } 
    *i = key;
  }
}
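
For example, re-sorting an almost sorted vector, which is exactly where insertion sort shines:

#include <algorithm>
#include <vector>

std::vector<int> v;
for (int i = 0; i < 1000; ++i)
  v.push_back(i);
std::swap(v[10], v[11]);           // Only one pair is now out of order.
InsertionSort(v.begin(), v.end()); // Runs in nearly linear time for this input.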

Another problem is finding an item in an ordered array by some key. I can see two solutions in C++, both unsatisfactory. The first one is to write your own comparator or overload operator < for your item structure to compare items by key, like this:

struct Item
{
  int Id;
  DWORD Color;
  std::string Name;

  bool operator < (const Item &rhs) const { return Id < rhs.Id; }
};

The problem is that a C++ comparator always compares two full items, so to use this comparison for finding an item in the array, we must construct and pass a full Item object, even though only the Id field is used.

Item item;
item.Id = 1;
bool contains = std::binary_search(vec.begin(), vec.end(), item);

Second solution is to use std::map container. It has separate key and value type so you can search by key:

std::map<int, Item>::iterator foundIt = map.find(1);
if (foundIt != map.end())
  std::cout << "Found ID=" << foundIt->first
    << ", Name=" << foundIt->second.Name << std::endl;

But the problem is that the key is then separate from the value type, so the Id field should not be included in the Item structure because it's redundant. We should instead use std::pair<int, Item> as the type of items contained in the map, which is aliased as std::map<int, Item>::value_type.

I wanted to have more convenient way to quickly search an ordered array by key, so I've coded these two function templates:

template <typename IterT, typename KeyT, typename CmpT>
IterT FirstNotLessIndex(IterT beg, IterT end, const KeyT &key, CmpT cmp)
{
  unsigned down = 0, up = (end - beg);
  while(down < up)
  {
    unsigned mid = (down + up) / 2;
    int res = cmp(key, *(beg + mid));
    if(res > 0)
      down = mid + 1;
    else
      up = mid;
  }
  return beg + down;
}

template <typename IterT, typename KeyT, typename CmpT>
IterT BinarySearch(IterT beg, IterT end, const KeyT &key, CmpT cmp)
{
  IterT it = FirstNotLessIndex<IterT, KeyT, CmpT>(beg, end, key, cmp);
  if (it == end || cmp(key, *it) != 0)
    return end;
  return it;
}

The first one performs a binary search to find the iterator to the first element not less than (that is, greater than or equal to) the specified key. It's perfect for determining the place to insert a new item into the array. The second function uses the first one to check whether an item with an exactly matching key exists in the array, and returns an iterator pointing to it, or the end iterator if not found.

As you can see, both functions take the type of the key as a template parameter, which may be different from the full item type pointed to by the iterator. They also take a comparator functor which is always called with parameters (key, item) and should return int. The comparator should return a number less than, equal to, or greater than zero according to the result of the comparison - just like strcmp, memcmp and other functions from old C.

Now my example can be written like this:

struct Item
{
  int Id;
  DWORD Color;
  std::string Name;

  static int Cmp(int id, const Item &rhs) { return id - rhs.Id; }
};

bool contains =
  common::BinarySearch(vec.begin(), vec.end(), 1, &Item::Cmp) != vec.end();

Having such a universal comparison function, it's easy to derive all the comparison operators from it:

bool operator <  (const Item &rhs) const { return Cmp(Id, rhs) <  0; }
bool operator >  (const Item &rhs) const { return Cmp(Id, rhs) >  0; }
bool operator <= (const Item &rhs) const { return Cmp(Id, rhs) <= 0; }
bool operator >= (const Item &rhs) const { return Cmp(Id, rhs) >= 0; }
bool operator == (const Item &rhs) const { return Cmp(Id, rhs) == 0; }
bool operator != (const Item &rhs) const { return Cmp(Id, rhs) != 0; }

The opposite conversion is also possible. If you have a type with an accessible operator < (like float, or your own type with an overloaded operator), you can convert it to a comparator returning int with this function:

template <typename T>
inline int UniversalCmp(const T &a, const T &b)
{
  if (a < b) return -1;
  if (b < a) return  1;
  return 0;
}
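
For example, this makes a plain float (or any type with operator <) usable as a key with the BinarySearch function above:

std::vector<float> v; // Kept sorted ascending.
v.push_back(1.f); v.push_back(2.f); v.push_back(3.f);
bool contains =
  BinarySearch(v.begin(), v.end(), 2.f, &UniversalCmp<float>) != v.end();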

Comments | #commonlib #stl #c++ #algorithms Share

# The Beauty of Sorted Arrays

Sat 20 Mar 2010

A one-dimensional array (also called a vector) is a very simple data structure. We were taught that inserting or deleting elements in the middle of an array is slow, because we have to move (by rewriting) all the following elements. That's why books about C++ teach that for sorted collections - those that provide a fast search operation - it's better to use containers like std::set or std::map.

But those who code games in C++ and care about performance know that today's processors (CPUs) are very fast at doing calculations and processing lots of data, while access to RAM becomes more and more of a bottleneck. According to the great presentation Pitfalls of Object Oriented Programming by Tony Albrecht from Sony, a cache miss (access to some data from memory which is not currently in the CPU cache) costs about 400 cycles! Dynamic memory allocation (like operator new in C++) is also a slow operation. That's why data structures that use separate nodes linked with pointers (like std::list, std::set, std::map) are slower than std::vector, and it's often better to keep a sorted collection of elements in a vector, even if you have hundreds or thousands of elements and must sometimes insert or delete them from the middle.

Unfortunately, the C++ STL doesn't provide a "std::ordered_vector" container that would keep items sorted while allocating a single chunk of contiguous memory. We have to do it ourselves, like this:

First, we have to choose a way to compare elements. There are multiple solutions (read about comparators in C++), but the simplest one is to just overload operator < in your element structure.

#include <vector>
#include <string>
#include <algorithm>

struct Item
{
  int Id;
  DWORD Color;
  std::string Name;

  Item() { }
  Item(int id, DWORD color, const std::string &name)
    : Id(id), Color(color), Name(name) { }

  bool operator < (const Item &rhs) const { return Id < rhs.Id; }
};

typedef std::vector<Item> ItemVector;

ItemVector vec;
Item item;

Here is how we insert an item into sorted vector:

item = Item(1, 0xFF0000, "Red");
ItemVector::iterator itToInsert = std::lower_bound(
  vec.begin(), vec.end(), item);
vec.insert(itToInsert, item);

Unfortunately we have to deal with iterators instead of simple indices.

To check if the vector contains element with particular Id:

item = Item(1, 0, std::string());
bool contains = std::binary_search(vec.begin(), vec.end(), item);

Here comes another nuisance: we have to create a full Item object just to use its Id field for comparison. I'll show tomorrow how to overcome this problem.

It's also annoying that the binary_search algorithm returns bool instead of an iterator that would show us where the found item is. To find an item and obtain its iterator (and from that its index), we have to use the lower_bound algorithm. This algorithm is very universal but also hard to understand. It quickly searches a sorted collection and returns the position of the first element whose value is greater than or equal to the specified value. It's perfect for determining the place to insert a new item into, but to find an existing element, we have to combine it with this if:

item = Item(1, 0, std::string());
ItemVector::iterator findIt = std::lower_bound(vec.begin(), vec.end(), item);
if (findIt != vec.end() && findIt->Id == item.Id)
{
  size_t foundIndex = std::distance(vec.begin(), findIt);
}
else
{
  // Not found.
}

And finally, to remove an item from the vector when we know its index, we use this trick:

size_t indexToDelete = 0;
vec.erase(vec.begin() + indexToDelete);
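
Combining the find and the erase, removing the item with a given Id (if present) looks like this:

item = Item(1, 0, std::string());
ItemVector::iterator delIt = std::lower_bound(vec.begin(), vec.end(), item);
if (delIt != vec.end() && delIt->Id == item.Id)
  vec.erase(delIt);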

Well, C++ STL is very general, universal, powerful and... very complicated, while standard libraries of other programming languages also have some curiosities in their array classes.

For example, ActionScript 3 has a single method to both insert and remove items:

function splice(startIndex:int, deleteCount:uint, ...  values):Array

To delete some items, you pass a non-zero deleteCount. To insert some items, you pass them as the values parameter.

In .NET, the System.Collections.Generic.List<T> class (which is actually an array - the name is a misnomer) has a very smart BinarySearch method. It returns the "zero-based index of item (...), if item is found; otherwise, a negative number that is the bitwise complement of the index of the next element that is larger than item or, if there is no larger element, the bitwise complement of Count." It can be used both to find an existing item in the array and to determine the index for a new item, like:

int index = myList->BinarySearch("abc");
if (index < 0)
  myList->Insert(~index, "abc");

See also: My Algorithms for Sorted Arrays

Comments | #algorithms #c++ #stl #.net #flash Share

# Doing Something Every X Time

Sun 24 Jan 2010

Games are one of those applications where we have a loop spinning infinitely, as fast as it can (or at least 30 times per second), calling some DoFrame function that does all the computations and rendering of a game frame. Dealing with time is an important task in game programming. One of the things game frameworks usually do is retrieve the precise current time from the system and measure FPS. Xion wrote an interesting article about all this stuff some time ago, entitled Pętla czasu rzeczywistego [pl].

By the way, a fact not so well known by beginners is that FPS (frames per second) is not a good measurement of game performance, because it is expressed in Hz, which is the reciprocal of the average frame time and thus nonlinear. The real metric for how much more optimization you have to put into your game should be the average frame time in milliseconds. But that's another topic. I've blogged about this some time ago here [pl]. Others did too, like Humus or the Real-Time Rendering Blog.

The subject I want to write about today is inspired by a recent question from forum.gamedev.pl - Koniec z timerami [pl]. It's about implementing timers in games. The question is how to execute some code - let's say call the DoSomething() function - every EventTime time units, e.g. every 200 ms. Two solutions have been proposed there, by Kos and cybek. The difference between them is subtle but important.

Let's assume we have access to these variables: DeltaTime - the time elapsed during the last frame, and EventTime - the desired interval between DoSomething() calls.

Time may be expressed in seconds, milliseconds or any other unit - it doesn't matter here. The first solution looks like this:

// Defining helper variable
float EventCounter;
// Initialization
EventCounter = 0.f;

// Inside DoFrame, called every frame
EventCounter += DeltaTime;
while (EventCounter >= EventTime)
{
  DoSomething();
  EventCounter -= EventTime; // !
}

And here is the second example:

// Inside DoFrame, called every frame
EventCounter += DeltaTime;
if (EventCounter >= EventTime)
{
  DoSomething();
  EventCounter = 0.f; // !
}

So what's the difference? Method number 1 is precise: DoSomething will always be called every EventTime, plus/minus one frame, and exactly every EventTime on average. On the other hand, method 2 calls DoSomething() no more often than every EventTime, and usually less often - I suppose every EventTime + average_frame_time/2 on average. I hope you can see from the code why that is.

Why does it make a difference? The first case is when you have a stall of, let's say, 500 ms between frames, because you did some very expensive computation like a failed A* pathfinding, or a background process like an antivirus slowed down the system. In the next executed frame, the first code above will "catch up" by executing DoSomething two or three times, but the second code will just execute DoSomething() once and continue as if nothing happened. The second difference shows when you measure how many times DoSomething() was executed over some longer period. The first code will execute it almost exactly period/EventTime +/- 1 times, while the second will produce no more, but probably fewer, calls.
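
If you want to see the drift for yourself, here is a tiny self-contained simulation (my own illustration with made-up numbers, not from the forum thread):

#include <cstdio>

int main()
{
  // Simulate 10 seconds of a steady 30 FPS game, EventTime = 0.25 s.
  const float DeltaTime = 1.f / 30.f, EventTime = 0.25f;
  float c1 = 0.f, c2 = 0.f;
  int calls1 = 0, calls2 = 0;
  for (int frame = 0; frame < 300; ++frame)
  {
    c1 += DeltaTime;
    while (c1 >= EventTime) { ++calls1; c1 -= EventTime; } // method 1
    c2 += DeltaTime;
    if (c2 >= EventTime) { ++calls2; c2 = 0.f; }           // method 2
  }
  // Method 1 fires about 10 / 0.25 = 40 times. Method 2 can only fire
  // every 8th frame (8/30 s is the first multiple of DeltaTime >= 0.25 s),
  // so it comes out around 37.
  printf("method 1: %d, method 2: %d\n", calls1, calls2);
  return 0;
}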

There is another question: how to express the EventCounter variable? This problem is orthogonal to the one described above and makes no difference to the code logic, so it's just your choice. Here are the options: as the time passed since the last DoSomething call...

As a side note, you don't have to tell me that float is not a good type for expressing absolute and precise time in games. I know that and I will write more about it another time. The code above was just an example.

Comments | #algorithms Share

