Dec 08

Lifeforms in nature need to adapt. Without adaptation, the survival of a species in a changing environment would be impossible. Animals and plants are good examples. The same evolutionary principles that help species in nature can be applied in a simpler and faster manner on today's computers:

  • Better adapted individuals survive
  • Genes are passed to the children
  • Mutations sometimes improve a species

Genes are encoded as strings of 'commands' in the DNA. This can be done with computer algorithms as well. In this example we will use an artificial ant.

This ant does not have a brain; it just follows simple commands. These commands are R for "turn right" and L for "turn left". If the place the ant stands on has already been visited, the ant turns in the opposite direction. This could be a gene code for such an ant's path: LRLRLRLLRLLLRLRLRLLRLLLRRRLRL

Once the last command has been executed, the whole command list repeats, resulting in an endless movement pattern. Since there are only two commands, the gene can be represented in binary form: 01010100100010101001000111010
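As a minimal sketch, such an ant could be simulated like this. Details the post leaves open are assumptions here: a 16x16 world that wraps around at the edges, and the ant moving one cell forward after every turn.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Ant {
    int x = 0, y = 0;
    int dir = 1;                 // 0 = north, 1 = east, 2 = south, 3 = west
    std::string genes;           // command string, e.g. "LRLR..."
};

// Executes `steps` commands (the gene string repeats endlessly) and returns
// the number of distinct cells visited -- the ant's fitness.
int runAnt(Ant a, int steps, int w = 16, int h = 16) {
    static const int dx[4] = {0, 1, 0, -1};
    static const int dy[4] = {-1, 0, 1, 0};
    std::vector<int> visits(w * h, 0);
    visits[a.y * w + a.x] = 1;
    int distinct = 1;
    for (int s = 0; s < steps; ++s) {
        bool right = (a.genes[s % a.genes.size()] == 'R');
        if (visits[a.y * w + a.x] > 1) right = !right;  // revisited cell: flip the turn
        a.dir = (a.dir + (right ? 1 : 3)) % 4;          // turn right or left
        a.x = (a.x + dx[a.dir] + w) % w;                // move forward, wrap at borders
        a.y = (a.y + dy[a.dir] + h) % h;
        if (visits[a.y * w + a.x]++ == 0) ++distinct;
    }
    return distinct;
}
```

The fitness value is exactly what the environment reports later: how many places the ant managed to visit.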

A single ant can't reproduce, and no evolutionary process can emerge from it. We need a whole ant population for that. Since genetic algorithms are much simpler than nature, 50 individuals will do for this example. They all start with random genes. Second, we need an environment that tells us whether an ant is doing well in its simple artificial life. In this example we have a small two-dimensional world, and the goal for the ant is to visit as many places as possible. For this small example we can assume that each ant lives for a fixed amount of time.

Algorithm

All ants start with an unvisited world. After a fixed amount of time, 1000 steps, all ants die. The environment can tell us which ants have visited more places than others. The best ants are allowed to reproduce, while the other ants won't pass their genetic code to the next generation. The binary code of the well-performing ants is combined, and mutations (bit flips) occur. Example:

Mother  01010100100010101001000111010
Father  10100011101010010111011000101

Child A 01011100100010010111011000101
Child B 01010100101010010011011000101

These genetic codes show which genes are passed to the next generation and where mutations occur. The algorithm then starts again with the next generation of ants. It can be stopped once no big improvement is achieved and the genetic code more or less stabilizes. In my example I tried to visualize the performance of the ants: the path of an ant is shown in blue, and places that are visited more often turn yellow.
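The reproduction step could be sketched like this. The post doesn't fix the crossover scheme or the mutation rate, so a single crossover point and a 1% bit-flip probability are assumptions here:

```cpp
#include <cassert>
#include <random>
#include <string>

// Combines two parent bit strings at a random crossover point, then flips
// each bit of the child with probability `mutationRate`.
std::string reproduce(const std::string& mother, const std::string& father,
                      std::mt19937& rng, double mutationRate = 0.01) {
    std::uniform_int_distribution<size_t> cut(0, mother.size());
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    size_t c = cut(rng);
    std::string child = mother.substr(0, c) + father.substr(c);
    for (char& bit : child)
        if (coin(rng) < mutationRate)
            bit = (bit == '0') ? '1' : '0';   // bit-flip mutation
    return child;
}
```

Running this twice on the same parents yields two different children, like Child A and Child B above.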

The optimal path would be an equal distribution of blue lines without many yellow or black areas.

2012-12/ga_001.jpg

This shows the progression of the algorithm on YouTube:

Nov 30

I wanted a background mood light like Philips offers on their TV sets (Ambilight), except I wanted it for my PC and under my own control. Since I had an Arduino on hand, I ordered the additional parts needed to create an Arduino-controlled single RGB LED light.

Background light of my screen

After completing this first concept, I thought about shrinking the project to fit on an ATtiny45 and adding a USB cable. Using MOSFETs (N-FET 50V, BUZ71A) and reading up on pulse width modulation (PWM) and V-USB, I first got everything working on the Arduino. I then used the Arduino itself as a programmer for the ATtiny45.
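The LED brightness itself comes from PWM: each color channel is switched on for a fraction of every counter period. The project drives the FET gates from an AVR timer; the following is only a hardware-independent sketch of the underlying duty-cycle logic, with an 8-bit counter assumed:

```cpp
#include <cassert>
#include <cstdint>

// A channel is on for the first `duty` ticks of each 256-tick period, so the
// perceived brightness is duty/256 of full power.
inline bool channelOn(uint8_t counter, uint8_t duty) {
    return counter < duty;
}

// Counts the on-ticks over one full period -- should equal `duty`.
int onTicksPerPeriod(uint8_t duty) {
    int on = 0;
    for (int t = 0; t < 256; ++t)
        if (channelOn(static_cast<uint8_t>(t), duty)) ++on;
    return on;
}
```

With three such channels (red, green, blue) switched by the FETs, any mixed color can be produced.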

"Circuit schema for attiny45 led control" description

The 3 blocks with 3 contacts at the bottom are the FETs.

This project now sits behind my main screen and is controlled by a taskbar application I wrote in C#. The ATtiny45 firmware is written in C with AVR libraries, the controller in plain C (using http://libusb.sourceforge.net/), and the GUI (taskbar icon) in C#.

Attiny on final board

No casing yet. No problem, since it's hidden behind my desk.

Sep 07

Swarm intelligence is a fascinating phenomenon that appears in many places in nature, for example in insects like ants or bees. Ants find their food with the help of others in the swarm; working together gives them an advantage over single individuals. And still, there is no single ant that knows every path exactly!

Some time ago, I wrote a little piece of software with a few simple rules for each individual. The swarming behavior was not programmed; it emerged from these simple rules:

  • Move towards your friends
  • Move like your friends
  • Keep a little distance to your friends

This is what you get:

On the next screenshot you can see the units (white circle), their viewing radius (blue circle) and their minimal distance to other units (red circle).

And this is how it looks when the simulation is running:

After that, I added additional objects and mechanisms:

  • Food: Blue squares are food, bigger squares contain more food than small ones
  • Movement of units costs energy and units can die
  • Additional rule: Units are drawn towards food
  • Visualisation: Bigger and more intensely filled circles symbolize older units
  • Visualisation: Show the energy of units. Red means weak, green means healthy.

Here is a screenshot again:

And a video of it:

 

More detailed tutorial

Step 1: Creation of the basis

We need an area where units can move, so the maximum and minimum coordinates need to be fixed, and rules should define what happens if units leave this area. After defining the area, units need variables like position, direction and speed, and each unit needs the ability to change its direction. There needs to be a simulation loop which applies the rules to each unit and updates its position. Additionally, we need some library to draw the units on the screen.
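This basis could look roughly like the following sketch. The field size and the wrap-around border behavior are assumptions; the original may clamp or bounce instead:

```cpp
#include <cassert>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

struct Unit {
    Vec2 pos;
    Vec2 dir;          // unit-length direction of movement
    float speed = 1.0f;
};

// One simulation tick: apply the rules (step 2) to set each unit's
// direction, then move it and wrap at the area borders.
void simulate(std::vector<Unit>& units, float width, float height) {
    for (Unit& u : units) {
        // ... apply the three rules here to update u.dir ...
        u.pos.x += u.dir.x * u.speed;
        u.pos.y += u.dir.y * u.speed;
        if (u.pos.x < 0) u.pos.x += width;    // wrap around the area
        if (u.pos.x >= width) u.pos.x -= width;
        if (u.pos.y < 0) u.pos.y += height;
        if (u.pos.y >= height) u.pos.y -= height;
    }
}
```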


Step 2: Creating the ruleset

Rule 1: Search for your friends

Every unit has a visibility range; a circle is a good start. For every unit we need to find out which other units it can see. After that, we average the positions of these visible units and store a vector pointing towards that average point.
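Rule 1 as a sketch, with a circular visibility test (names and signature are assumptions):

```cpp
#include <cassert>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// Averages the positions of all units within `viewRadius` of `self` and
// returns a vector from `self` towards that average point.
Vec2 cohesion(const Vec2& self, const std::vector<Vec2>& others, float viewRadius) {
    Vec2 sum; int seen = 0;
    for (const Vec2& o : others) {
        float dx = o.x - self.x, dy = o.y - self.y;
        if (dx * dx + dy * dy <= viewRadius * viewRadius && (dx != 0 || dy != 0)) {
            sum.x += o.x; sum.y += o.y; ++seen;
        }
    }
    if (seen == 0) return {0, 0};            // nobody visible: no pull at all
    return {sum.x / seen - self.x, sum.y / seen - self.y};
}
```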

Rule 2: Move the same way as your friends

Use the same mechanism as in rule 1 to get the visible units, but instead of averaging positions, calculate the average moving direction. This is again stored as a vector.
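Rule 2 as a sketch; here `others` is assumed to already exclude the unit itself:

```cpp
#include <cassert>
#include <vector>

struct Vec2 { float x = 0, y = 0; };
struct Unit { Vec2 pos, dir; };

// Averages the movement direction of all units within the viewing radius.
Vec2 alignment(const Unit& self, const std::vector<Unit>& others, float viewRadius) {
    Vec2 sum; int seen = 0;
    for (const Unit& o : others) {
        float dx = o.pos.x - self.pos.x, dy = o.pos.y - self.pos.y;
        if (dx * dx + dy * dy <= viewRadius * viewRadius) {
            sum.x += o.dir.x; sum.y += o.dir.y; ++seen;
        }
    }
    if (seen == 0) return {0, 0};
    return {sum.x / seen, sum.y / seen};     // average heading of the neighbours
}
```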

Rule 3: Keep a small distance to your friends

This is essentially rule 1 with two minor changes: the viewing distance is smaller, and the vector pointing to the center of these very near units is rotated by 180 degrees.
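Rule 3 as a sketch. Rotating a vector by 180 degrees is simply a sign flip on both components:

```cpp
#include <cassert>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// Like rule 1, but with the smaller "minimal distance" radius and the
// resulting vector negated, pushing the unit away from its near neighbours.
Vec2 separation(const Vec2& self, const std::vector<Vec2>& others, float minDistance) {
    Vec2 sum; int seen = 0;
    for (const Vec2& o : others) {
        float dx = o.x - self.x, dy = o.y - self.y;
        if (dx * dx + dy * dy <= minDistance * minDistance && (dx != 0 || dy != 0)) {
            sum.x += o.x; sum.y += o.y; ++seen;
        }
    }
    if (seen == 0) return {0, 0};
    return {self.x - sum.x / seen, self.y - sum.y / seen};  // negated rule-1 vector
}
```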

Step 3: Apply the rules

We now have 3 vectors from the 3 rules. We weight and add them; the resulting vector is then normalized and stored as the direction of movement for that unit. The weightings of the vectors can be tuned. Play with them! I weighted rule 3 the highest, since I wanted to avoid collisions, and rule 2 second highest, because I wanted movement and not just a pile of units sitting together.
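The combination step as a sketch. The weight ordering mirrors the text (separation highest, alignment second); the exact values are assumptions to play with:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

// Weights and adds the three rule vectors, then normalizes the sum into
// the unit's new movement direction.
Vec2 combine(Vec2 cohesion, Vec2 alignment, Vec2 separation,
             float wCoh = 1.0f, float wAli = 2.0f, float wSep = 3.0f) {
    Vec2 v{cohesion.x * wCoh + alignment.x * wAli + separation.x * wSep,
           cohesion.y * wCoh + alignment.y * wAli + separation.y * wSep};
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len == 0) return {0, 0};             // the rules cancelled each other out
    return {v.x / len, v.y / len};
}
```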

Step 4: Movement

The calculated direction vector needs to be applied in the simulation update to move the units. And you are done :-)

 

Hunter & Prey

A while later, I added a second type of swarm. It was a copy of the first one, but instead of the food squares, it eats the other swarm: this is the hunter swarm. The first swarm got an additional rule: avoid the hunter type. Some interesting effects emerged from this addition. Sometimes the hunters eat all the prey and then starve because there is no food left. Sometimes the hunters don't catch any prey and all die; then the prey population gets a boost. I have not yet achieved a stable population :-) Again, here is a screenshot:

And a video on YouTube:

Have fun building your own swarms!

Feb 08

I added a new module to the SP Game Framework:

A profiling system that supports multiple threads and hierarchical structures. The profiling results can be watched in real time on screen. Here is a screenshot from my current proof of concept for Solar Tactica:

2011-02/hprof_threads.png

To profile your application, you add macro code which generates a scoped variable. Time is measured from the construction of this variable until its destruction, so you can easily measure chunks of code with scoping brackets {}.

If your application is ready to be released, just define SP_RELEASE_VERSION and the macro code will not end up in your application. No performance impact!
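A minimal sketch of this idea (not the actual SP Game Framework code): the macro creates a local RAII object that measures the time from its construction to its destruction at the end of the scope, and defining SP_RELEASE_VERSION compiles it away entirely. All names here are made up for illustration:

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>

// RAII probe: starts a clock on construction, reports on destruction.
struct ScopedProbe {
    const char* name;
    std::chrono::steady_clock::time_point start;
    static long long lastUs;                 // last measured duration, for inspection
    explicit ScopedProbe(const char* n)
        : name(n), start(std::chrono::steady_clock::now()) {}
    ~ScopedProbe() {
        lastUs = std::chrono::duration_cast<std::chrono::microseconds>(
                     std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld us\n", name, lastUs);
    }
};
long long ScopedProbe::lastUs = 0;

// Two-level concatenation so __LINE__ expands before pasting.
#define SP_CONCAT2(a, b) a##b
#define SP_CONCAT(a, b) SP_CONCAT2(a, b)
#ifdef SP_RELEASE_VERSION
#define SP_PROFILE(name)                     // release build: compiles to nothing
#else
#define SP_PROFILE(name) ScopedProbe SP_CONCAT(spProbe_, __LINE__)(name)
#endif
```

Usage: wrap any chunk of code in braces and drop `SP_PROFILE("update");` as the first statement.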

Have fun.

Oct 23

I worked on a 'virtual eyeball' to create images that somewhat model the signals going to the brain.

Since the eye has more receptors in its center, the center is sharper and more detailed. The color channels (gray tones were eliminated) have half the resolution, since the human eye has fewer receptors for color than for brightness.

I used a barrel distortion formula, which I altered a bit to keep the center sharp while the edges are heavily bent.
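The exact altered formula is the author's; as a sketch, the common radial barrel model r' = r * (1 + k*r^2) already has the described behavior. With coordinates normalized to [-1, 1] around the image center, points near the center barely move while the displacement towards the edges grows cubically:

```cpp
#include <cassert>

// Maps a normalized image coordinate (x, y) to its distorted position.
// k controls the distortion strength.
void distort(float x, float y, float k, float& outX, float& outY) {
    float r2 = x * x + y * y;
    float scale = 1.0f + k * r2;   // ~1 near the center, grows towards the edges
    outX = x * scale;
    outY = y * scale;
}
```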

First screenshot of the eyeball system:

The original microscopy image was made by Norman Reppingen in 2007 (found via Google: http://www.mikroskopie.de/mikforum/read.php?2,30983,30983#msg-30983).

Aug 27

I wanted to use OpenGL lines for lasers.

But how to make bigger lasers? How to make cool looking lasers?

2010-08/better_lines2.png

One way is to call glLineWidth() to make the line thicker. But there are 3 downsides: not every graphics card supports it, the ends of the line look 'cut off', and it looks very 'flat'.
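A common alternative (not necessarily what this post ends up using) is to expand the segment into a quad of the desired width and render that with a glow texture. The quad corners come from offsetting both endpoints along the segment's normal:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Fills quad[0..3] with the corners of a `width`-wide quad along a -> b.
// Assumes a != b, so the segment has a well-defined direction.
void lineToQuad(Vec2 a, Vec2 b, float width, Vec2 quad[4]) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    float nx = -dy / len * (width * 0.5f);   // unit normal scaled to half width
    float ny =  dx / len * (width * 0.5f);
    quad[0] = {a.x + nx, a.y + ny};
    quad[1] = {a.x - nx, a.y - ny};
    quad[2] = {b.x - nx, b.y - ny};
    quad[3] = {b.x + nx, b.y + ny};
}
```

This avoids all three downsides: it works on any card, the ends can be capped by the texture, and the texture gives the laser its glow.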

Aug 24

I wrote a more detailed post on the Ludum Dare site:

http://www.ludumdare.com/compo/2010/08/23/summary-my-first-ld/

2010-08/screen_1.jpg

Here are the download locations for the Windows version and the source code:

download:
http://games.spunkmeyer.de/downloads/Trust_Your_Enemy.zip
source:
http://games.spunkmeyer.de/downloads/Trust_Your_Enemy_Sources.zip

I will polish the game, get some music and fix all the bugs a few weeks from now, since there are exams to be written first... :-)

Aug 22

It's LD time! I have managed to get through roughly half of my core features.

See more details on the LD site:

http://www.ludumdare.com/compo/2010/08/22/progress-kind-of/