For this project I have combined a boids flocking simulation with a granular synthesis instrument. The user controls the audio engine indirectly: the cursor acts as the attraction point for the boids system.

The flocking behaviour of birds is considered an interesting example of an emergent process observed in nature. Craig Reynolds presented an approach to modelling this process mathematically, and a number of attempts have been made at combining boids with granular synthesis to achieve complex sound structures.


Craig Reynolds introduced his Boids algorithm in 1987. It is an example of an artificial-life particle system capable of imitating the flocking behaviour of birds. Each boid in the flock reacts to the presence of others within its immediate surroundings, where the surroundings are defined by a distance and an angle. The boids model follows three rules: separation, alignment, and cohesion. Separation: a boid adjusts its direction of movement to avoid colliding with its neighbours. Alignment: a boid steers towards the average direction of movement of its neighbours. Cohesion: a boid moves towards the local centre of mass of its neighbours. These three rules are enough to model the complex emergent behaviour of a bird flock: as a result, the boids keep their distance from one another while following a common direction. More rules can be added to increase the complexity of the model.
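The three rules can be sketched in a few lines of code. The following is a minimal 2D approximation in Python (the actual project is built in Max and uses 3D); the radius, separation distance, and rule weights are assumed values, and Reynolds' angle-of-view limit is omitted for brevity:

```python
import math

RADIUS = 5.0      # neighbourhood distance (assumed value)
SEP_DIST = 1.0    # separation threshold (assumed value)
MAX_SPEED = 1.0   # speed limit (assumed value)

def step(boids, sep_w=0.05, ali_w=0.05, coh_w=0.01):
    """Advance the flock one tick. Each boid is a tuple (x, y, vx, vy)."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        sep_x = sep_y = ali_x = ali_y = coh_x = coh_y = 0.0
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < RADIUS:
                n += 1
                ali_x += ovx; ali_y += ovy   # alignment: sum neighbour headings
                coh_x += ox;  coh_y += oy    # cohesion: sum neighbour positions
                if 0 < d < SEP_DIST:         # separation: push away when too close
                    sep_x += (x - ox) / d
                    sep_y += (y - oy) / d
        if n:
            # steer towards the average heading and the local centre of mass
            vx += sep_w * sep_x + ali_w * (ali_x / n - vx) + coh_w * (coh_x / n - x)
            vy += sep_w * sep_y + ali_w * (ali_y / n - vy) + coh_w * (coh_y / n - y)
        speed = math.hypot(vx, vy)
        if speed > MAX_SPEED:
            vx, vy = vx / speed * MAX_SPEED, vy / speed * MAX_SPEED
        new.append((x + vx, y + vy, vx, vy))
    return new
```

Even this stripped-down version shows the emergent effect: two nearby boids with different headings gradually converge on a common direction.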


I decided to use the Max software to bring my idea to life. I divided the instrument into two separate patches: the first contains the boids model, while the second contains the granular audio engines. The boids are generated according to the set parameters, and I designed the patch so that the boids follow the position of the mouse cursor. The first patch transmits the XYZ coordinates of the boids to the second, where the coordinates of each boid are routed to a separate instance of the granular synthesis engine. Each instance can play up to ten grains at a time, and all engines use the same audio sample. Inside the engines, the coordinates control the following audio parameters:

  • X – sample playback position
  • Y – sample pitch offset
  • Z – sound position within stereo field
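The mapping above amounts to three linear scalings from world coordinates to audio parameters. A Python sketch of the idea follows; the coordinate range, sample length, and output ranges are hypothetical, and the actual Max patch may scale differently:

```python
def scale(v, lo, hi, out_lo, out_hi):
    """Linearly map v from the range [lo, hi] to [out_lo, out_hi]."""
    t = (v - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

def boid_to_grain(x, y, z, world=100.0, sample_len_s=4.0):
    """Turn one boid's XYZ position into granular-engine parameters."""
    return {
        # X: playback position within the sample, in seconds
        "position_s": scale(x, 0.0, world, 0.0, sample_len_s),
        # Y: pitch offset in semitones around the original pitch
        "pitch_st": scale(y, 0.0, world, -12.0, 12.0),
        # Z: position within the stereo field, -1 (left) to +1 (right)
        "pan": scale(z, 0.0, world, -1.0, 1.0),
    }
```

Because each boid feeds its own engine instance, the flock as a whole sweeps a cloud of grains through the sample while the cursor drags its centre around.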

Above all, my goal was to achieve more 'musically pleasing' results rather than just a cloud of noise, so I came up with an idea for how to get there: I added an option to quantize the sample pitch offset. When the pitch offset falls between notes of a set scale, it glides towards the closest note of that scale. All granular engines share globally settable grain size, density, start position variation, playback speed, and pitch variation.
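The quantizer behaviour described above can be sketched as two small functions: one finds the nearest note of the scale, the other glides towards it instead of snapping. This is a hypothetical implementation, assuming a major scale and a simple per-tick linear glide:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees in semitones (assumed C major)

def nearest_scale_note(semitones, scale=MAJOR):
    """Return the scale note (in semitones) closest to the given pitch offset."""
    octave = round(semitones) // 12
    candidates = [octave * 12 + d for d in scale]
    # also consider the closest notes in the adjacent octaves
    candidates += [(octave - 1) * 12 + scale[-1], (octave + 1) * 12 + scale[0]]
    return min(candidates, key=lambda n: abs(n - semitones))

def glide(current, target, amount=0.2):
    """Move a fraction of the way towards the target on each control tick."""
    return current + amount * (target - current)
```

Calling `glide(offset, nearest_scale_note(offset))` on every control tick gives the gradual pull towards in-scale pitches, rather than a hard snap, which is what keeps the result sounding musical as the boids drift.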

The result

The pitch quantizer is active in the first half of the video demo.

I am satisfied with the result of this project. The addition of the quantizer has improved the quality of the musical output. The choice of audio sample strongly shapes the results: a simple monophonic sample is easier to work with than a polyphonic sample or one containing layers of sound sources.