Sound Reactive Circle 1

I’ve decided to start releasing some of the stuff I’ve been working on lately, building up towards my show. I have so many experiments sitting on my hard drive that never see the light of day.

So here we have Sound Reactive Circle 1.



I wanted it to be able to play in the browser, but for some reason couldn’t get it to work, even though I signed the applet. Next time….

A video is here. For more of my work see the radarboy Flickr page.

Here’s the code. Built in Processing. It includes Mac, PC and Linux applications.

Radarboy Reactive




These screengrabs are part of a vast pool of sound reactive stuff I’ve been working on over the past year, which is finally getting into a presentable form.

After a long hiatus, I completely rebuilt the old award-winning RBVJ in Processing – but it wasn’t as simple as I anticipated, and it’s still not 100% stable. Once it is I will release the code.

I’m not a programmer, I’m a designer/artist, whatever. However I do enjoy quite a few aspects of programmatic design – the fact that code can lead you to unexpected places and take on a life of its own can really surprise you.

I am really most inspired by the visuals of Raster-Noton and Ali Demirel, which are pretty close to my own work in both style and substance. I’ve also been thinking plenty about space and I think, along with minimalism, these themes play out a lot in my work. I think the graphics also reflect the way my music production is moving.

I have always believed that club graphics should be simple – there is too much noise in the world already – and the simplicity of the graphics allows us to go with the music and find meaning on our own, rather than having a visual mishmash shoved down our throats. Not that these kinds of graphics should necessarily be shown only in a club context, but that’s a whole other conversation.

The plan is to combine the visuals with my live music/DJ sets – eventually controlling both light and sound through Ableton/OSC/Max.

Soon on a wall/dancefloor in Berlin. And beyond.

Project Links:
Gallery: Here’s the first of a three-part set of screengrabs from my work:
Video: (Coming Soon – next week hopefully)

Some technical stuff:
I’ve decided to totally ditch Flash – it is just too verbose, annoying and slow. I actually considered moving completely over to openFrameworks for speed, but eventually decided Processing was the best bet, given the strides being made in Processing.js – the JavaScript port of Processing – the recent launch of Processing for Android, and news that Processing 2 will be even better friends with OpenGL. openFrameworks’ inability to publish on the web was also a clincher. Anyway, I digress.

Magnetic Ink

One of my favourite creative coders Robert Hodgin aka Flight 404 created this beautiful sound reactive piece in Processing.

Read more about it here: and some great screenshots here:

Daniel Rozin’s Weave Mirror



Lots of other good stuff on his site.

Processing Blinds


Another oldish sketch I dug up. The idea was to project it onto a store window, and then a passerby would need to pull open the blinds to see in the store.

It used to be quite processor heavy, so I reworked the motion detection code to try and speed it up. By pre-calculating the grid positions on the screen I managed to achieve a significant speed increase.
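Stripped of the video parts, the speed-up is just a lookup table built once instead of per-frame multiplication. Here’s a minimal standalone sketch of the idea in plain Java (the 640×480 canvas size is my assumption; the 40×30 grid matches the sketch’s constants):

```java
// Build the block-coordinate lookup tables once, instead of
// recomputing x*(width/40) and y*(height/30) on every frame.
public class GridPrecalc {
    static final int W = 640, H = 480;      // assumed canvas size
    static final int COLS = 40, ROWS = 30;  // grid resolution from the sketch

    static int[] cX = new int[COLS * ROWS];
    static int[] cY = new int[COLS * ROWS];
    static int totalBlocks = 0;

    static void precalc() {
        totalBlocks = 0;
        for (int y = 0; y < ROWS - 1; y++) {
            for (int x = 0; x < COLS - 1; x++) {
                cX[totalBlocks] = x * (W / COLS);
                cY[totalBlocks] = y * (H / ROWS);
                totalBlocks++;
            }
        }
    }

    public static void main(String[] args) {
        precalc();
        // per-frame code now just reads cX[i]/cY[i] for each block
        System.out.println(totalBlocks + " blocks precalculated");
    }
}
```

The per-frame loop then only does array reads, which is what buys the speed.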

UPDATE: Just noticed some other places where the code can be sped up. Will update the code when I get time.

View the sketch here.

Here’s the motion detection part of the code:

Capture myCapture;
color newPixel;
color[] prevFrame;
float speed = 10;
float sensitivity = 0.64;
float targetX, targetY, myX, myY, avX, avY;
int cxLoc, cyLoc;
int counter = 0;
int totalBlocks = 0;
int[] cX = new int[5000];
int[] cY = new int[5000];

// pre-calculate the grid positions so they aren't recomputed every frame
void precalc() {
  for (int y = 0; y < 30-1; y++) {
    for (int x = 0; x < 40-1; x++) {
      cX[counter] = x*(width/40);
      cY[counter] = y*(height/30);
      counter++;
    }
  }
  totalBlocks = counter;
}

void initVid() {
  String s = "IIDC FireWire Video";
  myCapture = new Capture(this, width, height, s, 4);
  prevFrame = new color[width*height];
}

void captureEvent(Capture myCapture) {
  myCapture.read();
}

void motion() {
  avX = 0;
  avY = 0;
  counter = 0;
  // average the positions of all blocks that registered motion
  for (int i = 0; i < totalBlocks; i++) {
    if (motionTest(i, width/40, height/30) == true) {
      avX += cX[i];
      avY += cY[i];
      counter++;
    }
  }
  if (avY <= 0) avY = myY;
  if (avX <= 0) avX = targetX;
  if (avY > 0 && avX > 0 && counter > 0) {
    targetX = avX/counter;
    targetY = avY/counter;
  }
  // ease towards the centre of the detected motion
  myY += (targetY-myY)/speed;
  myX += (targetX-myX)/speed;
}

// does block j (tw x th pixels) differ enough from the previous frame?
boolean motionTest(int j, int tw, int th) {
  int dc = 0; // counter to track number of differences
  // ave. out a square: compare each pixel against the previous frame
  for (int y = 0; y < th; y++) {
    for (int x = 0; x < tw; x++) {
      int srcPos = (cY[j]+y)*width + (cX[j]+x);
      newPixel = myCapture.pixels[srcPos];
      if (abs(brightness(newPixel)-brightness(prevFrame[srcPos])) > 25) dc++;
      prevFrame[srcPos] = newPixel; // update prevFrame w/ current pixel clr
    }
  }
  if (dc > (sensitivity*(tw*th))) {
    return true;
  } else {
    return false;
  }
}

Reactive Video: Peter Saville Tribute


Dug up this old sketch I did – the Peter Saville Tribute, and added sound reactiveness.

Press ‘s’ to turn on sound reactive response.
Use up and down arrow keys to increase or decrease the grid.
Drag with mouse to change perspective.

View here. or download the application (mac/pc/linux).

Ascii Video


Here’s an old experiment I dug up on how to convert a video feed into ASCII.

Press keys 1-7 to get different variations.
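The core of the effect is just a brightness-to-character lookup: sample the video and map each pixel’s (or block’s) brightness to a character from a ramp ordered sparse to dense. A minimal sketch of that mapping in plain Java – the ramp string and the 0–255 brightness range are my assumptions, not necessarily what the applet actually uses:

```java
// Map a brightness value to a "density" character, the heart of
// any video-to-ASCII effect.
public class AsciiMap {
    // Character ramp from darkest (space) to densest; any ordered ramp works.
    static final String RAMP = " .:-=+*#%@";

    // Map a 0-255 brightness value to a character from the ramp.
    static char toAscii(int brightness) {
        int idx = brightness * (RAMP.length() - 1) / 255;
        return RAMP.charAt(idx);
    }

    public static void main(String[] args) {
        // dark, mid and bright samples
        System.out.println("0   -> '" + toAscii(0) + "'");
        System.out.println("128 -> '" + toAscii(128) + "'");
        System.out.println("255 -> '" + toAscii(255) + "'");
    }
}
```

In the sketch itself you’d run this per block over the capture’s pixel array and draw each character with text().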


Been mucking about with Processing-based typography, as part of another idea I am working on. Will update the applet to a more interactive one when I get the time.

Here’s the first result: Vines.


The code is based on Joe Gilman's Circle Crawlers.
It's pretty messy, and uncommented, but here it is: vines.pde bug.pde brightspark.pde

Goodwords – Part 1

I’ve pretty much given up on watching the news or reading newspapers. Yea, yea, we all know that the news is slanted to the bad shit that is going on.

But I’ve been wondering, if I do consume mass media, who I should turn to for a more positive slant.

Enter the goodwords project. The idea is simple: I have a list of positive words, I scrape various news sources, and I see who the happy chappies are and who the grumpies are.

I’ve been dabbling with this idea on and off for quite a while, and now finally have a bit of data to start fiddling with. I was initially tracking a number of international sources, but lost the data (long boring story). And since I’ve been summering in South Africa and people are so news conscious here, I decided to start here. I’m actually scraping a few times a day, but these initial results are based on midnight editions.

The results are based on the percentage of good words vs the number of total words on the page. And yea, I know words in context can mean different things, but then this was never meant to be scientific. (And I was tracking other papers – such as the Independent Newspaper Group’s papers – but they changed something on their site and my scrapes have stopped working.)
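The counting itself is trivial once the page text is scraped. A minimal sketch in plain Java, with a short illustrative word list – the real list is longer, and the real scraping and storage live in PHP/SQL:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Compute the goodwords metric: percentage of "good" words among
// all words on a page.
public class GoodWords {
    // Illustrative word list only; the real list is much longer.
    static final Set<String> GOOD = new HashSet<>(
        Arrays.asList("win", "hope", "growth", "peace", "success"));

    // Percentage of good words vs total words, using simple
    // whitespace tokenisation (an assumption on my part).
    static double goodWordPercent(String pageText) {
        String[] words = pageText.toLowerCase().split("\\s+");
        int hits = 0;
        for (String w : words) {
            if (GOOD.contains(w)) hits++;
        }
        return 100.0 * hits / words.length;
    }

    public static void main(String[] args) {
        // 2 of 6 words match the list
        System.out.println(goodWordPercent("hope of growth amid the crisis"));
    }
}
```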

Enough you say, let’s see the results.


Firstly, note the results are measured in percentage of goodwords on a page and reflect only two months of tracking so far.

Well, it seems that if I did want to read the news, I should stick to the Times and avoid the business newspapers – especially the Financial Mail (kinda expected). However, the Financial Mail also has the least words on the page. Google News South Africa has the most (and has Google News recently become a happier place?). The Mail and Guardian, which used to be the paper I respected the most, has kinda become a bit of a naysayer these days – and the results seem to reflect that.

Here are the top words from all the tracked papers, which probably proves I need to adjust my word list.


I’ve started tracking a number of international newspapers, but it’s too early to have interesting results.

Technical notes:
I am using PHP to pull the data via a cron job into a SQL database, and Processing to draw the graphs, with Florian Jenett's SQLibrary pulling the database stuff into Processing. The code is not so exciting, and kind of messy, but I will keep releasing it anyway. I am generating the source_ids manually simply because I haven't got round to implementing that yet.

Radarboy Reaktiv at Apple Store Tokyo


Presentation of experimental motion- and sound-reactive work at the Apple Store in Ginza, Tokyo.

3D Pixels

An oldish Processing reactive video project that displays a person’s image as a set of 3D pixels.

You need a webcam to view. Instructions: stand back from the camera, move around, move the slider to adjust block size, turn your video on and off, and click on blocks for other presets.

View here.

The code can be found here: boxes_3d.pde boxes3d3.pde motion.pde scrollbar.pde

Publishing a Processing Webcam Applet on the web

I had a bit of trouble working out how to get a video stream into a Processing applet, so I thought I’d give a quick rundown here in case someone else needs to know. The information is adapted from the Processing hacks page on how to sign an applet.

Export your sketch as an applet.
Open a terminal program and navigate to the applet directory where your applet is located (a subdirectory of where your sketch is located). The easiest way to do this is type “cd ” in terminal, then just drag the applet folder onto terminal, and it will automatically insert the path name.

Then type the following in terminal. But most importantly:
– replace xxx with the name of your .jar file, and repeat the jarsigner step for ALL the .jar files, i.e. your applet’s .jar, core.jar and video.jar – this was the confusing part that’s not explained properly on the hacks site.
– the password you should enter when prompted is ‘Password’

keytool -genkey -keystore pKeyStore -alias radarboy
keytool -selfcert -keystore pKeyStore -alias radarboy
jarsigner -keystore pKeyStore xxx.jar radarboy

Upload your applet and voila, you have a self-signed, working web applet.