Projection Mapped Audio Visualization
by MrShambles
I'm throwing a party, and since it's meant to be a bit of an AV experience, I'm trying to design and add as many AV elements as possible. This is one of them - an audio visualization using 3D projection mapping. The idea is pretty simple: arrange a set of boxes, where each box represents a band of the audio spectrum. When a bass note plays, the bass box lights up, and the same goes for the rest of the frequencies.
So the route I've chosen is as follows:
Sound source (my laptop) > sound analysis (Processing) > broadcast sound analysis data (Processing & OSC) > receive data and projection map accordingly (VPT)
I hope that doesn't sound too confusing. There are easier ways, but this allows for quite a large degree of flexibility and loads of room to add on other cool bits.
Shall we get stuck in, then?
Bill of Materials
Hardware:
- A laptop (a desktop is possible, but portability helps...)
- A projector (size and power is up to you)
- Boxes or something you want to project onto*
- White (spray) paint
- Microphone (optional)
Software:
- Processing (as well as the controlP5, netP5 and oscP5 libraries; Minim, which is also used, comes bundled with Processing)
- VPT (great free projection mapping tool. I'm using version 7. Link: http://hcgilje.wordpress.com/vpt/ )
- Something to play your tunes (iTunes?)
*Note on the boxes: I went down to my local supermarket loading depot and found a huge selection of boxes for free. Give it a go and be a bit more green :)
Audio Meets Processing (then OSC)
Now that you have an audio source, we need to dive into the code:
/*
3D projection mapping with VPT via OSC
Nic Shackle
Falls under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
April 2014
*/
// OSC and GUI libraries
import oscP5.*;
import netP5.*;
import controlP5.*;

// Minim (bundled with Processing) for the line-in capture and FFT
import ddf.minim.*;
import ddf.minim.analysis.*;

ControlP5 cp5;
Knob gain;
int gainVal; // overall gain, driven by the "gainVal" knob in the GUI

OscP5 oscP5;
NetAddress myRemoteLocation; // where the OSC messages are sent (VPT, on this machine)

Minim minim;
AudioInput jingle; // the line-in / microphone input
FFT fft;
void setup()
{
size(450, 300);
frameRate(60);
frame.setResizable(true); // Processing 2 syntax; in Processing 3+ this would be surface.setResizable(true)
cp5 = new ControlP5(this);
placeButtons();
oscP5 = new OscP5(this,6666); // listen for incoming OSC on port 6666
myRemoteLocation = new NetAddress("127.0.0.1",6666); // send OSC to VPT on this machine, port 6666
minim = new Minim(this);
jingle = minim.getLineIn(); // grab the default line-in / microphone
fft = new FFT( jingle.bufferSize(), jingle.sampleRate() );
fft.logAverages(86, 1); // average the spectrum into ~9 logarithmically spaced bands (one per octave from 86 Hz)
}
int scale = 2; //change for overall sensitivity (note: not actually used anywhere in this version)
boolean FFTon=false;
String viewOSC; //for showing OSC stream. used in OSC tab.
void draw()
{
//background(0);
fill(0,40); // translucent black overlay so old frames fade out rather than being cleared abruptly
noStroke();
rect(0,0,width,height);
fill(50);
rect(0,0,width,47); // header bar
textSize(30);
fill(200);
textAlign(LEFT);
text("OSC_panel",10,35);
textSize(10);
text("Nic Shackle",170,35);
// perform a forward FFT on the samples in the buffer
fft.forward( jingle.mix );
if(FFTon)analyseAndSend(); //if toggled, broadcast FFT values via OSC
}
void placeButtons(){
gain = cp5.addKnob("gainVal")
.setRange(0,50)
.setValue(1)
.setPosition(300,70)
.setRadius(50)
.setDragDirection(Knob.VERTICAL)
;
cp5.addButton("Toggle_FFT_broadcast")
.setValue(0)
.setPosition(50,70)
.setSize(200,19)
;
}
// No explicit handler is needed for the knob: controlP5 automatically plugs the
// controller named "gainVal" into the gainVal field declared at the top.
public void controlEvent(ControlEvent theEvent) {
println(theEvent.getController().getName());
}
public void Toggle_FFT_broadcast(int theValue) {
FFTon= !FFTon;
}
void send(String path,float val){
OscMessage myMessage = new OscMessage(path);
myMessage.add(val);
/* send the message */
oscP5.send(myMessage, myRemoteLocation);
// println(myMessage + " Sent");
viewOSC="OSC stream: "+myMessage;
}
void analyseAndSend(){
//the following "sends" are if you're using multi-sided objects that require three faces to show the same thing
// //Three faces of the "band 1" box
// send("/" + str(1) + "layer/fade",fft.getAvg(1)/100*gainVal);
// send("/" + str(2) + "layer/fade",fft.getAvg(1)/100*gainVal);
// send("/" + str(3) + "layer/fade",fft.getAvg(1)/100*gainVal);
//
// //Three faces of band 2 box
// send("/" + str(4) + "layer/fade",fft.getAvg(2)*2/100*gainVal);
// send("/" + str(5) + "layer/fade",fft.getAvg(2)*2/100*gainVal);
// send("/" + str(6) + "layer/fade",fft.getAvg(2)*2/100*gainVal);
//
// //Three faces of band 3 box
// send("/" + str(7) + "layer/fade",fft.getAvg(3)*3/100*gainVal);
// send("/" + str(8) + "layer/fade",fft.getAvg(3)*3/100*gainVal);
// send("/" + str(9) + "layer/fade",fft.getAvg(3)*3/100*gainVal);
//
// //Three faces of band 4 box
// send("/" + str(10) + "layer/fade",fft.getAvg(4)*4/100*gainVal);
// send("/" + str(11) + "layer/fade",fft.getAvg(4)*4/100*gainVal);
// send("/" + str(12) + "layer/fade",fft.getAvg(4)*4/100*gainVal);
//
// //Three faces of band 5 box
// send("/" + str(13) + "layer/fade",fft.getAvg(5)*5/100*gainVal);
// send("/" + str(14) + "layer/fade",fft.getAvg(5)*5/100*gainVal);
// send("/" + str(15) + "layer/fade",fft.getAvg(5)*5/100*gainVal);
//
// //Three faces of band 6 box
// send("/" + str(16) + "layer/fade",fft.getAvg(6)*6/100*gainVal);
// send("/" + str(17) + "layer/fade",fft.getAvg(6)*6/100*gainVal);
// send("/" + str(18) + "layer/fade",fft.getAvg(6)*6/100*gainVal);
//
// //Three faces of band 7 box
// send("/" + str(19) + "layer/fade",fft.getAvg(7)*8/100*gainVal);
// send("/" + str(20) + "layer/fade",fft.getAvg(7)*8/100*gainVal);
// send("/" + str(21) + "layer/fade",fft.getAvg(7)*8/100*gainVal);
//the following "sends" are if you're using single-sided objects that require only one value sent
for(int i = 0; i < 9; i++) //iterate through the bands (band 0 is scaled by i = 0, so it always sends zero)
{
if(i==8){send("/" + str(i) + "layer/fade",(fft.getAvg(i)*i/100)*gainVal*2);} //the top band needs a bit of an oomph to show up nicely
else{send("/" + str(i) + "layer/fade",(fft.getAvg(i)*i/100)*gainVal);}
stroke(25*i,50,50);
strokeWeight(5);
line(120,100+i*10,120+fft.getAvg(i)*i*gainVal,100+i*10);
textSize(10);
fill(150);
text("Band/Layer "+str(i),50,104+i*10);
}
fill(150);
textSize(12);
//text(viewOSC,50,60);
}
Did you get all that?
A quick run through:
Audio is captured into a buffer. That buffer is run through an FFT and averaged into 9 logarithmically spaced bands. The code then iterates through those bands, scaling each level to a value of roughly 0 to 1 (hence a floating point value). Each value is paired with an address string that corresponds to an OSC command destined for VPT, and the resulting message is sent over OSC to 127.0.0.1 (i.e. the same machine) on port 6666. There's also a GUI that shows the level of each band, and a Gain knob, which can boost the signal if your source is a bit soft.
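If you want to sanity-check the OSC side on its own (handy once VPT is set up in the next step), a stripped-down sender like this should do it. It's just a sketch: it uses the same oscP5 calls and the same /Nlayer/fade address convention as the main program, with layer 1 picked arbitrarily and local port 12000 chosen at random.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress vpt;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 12000); // local listening port; the value doesn't matter here
  vpt = new NetAddress("127.0.0.1", 6666); // VPT on the same machine, port 6666
}

void draw() {
  // Pulse layer 1 up and down with a slow sine wave so the fade is easy to spot in VPT
  float level = 0.5 + 0.5 * sin(frameCount * 0.05);
  OscMessage msg = new OscMessage("/1layer/fade");
  msg.add(level);
  oscP5.send(msg, vpt);
}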
Note: I've never worked with audio analysis before this, so I admit my algorithm for getting an accurate spectrum is probably a little off. If anyone is more clued up on this, I'd love to hear a better way to go about it (I have a feeling it's with some tasty maths!)
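One possible improvement (just a sketch of the idea, untested with this exact setup) is to convert each band average to decibels and map that onto the 0 to 1 fade range, which evens out the difference between loud bass and quiet treble. The -60 dB floor below is a starting guess to tune by ear; fft and gainVal are the same variables as in the main sketch.

// Possible per-band scaling to try instead of fft.getAvg(i)*i/100*gainVal.
float bandLevel(int i) {
  float avg = fft.getAvg(i);
  float dB = 20 * log(max(avg, 0.0001)) / log(10); // convert amplitude to decibels
  float level = map(dB, -60, 0, 0, 1); // -60 dB (near silence) maps to 0, full scale maps to 1
  return constrain(level * gainVal, 0, 1);
}

Inside the loop it would then just be send("/" + str(i) + "layer/fade", bandLevel(i));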
OSC Meets VPT
Install VPT and learn how to use it with the very helpful guide included. Also be sure you're comfortable with how saving your work is handled - I learnt this the hard way a couple of times...
Start off by adding layers and assigning sources to them. If you are using boxes to project onto, you'll need three layers per box (and you'll also need to change the Processing code to send each band three times - this is already in the code, just commented out). If you are using single-sided objects, you can leave the code as is and create 6 layers in VPT. (I know there are 9 bands, so in theory there should be 9 layers, but as you may notice, band 0 (almost subsonic bass) gets multiplied by 0, and the very top bands aren't really worth looking at.)
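If you do go the three-faces-per-box route, rather than uncommenting all 21 individual sends you could collapse them into a loop. This is just a sketch following the same layer numbering as the commented-out block (layers 1-3 for band 1, 4-6 for band 2, and so on) and the same band-times-index scaling as the single-sided loop:

// Inside analyseAndSend(): send each band's level to the three VPT layers covering its box.
for (int band = 1; band <= 7; band++) {
  float level = (fft.getAvg(band) * band / 100) * gainVal;
  for (int face = 0; face < 3; face++) {
    int layer = (band - 1) * 3 + face + 1; // band 1 -> layers 1,2,3; band 2 -> layers 4,5,6; ...
    send("/" + str(layer) + "layer/fade", level);
  }
}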
As you may have guessed, the effect is created by fading each layer according to the level of the band linked to that layer. This means you can assign any source to the layer and still get the same effect. I even ran it using the webcam as a source out of curiosity (an easy way to get your processor in quite a fluster).
VPT Meets Terra Firma
I found that setting my projector as an extended display, dragging the VPT output window onto it and entering fullscreen mode meant I could keep my laptop screen free for the Processing sketch and media player.
Here's an extremely short clip with an extremely bad camera showing the result of a test with one box:
VPT + OSC test from Nic Shackle on Vimeo.
To Do...
- Projecting onto boxes that are stacked on or next to each other means one layer ends up spilling onto two boxes where their faces meet. You can get around this by creating a mask for each layer, but it's almost impossibly fiddly. It's a lot easier to project onto isolated boxes or 2D objects such as boards.
- I'm not entirely happy with the abruptness of the fading. I'll need to add some sort of "easing" algorithm to the Processing code to make for a smoother show (see the sketch below this list).
- I need to paint the boxes white.
- Improved spectrum analysis?
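On the easing point above: a simple exponential smoothing of each band before it gets sent would probably do the trick. A sketch of the idea (the smoothed[] array and the 0.2 factor are starting guesses, not tested values):

// One smoothed value per band; declare this near the top of the sketch.
float[] smoothed = new float[9];

// Eases a band's raw level towards its new value. Call this inside the band loop
// in analyseAndSend() and send the result instead of the raw value.
float eased(int i, float raw) {
  smoothed[i] = lerp(smoothed[i], raw, 0.2); // 0.2 = how fast the fade chases the signal; tune to taste
  return smoothed[i];
}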