Laser Harp Synthesizer on Zybo Board

by johndont

In this tutorial we will create a fully functional laser harp using IR sensors, with a serial interface that allows the user to change the tuning and tone of the instrument. This harp is a 21st-century remake of the age-old instrument. The system was created using a Xilinx Zybo development board along with the Vivado Design Suite. What you will need to complete the project:

  • 12 IR sensors and emitters (more or less can be used depending on the number of strings)
  • Zybo Zynq-7000 development board
  • FreeRTOS
  • Vivado Design Suite
  • Wire (for connecting the sensors to the board)
  • 3 pieces of PVC pipe (two 18-inch and one 8-inch)
  • 2 PVC elbows

Get Digilent's Zybo DMA Audio Demo

The FPGA side of this project is based largely on the demo project found here. It uses direct memory access (DMA) to send data from a buffer in memory, which the processor can write to, over AXI Stream to an I2S audio block. The following steps will help you get the DMA audio demo project up and running:

  1. A new version of the board file for the Zybo board may be necessary. Follow these instructions to obtain new board files for Vivado.
  2. Follow steps 1 and 2 in the instructions on this page to get the demo project open in Vivado. Use the Vivado method, not the SDK hardware handoff.
  3. You may get a message that says some of your IP blocks should be updated. If so, select "Show IP Status", then in the IP Status tab select all out-of-date IP and click "Upgrade Selected". When it finishes and a window pops up asking if you want to generate output products, click "Generate". If you get a critical warning message, ignore it.
  4. Switch from the design to the sources tab in Vivado to see the source files. Right-click the block design "design_1" and select "Create HDL Wrapper". When prompted select "copy generated wrapper to allow user edits". A wrapper file for the project will be generated.
  5. Now that those critical steps that were somehow left out of the other tutorial are complete, you can return to the previously linked tutorial and continue from step 4 to the end, making sure the demo project runs correctly. If you don't have a way to input audio for it to record, just record with your headphones in and listen for 5-10 seconds of fuzzy sound when you press the playback button. As long as something comes out of the headphone jack when you press the playback button, it's probably working correctly.

Make Some Changes in Vivado

Now you've got Digilent's DMA audio demo working, but that's not the end goal here. We need to go back to Vivado and make some changes so that our sensors can be plugged into the PMOD headers and we can use their values on the software side.

  1. Open up the block diagram in Vivado
  2. Create a GPIO block by right-clicking in empty space in the block diagram and selecting "Add IP" from the menu. Find and select "AXI GPIO".
  3. Double click the new IP block and in the re-customize IP window go to the IP configuration tab. Select all inputs and set the width to twelve, since we will have 12 "strings" on our harp and therefore need 12 sensors. If you want to use fewer or more sensors, adjust this number appropriately. Also check "Enable Interrupt".
  4. Right click the new GPIO IP block and select "run connection automation". Check the AXI box and hit okay. This should connect the AXI interface automatically, but leave the outputs of the block unconnected.
  5. In order to make room for the extra interrupt, double click on the xlconcat_0 IP block and change the number of ports from 4 to 5. Then you can connect the ip2intc_irpt pin from the new GPIO block to the new unused port on the xlconcat block.
  6. Right click on the "GPIO" output of the new GPIO IP block and select "Make External". Follow the line to the external port it creates (the small sideways pentagon) and click on it; a window should open on the left where you can change the name. Change the name to "SENSORS". It is important to use this exact name if you want the constraints file we provide to work; otherwise you will have to change the name in the constraints file.
  7. Back in the sources tab, find the constraints file and replace it with the one we provide. You can either replace the file or just copy the contents of our constraints file and paste them over the contents of the old one. One of the important things our constraints file does is enable the pull-up resistors on the PMOD headers. This is necessary for the particular sensors we used, but not all sensors are the same. If your sensors require pull-down resistors, you can replace every instance of "set_property PULLUP true" with "set_property PULLDOWN true". If they require a different resistor value than the one on the board, you can remove these lines and use external resistors. The pin names are in the comments in the constraints file, and they correspond to the labels in the first diagram on the Zybo schematics page, which can be found here. If you want to use different PMOD pins, just match the names in the constraints file to the labels in the schematic. We use PMOD headers JE and JD, with six data pins on each, omitting pins 1 and 7. This information is important when hooking up your sensors. As shown in the schematic, pins 6 and 12 on the PMODs are VCC and pins 5 and 11 are ground.
  8. Regenerate the HDL wrapper as before, and copy and overwrite the old one. When that's done, generate bitstream and export hardware like before, and relaunch the SDK. If you get asked whether you want to replace the old hardware file, the answer is yes. It's probably best to have the SDK closed when you export hardware so that it gets properly replaced.
  9. Launch the SDK.

Get FreeRTOS Running

The next step is to get FreeRTOS running on the Zybo board.

  1. If you don't already have a copy, download FreeRTOS here and extract the files.
  2. Import the FreeRTOS Zynq demo located at FreeRTOSv9.0.0\FreeRTOS\Demo\CORTEX_A9_Zynq_ZC702\RTOSDemo. The import process is pretty much the same as it was for the other demo project, however because the FreeRTOS Zynq demo relies on other files in the FreeRTOS folder, you should not copy the files into your workspace. Instead, you should place the whole FreeRTOS folder inside your project folder.

  3. Create a new board support package by going to "File" -> "New" -> "Board Support Package". Make sure standalone is selected and click Finish. After a moment a window will pop up; check the box next to lwip141 (this stops one of the FreeRTOS demos from failing to compile) and hit OK. After that completes, right click on the RTOSDemo project, go to "Properties", go to the "Project References" tab, and check the box next to the new BSP you created. Hopefully it will be recognized, but sometimes the Xilinx SDK can be weird about this sort of thing. If you still get an error after this step that xparameters.h is missing or something like that, try repeating this step and maybe exiting and relaunching the SDK.

Add Laser Harp Code

Now that FreeRTOS is imported, you can bring the files from the laser harp project into the FreeRTOS demo.

  1. Create a new folder under the src folder in the FreeRTOS demo and copy all of the provided C files except for main.c into this folder.
  2. Replace the RTOSDemo main.c with the provided main.c.
  3. If everything is done correctly, you should be able to run the laser harp code at this point. For testing purposes, the button input that was used in the DMA demo project is now used to play sounds without sensors attached (any of the four main buttons will work). It will play a string each time you press it and cycle through all the strings in the system over multiple presses. Plug in some headphones or speakers to the headphone jack on the Zybo board and make sure you can hear the sounds of the strings coming through when you press a button.

About the Code

Many of you reading this tutorial are likely here to learn how to set up audio or use DMA to do something different, or to create a different musical instrument. For that reason the next few sections are dedicated to describing how the provided code works in conjunction with the hardware previously described to get working audio output using DMA. If you understand why the pieces of code are there, you should be able to adjust them for whatever it is you want to create.

Interrupts

First I'll mention how interrupts are created in this project. The way we did it was by first creating an interrupt vector table structure which keeps track of the ID, interrupt handler, and a reference to the device for each interrupt. The interrupt IDs come from xparameters.h. The interrupt handlers for the DMA and GPIO are functions we wrote, while the I2C interrupt handler comes from the Xilinx IIC driver. The device reference points to instances of each device which we initialize elsewhere. Near the end of the _init_audio function a loop goes through each item in the interrupt vector table and calls two functions, XScuGic_Connect() and XScuGic_Enable(), to connect and enable the interrupts. They reference xInterruptController, which is an interrupt controller created in the FreeRTOS main.c by default. So basically we attach each of our interrupts to this interrupt controller which was already created for us by FreeRTOS.
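
As a concrete illustration of the pattern, here is a minimal sketch in C. The entry struct, the placeholder interrupt IDs, and the instance names (sAxiDma, sGpio) are assumptions for illustration, not the project's exact code; the real IDs come from the macros in your design's xparameters.h.

    #include <stddef.h>
    #include "xscugic.h"
    #include "xaxidma.h"
    #include "xgpio.h"

    /* Handlers and device instances defined elsewhere (assumed names). */
    extern void dma_mm2s_ISR(void *CallBackRef);
    extern void gpio_ISR(void *CallBackRef);
    extern XAxiDma sAxiDma;
    extern XGpio   sGpio;
    extern XScuGic xInterruptController;   /* created in the FreeRTOS demo's main.c */

    /* Placeholder IDs -- substitute the macros from your xparameters.h. */
    #define DMA_MM2S_INTR_ID    61u   /* hypothetical value */
    #define SENSOR_GPIO_INTR_ID 62u   /* hypothetical value */

    /* One entry per interrupt: ID, handler, and the device passed to the handler. */
    typedef struct {
        u32 id;
        Xil_InterruptHandler handler;
        void *device;
    } ivt_entry;

    static ivt_entry ivt[] = {
        { DMA_MM2S_INTR_ID,    dma_mm2s_ISR, &sAxiDma },
        { SENSOR_GPIO_INTR_ID, gpio_ISR,     &sGpio   },
    };

    static void connect_interrupts(void)
    {
        for (u32 i = 0; i < sizeof(ivt) / sizeof(ivt[0]); i++) {
            /* Attach each handler to the GIC, then unmask the interrupt. */
            XScuGic_Connect(&xInterruptController, ivt[i].id,
                            ivt[i].handler, ivt[i].device);
            XScuGic_Enable(&xInterruptController, ivt[i].id);
        }
    }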

DMA

The DMA initialization code starts in lh_main.c. First a static instance of an XAxiDma structure is declared. Then in the _init_audio() function it gets configured. First the configure function from the demo project gets called, which is in dma.c. It's pretty well documented and comes straight from the demo. Then the interrupt gets connected and enabled. For this project only the master-to-slave (MM2S) interrupt is required, because all data is being sent by the DMA to the I2S controller. If you wish to record audio, you will also need the slave-to-master interrupt. The master-to-slave interrupt gets called when the DMA finishes sending out whatever data you told it to send. This interrupt is incredibly important for our project because every time the DMA finishes sending out one buffer of audio samples it must immediately begin sending out the next buffer, or else an audible delay would occur between sends.

Inside the dma_mm2s_ISR() function you can see how we handle the interrupt. The important part is near the end, where we use xSemaphoreGiveFromISR() and portYIELD_FROM_ISR() to notify _audio_task() that it can initiate the next DMA transfer. The way we send constant audio data is by alternating between two buffers. While one buffer is being transmitted to the I2S block, the other buffer is having its values calculated and stored. Then when the interrupt comes from the DMA, the active buffer switches and the more recently written buffer starts being transferred, while the previously transferred buffer starts getting overwritten with new data. The key part of the _audio_task function is where fnAudioPlay() gets called. fnAudioPlay() takes in the DMA instance, the length of the buffer, and a pointer to the buffer from which data will be transferred. A few values are sent to I2S registers to let it know more samples are coming. Then XAxiDma_SimpleTransfer() gets called to initiate the transfer.
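
Below is a stripped-down sketch of that ping-pong flow, assuming a binary semaphore named dma_done_sem and a buffer length BUF_LEN; the names and the fnAudioPlay declaration follow the description above but are not the project's exact code.

    #include "FreeRTOS.h"
    #include "semphr.h"
    #include "xaxidma.h"

    #define BUF_LEN 1024                          /* illustrative buffer size            */
    extern XAxiDma sAxiDma;                       /* the static DMA instance             */
    extern SemaphoreHandle_t dma_done_sem;        /* given by the ISR, taken by the task */
    extern void fill_samples(u32 *buf, int len);  /* sound synthesis (described later)   */
    extern void fnAudioPlay(XAxiDma dma, u32 len, u32 *buf);  /* argument order per the text; exact signature may differ */

    static u32 sample_buf[2][BUF_LEN];
    static int active = 0;

    /* MM2S interrupt: the DMA has finished streaming a buffer to the I2S block. */
    void dma_mm2s_ISR(void *CallBackRef)
    {
        XAxiDma *dma = (XAxiDma *)CallBackRef;
        u32 irq = XAxiDma_IntrGetIrq(dma, XAXIDMA_DMA_TO_DEVICE);
        XAxiDma_IntrAckIrq(dma, irq, XAXIDMA_DMA_TO_DEVICE);   /* clear the interrupt */

        /* Wake _audio_task so the next transfer can start immediately. */
        BaseType_t woken = pdFALSE;
        xSemaphoreGiveFromISR(dma_done_sem, &woken);
        portYIELD_FROM_ISR(woken);
    }

    static void _audio_task(void *params)
    {
        for (;;) {
            /* Start streaming the buffer that was just filled... */
            fnAudioPlay(sAxiDma, BUF_LEN, sample_buf[active]);

            /* ...and compute the other buffer while the DMA is busy. */
            active ^= 1;
            fill_samples(sample_buf[active], BUF_LEN);

            /* Block until the ISR reports that the transfer finished. */
            xSemaphoreTake(dma_done_sem, portMAX_DELAY);
        }
    }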

I2S Audio

audio.c and audio.h are where the I2S initialization takes place. I2S initialization code is a pretty common chunk of code that's floating around in a number of places; you might find slight variations from other sources, but this one should work. It's pretty well documented, and not much needed to be changed for the harp project. The DMA audio demo it came from has functions for switching to the mic or line inputs, so you can use those if you need that functionality.

Sound Synthesis

To describe how the sound synthesis works, I am going to list each of the sound models used in development that led to the final method, as it will give you a sense of why it is done the way it is done.

Method 1: One period of sine values is calculated for each string at the corresponding frequency for that string's musical note and stored in an array. The length of the array is the period of the sine wave in samples, i.e. the number of samples per cycle (sample rate divided by note frequency). If the sampling rate is 48 kHz and the note frequency is 100 Hz, then there are 48,000 samples/second and 100 cycles/second, giving 480 samples per cycle, so the array will be 480 samples long and will contain the values of one complete sine wave period. When the string is played, the audio sample buffer is filled by taking a value from the sine wave array and putting it into the audio buffer as a sample, then incrementing the index into the sine wave array, so that (using our previous example) over the course of 480 samples one sine wave cycle is put into the audio buffer. A modulo operation is used on the array index so that it always falls between 0 and the array length, and when the array index goes over a certain threshold (like maybe 2 seconds worth of samples) the string is turned off. To play multiple strings at the same time, keep track of each string's array index separately and add the values from each string's sine wave together to get each sample.
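
A minimal sketch of this table-lookup scheme is shown below, using float buffers and per-string tables purely for clarity; the names are illustrative, and the real project stores scaled integer values, as described later.

    #include <math.h>
    #include <stdlib.h>

    #define SAMPLE_RATE 48000
    #define NUM_STRINGS 12
    #define PI 3.14159265358979f

    static float *sine_table[NUM_STRINGS];  /* one full period per string      */
    static int    table_len[NUM_STRINGS];   /* samples per cycle, e.g. 480     */
    static int    str_index[NUM_STRINGS];   /* how far each string has played  */
    static int    str_active[NUM_STRINGS];

    /* Build the one-period sine table for string s at its note frequency. */
    void init_string(int s, float freq)
    {
        table_len[s] = (int)(SAMPLE_RATE / freq);   /* 48000 / 100 Hz = 480 samples */
        sine_table[s] = malloc(table_len[s] * sizeof(float));
        for (int i = 0; i < table_len[s]; i++)
            sine_table[s][i] = sinf(2.0f * PI * i / table_len[s]);
    }

    /* Fill one audio buffer by summing every currently active string. */
    void fill_samples(float *buf, int len)
    {
        for (int i = 0; i < len; i++) {
            float sample = 0.0f;
            for (int s = 0; s < NUM_STRINGS; s++) {
                if (!str_active[s])
                    continue;
                sample += sine_table[s][str_index[s] % table_len[s]];
                if (++str_index[s] > 2 * SAMPLE_RATE)   /* ~2 s cutoff: string off */
                    str_active[s] = 0;
            }
            buf[i] = sample;
        }
    }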

Method 2: To create a more musical tone, we start with the previous model and add harmonics to each fundamental frequency. Harmonic frequencies are frequencies which are integer multiples of the fundamental frequency. Unlike when two unrelated frequencies are summed together, which results in two distinct sounds being played simultaneously, when harmonics are added together it continues to sound like just one sound, but with a different tone. To accomplish this, each time we add the value of the sine wave at location (array index % array length) to the audio sample, we also add the value at (2 * array index % array length), and (3 * array index % array length), and so on for however many harmonics are desired. These multiplied indices traverse the sine wave at frequencies which are integer multiples of the original frequency. To allow for more control of tone, each harmonic's values are multiplied by a variable which represents the amount of that harmonic in the overall sound. For example, the fundamental sine wave might have its values all multiplied by 6 to make it more of a factor in the overall sound, while the 5th harmonic might have a multiplier of 1, meaning its values contribute much less to the overall sound.

Method 3: Okay, so now we've got a very nice tone on the notes, but there's still a pretty crucial problem: they play at a fixed volume for a fixed duration. To sound at all like a real instrument, the volume of a string being played should decay smoothly over time. In order to accomplish this, an array is filled with the values of an exponentially decaying function. Now when the audio samples are being created, the sound coming from each string is calculated as in the previous method, but before it gets added to the audio sample it gets multiplied by the value at that string's array index in the exponential decay array. This makes the sound dissipate smoothly over time. When the array index reaches the end of the decay array, the string is stopped.

Method 4: This last step is what really gives the strings their realistic sound. Before, they sounded pleasant but clearly synthesized. To better emulate a real-world harp string, a different decay rate is assigned to each harmonic. In real strings, when the string is first struck there is a high content of high-frequency harmonics that creates the sort of plucking sound we expect from a string. These high-frequency harmonics are very briefly the main part of the sound, the sound of the string being struck, but they decay very quickly as the lower harmonics take over. A decay array is created for each harmonic number used in sound synthesis, each with its own decay rate. Now each harmonic can be independently multiplied by the value in its corresponding decay array at the string's array index and added to the sound.
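
Extending the Method 1 sketch above, the per-string sample calculation for the final method might look roughly like the following; the harmonic count, example weights, and table names are illustrative, and decay_table is assumed to be filled at startup with exponential curves that fall off faster for higher harmonics.

    #define NUM_HARMONICS 6
    #define DECAY_LEN (2 * SAMPLE_RATE)   /* roughly 2 seconds of decay */

    static float decay_table[NUM_HARMONICS][DECAY_LEN];   /* one curve per harmonic  */
    static int   harmonic_weight[NUM_HARMONICS] = { 6, 4, 3, 2, 1, 1 };  /* example  */

    /* Next sample contributed by string s, combining Methods 2-4. */
    float string_sample(int s)
    {
        int idx = str_index[s];
        if (idx >= DECAY_LEN) {        /* decay finished: the string falls silent */
            str_active[s] = 0;
            return 0.0f;
        }
        float sample = 0.0f;
        for (int h = 0; h < NUM_HARMONICS; h++) {
            /* (h + 1) * idx walks the one-period table at an integer multiple
               of the fundamental, i.e. the (h + 1)-th harmonic. */
            float v = sine_table[s][((h + 1) * idx) % table_len[s]];
            sample += harmonic_weight[h] * decay_table[h][idx] * v;
        }
        str_index[s]++;
        return sample;
    }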

Overall the sound synthesis is intuitive but calculation heavy. Storing the entire string sound in memory at once would take too much memory, but recalculating the sine wave and the exponential function for every buffer would take far too long to keep up with the rate of audio playback. A number of tricks are used in the code to speed up the calculation. All math, except the initial creation of the sine and exponential decay tables, is done in integer format, which requires spreading the values out across the available numerical range of the 24-bit audio output. For example, the sine table has an amplitude of 150 so that it is smooth but not so large that many strings played together can add up to more than 24 bits. Likewise, the exponential table values are multiplied by 80 before being rounded to integers and stored. The harmonic weights take on discrete values between 0 and 10. Also, all samples are actually doubled and the sine waves are indexed in steps of 2, effectively halving the sampling rate. This limits the maximum frequency that can be played, but was necessary for the current number of strings and harmonics to be calculated quickly enough.

Creating this sound model and getting it to work took considerable effort on the processor side, and it would have been incredibly difficult to get it working on the FPGA side from scratch in the time frame of this project (imagine having to regenerate the bitstream every time a piece of Verilog was changed to test the sound). However, doing it on the FPGA could well be a better approach, possibly eliminating the issue of not being able to calculate samples quickly enough and allowing for more strings, more harmonics, and even audio effects or other tasks to be run on the processor side.

Wiring Up the Sensors

To create the strings we used IR break-beam sensors that detect when a string is being played. We ordered our sensors from the following link. The sensors have power, ground, and data wires, while the emitters only have power and ground wires. We used the 3.3 V and ground pins from the PMOD headers to power both the emitters and sensors. To power all the sensors and emitters it is necessary to connect them all in parallel. The data wires from the sensors each need to go to their own PMOD pin.

Constructing the Skeleton

To create the shape of the harp, the three pieces of PVC pipe are used as a skeleton on which to mount the sensors and emitters. On one of the two 18-inch pieces of PVC pipe, align the sensors and emitters in alternating order 1.5 inches apart and then tape them down to the pipe. On the other 18-inch PVC pipe, align the sensors and emitters in alternating order but offset the order (i.e. if the first pipe had a sensor first, the second should have an emitter first, and vice versa). It will be necessary to solder longer wires onto the data, power, and ground wires to ensure they can reach the board.

Building the Wood Exterior

This step is optional but highly recommended. The wood exterior not only makes the harp look nice, it also protects the sensors and wires from damage. The frame can be built as a hollow rectangular ring of wood. The inside of the rectangle needs an opening of at least 1-1/2 inches to fit the pipe-and-sensor skeleton. Once the frame is constructed, drill two holes to route the wires from the sensors and emitters out so they can be connected to the board.

*Note: It is recommended to add access points so the pipe skeleton can be removed and reinserted in case repairs or slight adjustments need to be made.

Putting All the Pieces Together

Once all the previous steps are finished, it is time to assemble the harp. First place the pipe skeleton inside the wooden exterior. Then plug the wires for the sensors and emitters into the correct locations on the board. Next, open the SDK and click the debug button to program the board. Once the board is programmed, plug in a pair of headphones or a speaker. Depending on which sensor ends up in which PMOD port, your harp's strings will probably be out of order to begin with. Because it can be difficult to tell which wire goes to which sensor when so many wires are involved, we included a way to map string numbers to interrupt bit positions in software. Find "static int sensor_map[NUM_STRINGS]" and adjust the values in the array until the strings play from lowest to highest in order.
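
For example, a hypothetical mapping might look like this (the values below are made up; yours will depend on how the sensor wires happened to be plugged in):

    /* Permute these 12 values until the strings sound in order from lowest to
       highest; each value ties a string to a GPIO/interrupt bit position. */
    static int sensor_map[NUM_STRINGS] = { 5, 2, 0, 1, 4, 3, 7, 6, 9, 8, 11, 10 };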

To use the menu, open a serial terminal (e.g. RealTerm), set the baud rate to 115200, and set the display mode to ANSI. The menu can be navigated using the w and s keys to move up and down and the a and d keys to change values.

ROCK OUT

Once the harp is fully functional, master it and listen to the sweet sound of your own music!