Educational Child's Toy
For my GCSE Design and Technology coursework, I decided to make an educational child's toy. Due to time constraints, not every feature I had planned could be implemented, so I decided to revisit the project. To improve the device, I made several large modifications (such as changing the microcontroller from an Arduino Mega to an ESP32). This tutorial focuses on the upgraded toy, as it retains most of the features of the old device alongside the new ones, allowing all of the features to be discussed.
The main body of the toy is built out of MDF and plywood. I used a CNC machine to carve out the front panels as they are quite intricate: I designed them so that each piece can only be inserted in one orientation, ensuring that the text is always upright and level.
I made the pieces myself using epoxy resin and phosphorescent mica powder. Due to time constraints and a lack of alternatives, I had to use the front panel (covered with sellotape) as a mould. While rudimentary, sellotape was the best covering of the seven I tested: it allowed the resin to set properly, it protected the MDF panel, and it let the pieces be removed easily without damaging them. All things considered, they turned out very well, and the phosphorescent effect is greater than I could have hoped for.
The user's parent can control the toy through any Android device with the Microcontroller Remote app (https://bit.ly/34kJ3rF). It is also backwards compatible with the Educational Toy Companion App (https://bit.ly/2N9c1SE).
All files for this tutorial are available on GitHub (https://github.com/michaelmckey/educational_toy2)
Videos of the toy are available on YouTube (https://www.youtube.com/playlist?list=PLHfkLzhFq1Qh6MnNZbL_Q6yu3HD-ERqyJ)
Supplies
Materials:
- MDF
- plywood
- epoxy resin (with hardener)
- hinges
- handle
Electronics:
- ESP32 (previous version used Arduino Mega)
- USB cable (for ESP32)
- USB power bank
- Wired speaker with internal battery
- 2-way 3.5mm headphone jack adapter (for speaker)
- 22-gauge solid core wire
- perf board
- female pin header
- RGB LEDs
- LDRs
- resistors
- NPN transistors
- PNP transistors
- heat-shrink tubing
Other:
- phosphorescent mica powder
- paint
- panel pins
- wood glue
- wood screws
- A4 label paper
The Circuit
When building the circuit, the number of available pins on the microcontroller was an issue. Because of this, I selected an Arduino Mega for the original prototype in case more pins were required than planned. When upgrading the toy, I knew the exact number of pins that were necessary, so I could instead opt for an ESP32. As it featured built-in Bluetooth while having enough storage for the audio files, it decreased the need for external components and simplified the circuit.
I used RGB LEDs to provide visual feedback to the user and to also allow the device to scan each inserted piece and determine its colour. As I was limited in the number of available pins, I decided to synchronize the colour of all the LEDs to reduce the number of pins necessary. This meant that three pins could control the red, green, and blue components of all LEDs while nine pins could control which LEDs were on and off.
To measure the light intensity in each slot, I made a potential divider circuit using LDRs and 2 kΩ resistors. As the number of available pins was limited, I was only able to have nine sensors (one per piece, with none left over to measure the ambient light).
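As a concrete illustration of the divider maths (the wiring order and the use of the full 12-bit ADC range here are assumptions for the sketch, not a copy of my exact circuit), the LDR's resistance can be recovered from a raw ADC reading like this:

```python
# Convert a raw ESP32 ADC reading into the LDR's resistance, assuming a
# potential divider with the LDR on top and the fixed 2 kOhm resistor to
# ground. The ADC is treated as ideal and 12-bit, which is a simplification.
ADC_MAX = 4095       # ESP32 ADC resolution (12-bit)
R_FIXED = 2000.0     # fixed resistor in the divider (ohms)

def ldr_resistance(adc_reading):
    """Return the LDR resistance implied by the ADC reading."""
    v_fraction = adc_reading / ADC_MAX        # fraction of the supply voltage
    if v_fraction <= 0 or v_fraction >= 1:
        raise ValueError("reading at the rail - resistance undefined")
    # V_out / V_in = R_fixed / (R_fixed + R_ldr)
    # => R_ldr = R_fixed * (1 - f) / f
    return R_FIXED * (1 - v_fraction) / v_fraction
```

A mid-scale reading means the LDR's resistance equals the fixed resistor's; darker conditions push the reading down and the computed resistance up.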
To add sound to the device, I cut the headphone adapter's wires and soldered them into my circuit. This meant that the speaker could be swapped easily, reducing the sunk costs associated with the project.
I used some perf board and 22-gauge solid core wire to build the circuit. I wanted the ESP32 to be removable in case I made a mistake while soldering (foreshadowing) or in case I wanted to use it in future projects. To do this, I cut a female pin header to make a socket for the device.
Calculating the LDR Formulas (Explanation)
I had no LDRs that had already been calibrated to act as a reference, so instead I used some physics to work out the formula. I devised an experiment using the inverse square law to calculate the relationship between the resistance and the light intensity (outlined in the next step).
I was able to work out the relative light intensity at the LDR while also measuring its resistance. I was then able to use this data to calculate a formula linking the two variables, which allowed me to calculate the relative light intensity from the LDR's resistance. The absolute value of the calculated intensity doesn't matter, as the values are only compared relative to each other. The only requirement is that the calculated value is proportional to the actual light intensity.
I was then able to calculate the transparency of each piece, by comparing the light intensity at the LDR when the piece was in and when it was out. The calculated transparencies change as it gets darker which shows that the data isn't perfect (as the transparency should remain constant). This is due to several factors including the accuracy of the ESP32 ADC and the effect of hysteresis in the LDRs. The LDRs were never designed for this application, so they have a less than optimal signal to noise ratio, which increases the degree of uncertainty in any of these calculations.
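The fitting itself is done by "calculating_ldr_formulas.py" and Excel, but the underlying maths can be sketched as follows. This assumes a power-law model R = k · I^(−γ), a common approximation for LDRs; my actual formula may differ:

```python
import math

GAP_CM = 1.8  # offset between the ruler's 0 cm mark and the LDR surface

def relative_intensity(distance_cm):
    """Relative intensity at the LDR, from the inverse square law."""
    return 1.0 / (distance_cm + GAP_CM) ** 2

def fit_power_law(distances_cm, resistances):
    """Least-squares fit of log(R) = log(k) - gamma * log(I)."""
    xs = [math.log(relative_intensity(d)) for d in distances_cm]
    ys = [math.log(r) for r in resistances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # k, gamma

def intensity_from_resistance(r, k, gamma):
    """Invert R = k * I^(-gamma) to recover the relative intensity."""
    return (r / k) ** (-1.0 / gamma)
```

Fitting in log-log space turns the power law into a straight line, so ordinary least squares recovers both parameters at once.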
Calculating the LDR Formulas (Experiment)
Apparatus:
- Portable light source (preferably a point source)
- 30cm ruler
Setup:
- Wait until midnight, so all other sources of light are negligible.
- Perform the experiment on one LDR at a time.
- Measure the gap between the 0cm mark on the ruler and the surface of the LDR (for my setup it was 1.8cm, which was 1.2cm (of MDF) + 0.5cm (at the bottom of the ruler))
- Upload the LDR formula code and open the Arduino IDE's serial monitor to collect data from the experiment.
- Remove all inserted pieces
- Turn off all lights including the portable light source. The computer’s screen should provide sufficient light to navigate in the dark
Method:
- Clear the serial monitor output
- Position the 30cm ruler perpendicular to the front panel of the toy
- Place the light at the 5cm mark on the ruler and point it directly at the LDR
- Turn on the light
- Wait 30 seconds
- Turn off the light
- Copy all the serial output to Notepad and delete any readings that weren't taken during the thirty-second interval (if unsure about any readings, delete them, as it's better to have fewer readings than to have inaccurate ones)
- Save the file as “[piece colour][slot number]_[distance]cm.csv” (e.g., “none6_5cm.csv”)
- Repeat steps 1 to 8 for distances of 5cm, 10cm, 15cm, 20cm, 25cm and 30cm
- Then repeat step 9 for different slots and different piece colours
- Use the Python program "calculating_ldr_formulas.py" to collate all the data into a new “all_results.csv” file.
- Finally, open this file in Excel and analyse the data to find the correct formula for converting from LDR resistance to light intensity. An example of this being done has also been uploaded: "all_results.xlsx"
Deciding If Pieces Are in (Explanation)
Because I wanted my device to work in all lighting conditions, I couldn't specify a light intensity at which a piece was in or out beforehand (for example if the light intensity is low a piece could be in... or the toy could just be in the shade). A simple solution would be to add another sensor on top of the toy to record the ambient brightness, but this wasn't possible as I'd already used all the ESP32's pins. Instead, I had to try and calculate the ambient brightness by combining the readings from all the LDRs (There are other ways to solve this problem, but this is the most reliable method).
A problem was that all the slots would be covered by pieces when the child completed the game. So, it is necessary to calculate the ambient brightness even when all the LDRs are blocked. To do this, I calculated the transparency of each piece when it was inserted, and then used this to predict the brightness if the piece wasn't inserted. As each LDR had a slightly different intensity curve, I created a formula to calibrate all the LDRs when the toy is booted so they could all be used to accurately calculate the ambient intensity.
Naturally, some values for transparency or light intensity were erroneous (due to noise or high opacity), so I had to filter out any anomalies from the predictions. If too many values were removed, not only would this decrease the accuracy of the prediction, but it was also likely to trigger dangerous positive feedback loops that would quickly render all readings invalid. To solve this problem, I decided to limit how quickly the toy could react (allowing time for the values to be corrected before positive feedback loops were set up).
I used the following two assumptions to limit the rate at which it could change its opinion on which pieces were in:
- It takes at least 1300ms to put a piece in and 100ms to take a piece out.
- The in/out state of only one slot will change at any time (if two pieces were inserted simultaneously, it would process them in order).
This reduced the chance of a positive feedback loop being created as values were allowed time to stabilise before they invalidated the predicted ambient brightness. As the device also knows which slot state is changing, this value can be excluded from the predictions of the ambient brightness further increasing the accuracy of the device.
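The ambient-brightness estimate described above can be sketched minimally as follows. The data structures and the median-based anomaly filtering here are illustrative, not the firmware's exact logic:

```python
from statistics import median

def predict_ambient(readings, transparencies, changing_slot=None):
    """Estimate the ambient light intensity from all slot readings.

    readings[i]       - measured light intensity at slot i
    transparencies[i] - transparency of the piece in slot i (1.0 if empty)
    changing_slot     - slot currently mid-change, excluded from the estimate
    """
    predictions = []
    for i, (r, t) in enumerate(zip(readings, transparencies)):
        if i == changing_slot or t <= 0:
            continue  # skip unstable or fully opaque slots
        # Brightness this slot would see if its piece weren't inserted
        predictions.append(r / t)
    # The median discards anomalous predictions caused by noise
    return median(predictions)
```

Excluding the slot whose state is changing keeps a half-inserted piece from corrupting the estimate, and the median keeps one bad prediction from dragging the whole result off.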
Another issue was that light from the LEDs would reflect off the piece and be interpreted as ambient light by the sensor (which in some cases could exceed the predicted ambient light - making the toy think the piece wasn't in). To solve this, it records the amount of light reflected by the piece, directly after it is inserted, and then uses this (along with the LED brightness) to predict the actual light level (as if the LED were off).
To reduce the effect of hysteresis and noise, I used a rolling average of the intensity for my calculations. This doesn't hurt the response time too much, as any notable change takes a few hundred milliseconds to happen and changes the reading dramatically.
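A rolling average of this kind can be sketched as follows (the window size is an illustrative guess, not the value used in the firmware):

```python
from collections import deque

class RollingAverage:
    """Average of the most recent `window` samples."""

    def __init__(self, window=8):
        # deque with maxlen automatically drops the oldest sample
        self.samples = deque(maxlen=window)

    def add(self, value):
        """Record a new sample and return the current average."""
        self.samples.append(value)
        return self.average()

    def average(self):
        return sum(self.samples) / len(self.samples)
```

A single noisy sample is diluted by the rest of the window, while a genuine step change dominates the average within a few samples.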
Deciding If Pieces Are in (Experiment)
Setup:
- Can be performed at any time if ambient brightness is consistent (e.g., no flickering)
- Upload the Arduino code and open the Arduino IDE's serial monitor to collect data from the experiment.
Method:
- Insert all green pieces
- Clear the serial monitor
- Leave it running in the background for 5 minutes
- Save the file as “intensities_in.csv”
- Repeat steps 1-4 with no pieces in and save as “intensities_out.csv”
- Run the Python program, "in_vs_out.py", to plot the data.
Removing LED Effect (Experiment)
Setup:
- Can be performed at any time if ambient brightness is consistent (e.g., no flickering)
- Insert a piece into the test slot (third slot)
- Upload the Arduino code and open the Arduino IDE's serial plotter to collect data from the experiment.
Method:
- Look at the plot and see if the predicted actual intensity is constant. If so, the LED effect has been accounted for (Expect spikes when the LED suddenly changes due to hysteresis)
Calculating Piece Colour (Explanation)
When a piece has just been inserted, we can measure the light intensity when the LED is off. Then we can measure the intensity as the toy cycles through the three primary colours (red, green, blue) which allows us to calculate the amount of each colour of light reflected by the piece. For example:
Red light reflected = Reading when LED is red - Reading when LED is off
As the colour an object appears to be is determined by the proportion of each colour it reflects, we can use these readings to calculate which colour of piece has been inserted. As each colour represents a different language, we can determine what language the toy should speak in. When choosing what colour of pigment to use to dye the pieces, I deliberately chose red, green, and blue to make this task easier. In this specific case, to determine what colour a piece is, you could just find which primary colour is reflected most. But I wanted a more accurate method that could also be used if the pieces were unusual colours.
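The simple version of this idea fits in a few lines (the function name and argument layout are mine, for illustration):

```python
def classify_simple(off, red_on, green_on, blue_on):
    """Pick the piece colour from four intensity readings.

    `off` is the reading with the LED off; the others are readings taken
    while the LED cycles through the three primary colours.
    """
    # Subtracting the LED-off reading removes the ambient contribution
    reflected = {
        "red": red_on - off,
        "green": green_on - off,
        "blue": blue_on - off,
    }
    # The piece's colour is the primary colour it reflects most
    return max(reflected, key=reflected.get)
```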
I performed another experiment (outlined in the next step) to calculate the average reflectivity of the pieces. After collecting all the data, I plotted it using a 3D graph in matplotlib. I plotted the amount of light of each colour (RGB) reflected on the XYZ axes. For each data point, I coloured it the same colour as the piece inserted at that time. This allowed me to see the clusters formed by each piece. As the clusters were in straight lines from the origin, I decided to flatten the graph onto a plane by using the "amount of a colour reflected as a percent of the total amount reflected" on each axis. As all the readings lie on a plane, I technically only need two axes to display the data, but it is easier to understand in 3D.
The data points were now organised in neat clusters, so it was easy to find out which cluster a new piece belonged to. Given a point in space on the graph (representing the colour of light reflected), I could find which cluster the point was most likely to be in by working out the distance from the point to the centre of each cluster. To work out this distance, I used the 3D Pythagorean theorem:
distance^2 = (x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2

As we are just finding the minimum distance, and distance is always positive:

if distance1^2 > distance2^2, then distance1 > distance2

So we can compare squared distances instead of distances, which saves computational resources by avoiding the square root.
To find the centre of each cluster, I used the median values for each variable (to reduce the effect of anomalies).
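Putting the percentage normalisation, the median cluster centres, and the squared-distance comparison together, the classifier can be sketched like this (a simplified stand-in for the real training/testing scripts, not their actual code):

```python
from statistics import median

def normalise(r, g, b):
    """Express each colour as a percent of the total light reflected."""
    total = r + g + b
    return (100 * r / total, 100 * g / total, 100 * b / total)

def cluster_centres(training):
    """training: {colour: list of (r, g, b) reflected-light triples}.

    The centre of each cluster is the per-axis median of its normalised
    points, which reduces the effect of anomalies.
    """
    centres = {}
    for colour, points in training.items():
        norm = [normalise(*p) for p in points]
        centres[colour] = tuple(median(axis) for axis in zip(*norm))
    return centres

def classify(r, g, b, centres):
    """Assign a new reading to the nearest cluster centre."""
    p = normalise(r, g, b)

    def dist_sq(centre):
        # Squared distance: the ordering is the same, no sqrt needed
        return sum((u - v) ** 2 for u, v in zip(p, centre))

    return min(centres, key=lambda colour: dist_sq(centres[colour]))
```

Because everything is done in percentages, the classifier is independent of the absolute LED brightness, matching the behaviour described below.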
To build the model, I split the data into training data (75%) and testing data (25%) to prevent overfitting. I then fed the training data into the "training_colour_detection.py" program to get a model.
To prove that my model was valid, I used "testing_colour_predictions.py" and the remaining 25% of the data to test the model.
The model was:
- 100.0% effective at identifying red pieces (absolutely no errors!)
- 98.9% effective at identifying green pieces
- 99.9% effective at identifying blue pieces
This was under controlled test conditions so outside of the "lab" these values would be lower due to rapidly moving shadows caused by the user interacting with the device. Luckily, when tested in the real world it was still sufficiently accurate.
As the model uses the percent of light reflected, this model works at any LED brightness level (provided there is a sufficient signal to noise ratio).
Calculating Piece Colour (Experiment)
Setup:
- Can be performed at any time if ambient brightness is consistent (e.g., no flickering). Also try and minimise the change in intensity between experiments (if this isn't possible the results are still valid, as only the difference in intensity is used for calculations).
- Insert all green pieces
- Upload the Arduino code and open the Arduino IDE's serial monitor to collect data from the experiment.
Method:
- Leave it running in the background for a few hours (keeping the serial monitor open) and it will automatically collect the data
- Copy the terminal output and save the file as “green_pieces.csv”
- Then repeat steps 1-2 for the red and blue pieces, saving as “red_and_blue_pieces.csv”
- Run "splitting_testing_and_training_data.py" to split the original data into training (75%) and testing (25%) data
- Run "training_colour_prediction.py" to display the data as a 3D graph and output the expected reflectivity, by piece colour, to the terminal window
- Finally, copy this output into "testing_colour_prediction.py" to calculate the accuracy of the program
Issues:
I ran out of dynamic memory. The audio files took up a lot of memory, so I had to convert all the arrays from variables to constants so they would no longer be counted as dynamic memory.
I had been planning to implement Wi-Fi connectivity but sadly turning on Wi-Fi prevented ADC2 from working (as they use the same resources). ADC2 is vital for the device to work so it was impractical / impossible to implement Wi-Fi into the device. Wi-Fi would also have made the toy more vulnerable to cyberattack which would be a major concern as the device is targeted at young children.
When transferring the circuit from my Arduino to my ESP32, I forgot that they use different logic voltages. This meant that the circuit that had previously worked on my 5V Arduino no longer worked on my 3.3V ESP32. As so much soldering had been completed and parts were limited, scrapping the broken circuit and redoing all the soldering wasn't feasible. Instead, I made an adapter circuit which converted the ESP32's 3.3V outputs to 5V for all output pins.
The original speaker couldn't be supplied enough power from the circuit, so I had to switch it out for a speaker with its own inbuilt power supply.
Conclusion
Hopefully, I've given you an insight into the process of making my toy. Naturally, I have focused on the final solution, but many pathways were explored before the best route was found. I hope this has inspired you to complete your own projects, perhaps implementing some of the technologies or techniques used in this build.
This project really tested the limits of the ESP32, in terms of available pins, storage and memory. There were many times when I didn't even think it was physically possible to complete, but with enough "creativity" there is always a way (even if that way just happens to involve using two full rolls of sellotape!)