Animatronic Face of Sasuke Uchiha

by ManthanY in Circuits > Arduino



IMG_20230427_143045.jpg
Animatronics: Sasuke Uchiha

This is an animatronic face of Sasuke Uchiha, one of the main characters from Naruto. I tried to imitate a few of his abilities in response to sensors, similar to how they work in the anime. He has different eyes with different abilities that he can activate: with the help of an ultrasonic sensor, if someone comes closer than a set distance, or if someone grabs his sword, he activates the Sharingan. For the sword I used a photoresistor paired with an LED. If both of these conditions are triggered, he activates the Mangekyo Sharingan. If the touch sensor detects a touch, he shoots a fireball through his mouth. With each of these interactions he says a line of dialogue, and the mouth moves in sync with the voice.

Character Selection and Interactions

To build an animatronic we need to know the basic idea behind it: animatronics is the use of robotic devices to create lifelike movements in inanimate objects. So first we have to decide which character we are building and what its interactions will be. I selected Sasuke Uchiha as my character, with the following interactions.

  1. Introduction: The character introduces himself when anyone comes in front of him.
  2. Sharingan: On a certain sensor trigger he activates his Sharingan eyes.
  3. Mangekyo Sharingan: On another sensor trigger he activates his Mangekyo Sharingan.
  4. Fireball Jutsu: On yet another sensor trigger he charges a fireball and shoots it through his mouth.

Along with the interactions, if we can make him say some dialogue it will be even more fun and realistic! For this we have to sync the mouth movements with the voice lines.

Select the Hardware for Each Interaction

equipment.png

Once you are done selecting the character and its interactions, you have to choose the hardware you will employ for each of them.

  1. Sensor selection (a minimal readout sketch for these sensors follows this list):
     - For the introduction I used an ultrasonic sensor, which returns the distance of an object from the sensor. If someone comes closer than a set value, it triggers the introduction.
     - For the Sharingan and the Mangekyo Sharingan I used the photoresistor and the ultrasonic sensor in combination to activate those eyes.
     - For the fireball I used a touch sensor to detect someone hitting him; in response, he shoots the fireball.
  2. Actuator selection:
     - The eyeballs need to rotate to a specific angle, so a servo motor is a good choice.
     - The mouth needs a to-and-fro movement, for which a servo motor is also a good fit.
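To make the sensor side concrete, here is a minimal readout sketch, assuming an HC-SR04-style ultrasonic module, a photoresistor in a voltage divider on A0, and a digital touch module. Every pin number here is a placeholder to adjust to your own wiring.

const int TRIG_PIN  = 9;   // HC-SR04 trigger (placeholder pin)
const int ECHO_PIN  = 8;   // HC-SR04 echo
const int LDR_PIN   = A0;  // photoresistor voltage divider
const int TOUCH_PIN = 7;   // digital touch sensor output

long readDistanceCm() {
  // Send a 10 us pulse and time the echo; ~58 us of echo per centimeter.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
  if (duration == 0) return 999;                     // timeout: nothing in range
  return duration / 58;                              // microseconds -> cm
}

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(TOUCH_PIN, INPUT);
}

void loop() {
  Serial.print("distance(cm): ");
  Serial.print(readDistanceCm());
  Serial.print("  light: ");
  Serial.print(analogRead(LDR_PIN));   // drops when the sword blocks the LED
  Serial.print("  touch: ");
  Serial.println(digitalRead(TOUCH_PIN));
  delay(200);
}

Watching these values in the Serial Monitor also makes it easy to pick a sensible light threshold for the sword trigger later on.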

Designing the Structure

frame.png
assemble.jpg
print.jpg

Now that you have the hardware figured out, it's time to design the structure that will hold the entire animatronic face. I designed a frame that houses all three motors, the breadboards, and the Arduino, then had the frame 3D printed and assembled it. Beyond that, you can use any material to fix the rest of the required parts onto the structure.



The Algorithm for Logic

sensorTest

We are all set with the hardware. Now it's time for the logic of our interactions.

  1. Initially we turn the eye servos in response to the photoresistor and the ultrasonic sensor.
  2. Then we add the touch response to the code.
  3. Once that is done, we integrate the voice lines played when each sensor is triggered.
  4. Finally, we sync the mouth movement to the voiceover (a skeleton of the resulting sketch is shown below).
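As a rough skeleton of where these steps end up, here is one possible shape for the final sketch, using the same placeholder pins and threshold as before; the function bodies are stubs to fill in as each block is developed. Note that the AND check must come before the OR check, or the Mangekyo branch would never run.

const int LIGHT_THRESHOLD = 500;          // placeholder photoresistor threshold
const int LDR_PIN   = A0;                 // placeholder pins
const int TOUCH_PIN = 7;

long readDistanceCm()    { return 999; }  // step 1: ultrasonic helper goes here
void activateSharingan() { Serial.println("Sharingan"); }          // stub
void activateMangekyo()  { Serial.println("Mangekyo Sharingan"); } // stub
void fireballJutsu()     { Serial.println("Fireball Jutsu"); }     // stub

void setup() {
  Serial.begin(9600);
  pinMode(TOUCH_PIN, INPUT);
}

void loop() {
  bool close = readDistanceCm() < 10;                  // step 1: proximity trigger
  bool sword = analogRead(LDR_PIN) < LIGHT_THRESHOLD;  // step 1: sword trigger

  if (digitalRead(TOUCH_PIN) == HIGH) fireballJutsu(); // step 2: touch response
  else if (close && sword) activateMangekyo();         // AND checked before OR
  else if (close || sword) activateSharingan();
  delay(200);                                          // simple polling interval
}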

Low-Level Explanation of the Logic

  1. Eye activation:
     - For activating the Sharingan we combine the two sensor triggers with an OR condition, meaning that if either the photoresistor or the ultrasonic sensor is triggered, the Sharingan activates. The ultrasonic sensor triggers when it detects anything closer than 10 cm; similarly, the photoresistor triggers when its voltage reading falls below a threshold value.
     - For the Mangekyo Sharingan we have an AND condition, meaning it activates only when both of the above conditions are triggered at the same time.
     - Since the sensors are not highly reliable, we need some delay after one of the eyes is activated. Otherwise, a second trigger arriving before the voice line and eye activation have finished would interrupt the previous activity and start a new one, which is not the ideal scenario.
  2. Fireball:
     - To use the fireball, the touch sensor must sense a touch. Once it does, the voice line plays and the mouth opens with a red LED turning on inside it. After a delay of 5 seconds, the LED turns off and the mouth shuts. (A sketch of this whole trigger logic follows below.)
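Here is one way this logic can look in code. The pin numbers, servo angles, and light threshold are placeholder assumptions to tune for your build, and the voice lines are left out since they are integrated in the next step; the AND condition is checked before the OR condition so the Mangekyo branch can actually run.

#include <Servo.h>

const int TRIG_PIN  = 9;    // ultrasonic trigger (placeholder pins throughout)
const int ECHO_PIN  = 8;    // ultrasonic echo
const int LDR_PIN   = A0;   // photoresistor divider
const int TOUCH_PIN = 7;    // touch sensor output
const int LED_PIN   = 6;    // red LED inside the mouth
const int LIGHT_THRESHOLD = 500;   // tune to your ambient light

Servo leftEye, rightEye, mouth;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);
  if (duration == 0) return 999;   // timeout: treat as nothing in range
  return duration / 58;            // microseconds -> cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(TOUCH_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  leftEye.attach(11);
  rightEye.attach(12);
  mouth.attach(10);
}

void loop() {
  bool someoneClose = readDistanceCm() < 10;                  // ultrasonic trigger
  bool swordGrabbed = analogRead(LDR_PIN) < LIGHT_THRESHOLD;  // photoresistor trigger

  if (digitalRead(TOUCH_PIN) == HIGH) {
    // Fireball: open the mouth and light the LED for 5 seconds.
    mouth.write(60);
    digitalWrite(LED_PIN, HIGH);
    delay(5000);
    digitalWrite(LED_PIN, LOW);
    mouth.write(0);
  } else if (someoneClose && swordGrabbed) {
    // AND condition: both triggers at once -> Mangekyo Sharingan.
    leftEye.write(180);
    rightEye.write(180);
    delay(5000);   // hold so a stray retrigger cannot interrupt mid-action
    leftEye.write(0);
    rightEye.write(0);
  } else if (someoneClose || swordGrabbed) {
    // OR condition: either trigger -> Sharingan.
    leftEye.write(90);
    rightEye.write(90);
    delay(5000);
    leftEye.write(0);
    rightEye.write(0);
  }
}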


Integrating the Voice Lines

miniplayer.png
speaker.jpg

You can use the DFRobotDFPlayerMini.h library to play the audio. To play a specific clip for a specific sensor trigger, we pass an integer, and for each integer the player reads a specific audio file from the SD card. The character first says his dialogue and then performs the action, so we have him play a specific voice line and place the motor commands after that.

Now, to sync the mouth movements, we use a while loop with a flag that tells us whether the mp3 player is running. While a track is playing, the player's BUSY pin is pulled LOW, and it goes HIGH again once playback finishes, so we can poll that pin as our flag and keep moving the mouth for as long as the pin stays LOW. This syncs the voice with the mouth movement.
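As a minimal sketch of this, assuming the DFRobotDFPlayerMini library, an SD card with numbered tracks (0001.mp3, 0002.mp3, ...), and placeholder pin choices:

#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>
#include <Servo.h>

const int BUSY_PIN = 4;          // DFPlayer BUSY: LOW while a track is playing
SoftwareSerial dfSerial(2, 3);   // RX, TX wired to the DFPlayer's TX, RX
DFRobotDFPlayerMini player;
Servo mouth;

// Play track N from the SD card and flap the mouth until playback ends.
void sayLine(int track) {
  player.play(track);            // e.g., 1 plays 0001.mp3
  delay(200);                    // give the BUSY pin time to go LOW
  while (digitalRead(BUSY_PIN) == LOW) {
    mouth.write(40);             // open...
    delay(150);
    mouth.write(0);              // ...and close while the audio runs
    delay(150);
  }
}

void setup() {
  pinMode(BUSY_PIN, INPUT);
  dfSerial.begin(9600);
  if (!player.begin(dfSerial)) {
    while (true);                // halt if the player or SD card is missing
  }
  player.volume(25);             // volume range is 0-30
  mouth.attach(10);
}

void loop() {
  sayLine(1);                    // say the introduction line, mouth in sync
  delay(10000);                  // wait before repeating the demo
}

In the full project, a call like sayLine(1) goes right before the motor commands of each interaction, so the dialogue plays first and the mouth flaps in time with it.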

You can refer to the following link for more info on the DFPlayer:

https://wiki.dfrobot.com/DFPlayer_Mini_SKU_DFR0299

Wiring the Circuit

wir3s.jpg

Now that we have the coding done, it's time to wire the circuit according to the pins declared. To keep the setup clean, choose the pins for the components so that the wires do not tangle with each other and make a mess. For example, connect the components on the upper half to the higher-numbered pins like 10, 11, and 12.
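As an illustration, a hypothetical pin map along those lines might look like this; every assignment here is an example, not a requirement:

// Components on the upper half of the frame go to the higher-numbered pins
// so the wires run straight down the board without crossing.
const int LEFT_EYE_PIN  = 11;   // upper half: eye servos
const int RIGHT_EYE_PIN = 12;
const int MOUTH_PIN     = 10;   // lower half: mouth servo
const int TRIG_PIN      = 9;    // middle: ultrasonic sensor
const int ECHO_PIN      = 8;
const int TOUCH_PIN     = 7;    // lower half: touch sensor
const int LED_PIN       = 6;    // lower half: mouth LED
const int BUSY_PIN      = 4;    // DFPlayer busy flag
const int LDR_PIN       = A0;   // analog side: photoresistor divider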

Testing

finalback.jpg

Now your animatronic is ready, and it's time to test all the features you have combined, like eye movement, mouth movement, and audio sync, working together. Try to cover all possible test cases and handle every scenario in which it may fail; this takes multiple rounds of testing. Also, to avoid facing a ton of bugs at the final testing stage, test each block of code as soon as you write it. That way, once you combine all the code, the only part you need to debug is the integration, not the actual hardware.
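For example, a standalone test for a single block can be as small as this mouth-servo sweep (the pin is a placeholder); once each piece passes a check like this, any remaining bug is almost certainly in the integration code.

#include <Servo.h>

// A standalone test for one block at a time -- here, just the mouth servo.
Servo mouth;

void setup() {
  mouth.attach(10);   // placeholder pin
}

void loop() {
  mouth.write(40);    // open
  delay(500);
  mouth.write(0);     // close
  delay(500);
}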