Sign Language Translator
by AndreiRaresDobrescu in Circuits > Raspberry Pi
Have you ever thought about how hard it is not to be able to communicate as easily as most people do? This is the question I ask myself every time I see someone struggling to be understood.
In our daily life, communication helps us build relationships: it lets us share our experiences and needs, and it connects us to others. That is why communication is one of the most important things on this earth.
The application I am going to build will be super useful for people who do not know sign language (ASL), and it is equally good for mute people who want to communicate without fear of not being understood.
I hope this application will break down language barriers and build bridges between Deaf and Hearing people. <3
Supplies
For this project I used a Raspberry Pi 5 with the Freenove Projects Kit for Raspberry Pi 5. I also used a webcam to track my hand. A good light source is always welcome :)
For my instructable, I've gathered up the following things:
- Raspberry Pi 5
- Freenove Projects Kit for Raspberry Pi
- My laptop (Lenovo Legion 7 Pro)
- Raspberry Pi 5 Cooling
- Webcam (720p-1080p)
- Extra cables for Raspberry Pi connection
Creating My Own Application for Taking Pictures
First, we need an application that takes pictures of our hands. You could also use a dataset from the internet, but I wanted to build my own dataset for this project.
After a picture is taken and saved, it is automatically placed in a folder named after the gesture and the hand, and every saved sample is also logged in a CSV file.
I want to mention that I also worked with an MNIST dataset from Kaggle and everything went very well, but even so I wanted my prototype to work only with the pictures I took myself.
Creating the Data
I took around 50 pictures for each gesture and then I trained the model using these pictures.
The background doesn't matter, but I tried to take the pictures as professionally as possible, so I decided to take the pictures in a classroom with good light and a white background.
I took pictures from different distances, because that variety helps the model generalize. The more pictures, the more accurate the model.
Start the Training in Python
My training was very fast because I have a super small dataset: with about 500 pictures, the whole process takes roughly 5 seconds.
I mostly got 100% accuracy, largely because the pictures were taken in good, consistent light.
Connection to the Raspberry Pi
I used an Ethernet cable to connect the laptop directly to the Raspberry Pi, then set a static IP so I could log in to the Pi (IP: 192.168.168.10, subnet mask: 255.255.255.0).
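One way to pin that address, assuming the Pi is the side holding 192.168.168.10 and a recent Raspberry Pi OS where NetworkManager is the default (the connection name `pi-direct` is just a name I picked; `/24` is the same as subnet mask 255.255.255.0):

```shell
# On the Pi: create a static-IP profile for the wired interface.
sudo nmcli connection add type ethernet ifname eth0 con-name pi-direct \
     ipv4.method manual ipv4.addresses 192.168.168.10/24
sudo nmcli connection up pi-direct

# On the laptop: give its Ethernet adapter any other address in the same
# subnet, then log in over SSH.
ssh pi@192.168.168.10
```

On older images that use dhcpcd instead of NetworkManager, the same result comes from a `static ip_address=` entry in `/etc/dhcpcd.conf`.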
Connection With the Led Matrix
After many attempts and failures, I finally displayed the first letter on the 8x8 LED matrix. This will be part of the project: when you show a letter to the camera, that letter is sent to the LED matrix and stays there for 2-3 seconds.
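The letter-holding logic can be sketched like this. The tiny 8x8 font is hand-drawn for illustration (only "A" is shown), and `write_rows()` is a stub: the Freenove kit drives its matrix through shift registers, so the real write routine depends on your wiring and library:

```python
import time

# One row byte per matrix row, MSB on the left (illustration only).
FONT_8X8 = {
    "A": [0b00011000,
          0b00100100,
          0b01000010,
          0b01111110,
          0b01000010,
          0b01000010,
          0b01000010,
          0b00000000],
}


def letter_to_rows(letter):
    """Return 8 row bytes for the letter, or a blank frame if unknown."""
    return FONT_8X8.get(letter.upper(), [0] * 8)


def write_rows(rows):
    # Replace with your matrix driver, e.g. shifting each row byte out
    # through the 74HC595s on the Freenove board.
    pass


def show_letter(letter, seconds=2.5):
    """Hold one letter on the matrix for a few seconds."""
    rows = letter_to_rows(letter)
    end = time.time() + seconds
    while time.time() < end:   # multiplexed matrices need constant refreshing
        write_rows(rows)
```

Keeping the font as plain row bytes makes it easy to add the rest of the alphabet one letter at a time.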
Connection to the LCD Display
After many more attempts and failures, I was able to display letters on the LCD display. You might wonder: why an LCD display when I already have the LED matrix? The matrix can only show one letter at a time, so I chose the LCD display because I want to form whole words and show them there. This will make communication much easier. I will upload some pictures later :)
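Building words from single-letter predictions needs a bit of glue logic: the classifier emits one guess per frame, so a letter should only be accepted once it has been held steady for several frames. A sketch of that buffer, with the LCD side shown in comments (assuming an I2C 1602 display driven with the RPLCD library; the Freenove LCD may be wired differently):

```python
class WordBuffer:
    """Collect per-frame letter predictions into a word for a 16-char LCD."""

    def __init__(self, stable_frames=10, width=16):
        self.word = ""
        self.width = width
        self.stable_frames = stable_frames
        self._candidate = None
        self._count = 0

    def feed(self, letter):
        """Feed one prediction; returns True when a letter is committed."""
        if letter == self._candidate:
            self._count += 1
        else:
            self._candidate, self._count = letter, 1
        if self._count == self.stable_frames:
            # Append and keep only what fits on one LCD line.
            self.word = (self.word + letter)[-self.width:]
            return True
        return False

# Hardware side (assumption: I2C 1602 LCD via the RPLCD library):
#   from RPLCD.i2c import CharLCD        # pip install RPLCD
#   lcd = CharLCD("PCF8574", 0x27)       # 0x27 is a common I2C address
#   buf = WordBuffer()
#   ...inside the recognition loop:
#   if buf.feed(predicted_letter):
#       lcd.clear(); lcd.write_string(buf.word)
```

The stability threshold doubles as debouncing: a wobbly hand that flickers between two letters for a few frames never commits anything.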
Maker Part
I decided to make a badge that holds the display. It will be attached to the t-shirt/shirt at chest height so it is as easy as possible to spot. This badge will use an LCD display that connects to the Raspberry Pi over Bluetooth instead of cables. I have decided that this will be my final version, once the project reaches a more advanced level.
For now, I will make a wooden box that holds the Freenove board and the Raspberry Pi together, along with the LED matrix and the LCD display.