Body Language Reader With AI
by ambufire in Circuits > Raspberry Pi
Welcome to my Body Language/Emotions Reader project, where we delve into the world of non-verbal communication. With this innovative tool, we aim to decode people's subtle gestures!
Supplies
The supplies needed:
- Raspberry Pi 5/4
- 2 basic webcams
- Raspberry Pi projects kit
- 3D printer
- 4 m of angle bracket
- 0.5 m² of wood
- paint
- wing nuts
- bolts
- nuts
- tools for sanding, cutting the wood, drilling, and installing the bolts and nuts
GitHub link:
https://github.com/howest-mct/2023-2024-projectone-ctai-CrombezWout
If anything is unclear during this Instructable, please refer to the GitHub code shared!
Emotion Detection Model
Before we begin, ensure you have the following installed:
- Python
- OpenCV: pip install opencv-python (the headless build lacks the display functions used later, so prefer the full package)
- cvzone: pip install cvzone
- mediapipe: pip install mediapipe (used by cvzone's FaceMesh and, later on, by the body language model)
- numpy: pip install numpy
- pandas: pip install pandas
- scikit-learn: pip install scikit-learn (for splitting the data and training the classifiers)
- Pickle: comes with the standard library, no installation needed.
- FaceMesh: part of cvzone, no additional installation required.
Step-by-Step Guide to Creating and Capturing FaceMesh Data
1. Set Up the FaceMesh Detector
- Initialize the Detector:
- Use cvzone.FaceMeshModule.FaceMeshDetector to create a FaceMesh detector.
2. Capture Data from Camera or Video
- Open the Camera or Video:
- Use OpenCV to capture data from your own camera or video files.
- Capture FaceMesh Data:
- For each frame, detect facial landmarks and save the coordinates into a CSV file.
3. Ensure Data Diversity
- Different People:
- Capture data from multiple individuals to ensure variety.
- Different Angles and Distances:
- Record faces from various angles and distances to make the data robust.
- Balance the Data:
- Ensure the dataset is balanced with respect to different classes or labels you are using.
Detailed Instructions
Step 1: Set Up the FaceMesh Detector
- Initialize the Detector:
- Import necessary libraries and initialize the FaceMesh detector from cvzone.
Step 2: Capture Data from Camera or Video
- Open the Camera or Video:
- Use OpenCV to open your camera or load a video file.
- Capture FaceMesh Data:
- Read frames from the camera or video in a loop.
- Use the FaceMesh detector to detect facial landmarks.
- Save the coordinates of these landmarks into a CSV file (see the capture sketch after this list).
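Here is a minimal capture sketch covering Steps 1 and 2. It assumes the default webcam (index 0), a placeholder output file facemesh_data.csv, and a hypothetical label happy that you change per recording session:

```python
import csv
import cv2
from cvzone.FaceMeshModule import FaceMeshDetector

detector = FaceMeshDetector(maxFaces=1)  # Step 1: set up the FaceMesh detector
cap = cv2.VideoCapture(0)                # 0 = default webcam; use a path for video files

label = "happy"  # hypothetical class label for this recording session
with open("facemesh_data.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        success, frame = cap.read()
        if not success:
            break
        frame, faces = detector.findFaceMesh(frame, draw=True)
        if faces:
            # faces[0] is a list of [x, y] pixel coordinates, one per landmark
            row = [coord for point in faces[0] for coord in point]
            writer.writerow(row + [label])  # features first, label last
        cv2.imshow("FaceMesh capture", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop recording
            break

cap.release()
cv2.destroyAllWindows()
```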
Step 3: Ensure Data Diversity
- Capture Data from Different People:
- Record data from multiple individuals. Each person should perform different facial expressions and movements to capture a wide range of data.
- Capture Data from Different Angles and Distances:
- Move the camera to different angles and distances from the face to capture diverse perspectives.
- Ensure Data Balance:
- Ensure your dataset is balanced by having an equal representation of all classes or labels (e.g., different emotions, angles, etc.).
Training the Model
1. Preprocess the Data
- Load the Data:
- Read the CSV file into a DataFrame using pandas.
- Split into Features and Target:
- Separate the features (all columns except the last) and the target (the last column).
- Split the Dataset:
- Use train_test_split from scikit-learn to split the data into training and testing sets.
2. Train the Model
- Choose the Model:
- Use Logistic Regression or Ridge Classifier for training.
- Train the Model:
- Fit the model using the training data.
- Save the Model:
- Use pickle to save the trained model to a file (the sketch below combines these training steps).
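A minimal sketch of the preprocessing and training pipeline, assuming the CSV produced during capture and a placeholder model filename emotion_model.pkl:

```python
import pickle
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("facemesh_data.csv", header=None)  # no header row was written
X = df.iloc[:, :-1]  # features: all landmark coordinates
y = df.iloc[:, -1]   # target: the emotion label in the last column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)  # or sklearn's RidgeClassifier
model.fit(X_train, y_train)

with open("emotion_model.pkl", "wb") as f:  # save the trained model
    pickle.dump(model, f)
```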
3. Evaluate the Model
- Predict on Test Data:
- Use the trained model to make predictions on the test set.
- Create Evaluation Metrics:
- Calculate accuracy, precision, recall, and F1-score to evaluate the model's performance.
- Decide if the Model is Good Enough:
- Based on the evaluation metrics, determine if the model meets the required performance criteria (an evaluation sketch follows).
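A matching evaluation sketch, re-creating the same test split (same random_state) and loading the pickled model:

```python
import pickle
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

df = pd.read_csv("facemesh_data.csv", header=None)
X, y = df.iloc[:, :-1], df.iloc[:, -1]
_, X_test, _, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with open("emotion_model.pkl", "rb") as f:
    model = pickle.load(f)

y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))  # precision, recall, F1 per class
```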
Testing the Model
1. Set Up the Environment
- Ensure Necessary Libraries:
- Make sure you have OpenCV (cv2) and Pickle installed in your Python environment.
2. Load the Trained Model
- Open the Pickle File:
- Load the trained model from the pickle file that was saved during the training process.
3. Capture Video Feed
- Use cv2.VideoCapture:
- Initialize the video capture using cv2.VideoCapture.
- You can use a video file path for pre-recorded footage or 0 for a live camera feed.
4. Process Each Frame
- Read Frames in a Loop:
- Continuously read frames from the video capture in a loop.
- Process each frame to detect faces and extract features.
5. Predict Emotions
- Use the Loaded Model:
- Use the loaded trained model to predict emotions based on the extracted features from each frame.
6. Display Results
- Overlay Text on Frames:
- Use OpenCV’s text functions to display the predicted emotion class on the screen.
- Ensure the text is clear and positioned appropriately (a complete live-inference sketch follows).
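Putting steps 1 through 6 together, a minimal live-inference sketch (same placeholder filenames as before):

```python
import pickle
import cv2
from cvzone.FaceMeshModule import FaceMeshDetector

with open("emotion_model.pkl", "rb") as f:  # step 2: load the trained model
    model = pickle.load(f)

detector = FaceMeshDetector(maxFaces=1)
cap = cv2.VideoCapture(0)                   # step 3: 0 = live camera feed

while True:                                 # step 4: process each frame
    success, frame = cap.read()
    if not success:
        break
    frame, faces = detector.findFaceMesh(frame, draw=False)
    if faces:
        features = [coord for point in faces[0] for coord in point]
        emotion = model.predict([features])[0]      # step 5: predict the emotion
        cv2.putText(frame, str(emotion), (20, 40),  # step 6: overlay the result
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Emotion detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```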
Body Language Model: Creating Data
Now that we have a working emotion detection model, we are going to build a second model for body language following the same steps. We will use MediaPipe Holistic for this model; a capture sketch follows below.
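A hedged capture sketch using MediaPipe Holistic, recording pose landmarks only; the filename and label are placeholders:

```python
import csv
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
label = "open_posture"  # hypothetical body language class label

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic, \
        open("bodylanguage_data.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            row = [value for lm in results.pose_landmarks.landmark
                   for value in (lm.x, lm.y, lm.z, lm.visibility)]
            writer.writerow(row + [label])
        cv2.imshow("Holistic capture", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```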
Body Language Model: Training
Follow the same steps as in Step 2 (Training the Model), this time using the body language CSV data.
Body Language Model: Testing
Follow the same steps as in Step 3 (Testing the Model), loading the body language model instead.
Flask Website
Create the HTML Structure:
- Create an index.html file with the basic structure and sections for each model.
Add CSS for Styling:
- Create a styles.css file to style the website and make it visually appealing.
Add JavaScript for Interactivity:
- Create a script.js file to add any interactive elements or functionality (a minimal Flask app to serve these files is sketched below).
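The files above cover the front end; a minimal Flask app to serve them might look like this (pip install flask, assuming index.html lives in templates/ and styles.css and script.js in static/):

```python
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    # serves templates/index.html; CSS and JS load from the static/ folder
    return render_template("index.html")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)  # reachable on the local network
```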
Raspberry Pi
Now that we have two working models, we will connect the Raspberry Pi to the emotion detection model. The connection happens over Wi-Fi for a fast and reliable link.
Making a Nice Display
1. Set Up the LED Matrix on Raspberry Pi
- Connect the LED Matrix:
- Connect the LED matrix to your Raspberry Pi using appropriate GPIO pins or an I2C connection.
- Install Necessary Libraries:
- Ensure you have the required libraries for controlling the LED matrix, such as rpi_ws281x or any library specific to your LED matrix model.
2. Prepare the Python Script on Raspberry Pi
- Initialize the LED Matrix:
- Set up the LED matrix using the chosen library.
- Define Emotions:
- Create bitwise representations for different emotions (e.g., happy, sad, neutral, angry). Each bitwise pattern will represent the pixels to light up on the LED matrix.
- Display Function:
- Write a function to update the LED matrix with the appropriate emotion based on received data (see the pattern sketch below).
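A sketch of one such bitwise pattern for a hypothetical 8x8 matrix (a simple happy face); the actual drawing call depends on your LED matrix library:

```python
# Each byte is one row of the 8x8 matrix; each set bit is one lit pixel.
HAPPY = [
    0b00111100,
    0b01000010,
    0b10100101,
    0b10000001,
    0b10100101,
    0b10011001,
    0b01000010,
    0b00111100,
]

def pattern_pixels(pattern):
    """Yield (x, y) coordinates for every lit pixel in an 8x8 bit pattern."""
    for y, row in enumerate(pattern):
        for x in range(8):
            if row & (1 << (7 - x)):
                yield x, y
```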
3. Send Data from AI Model to Raspberry Pi
- Establish Connection:
- Use a network connection (e.g., sockets) to send emotion data from the machine running the AI model to the Raspberry Pi.
- Sending Data:
- Write a script on the AI model machine to send emotion data to the Raspberry Pi over the network (sender sketch below).
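A hedged sender sketch for the machine running the AI model; the Pi's IP address and port are placeholders you must adapt to your own network:

```python
import socket

PI_ADDRESS = ("192.168.0.42", 8000)  # hypothetical Raspberry Pi IP and port

def send_emotion(emotion: str):
    """Send one newline-terminated emotion label to the Pi."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(PI_ADDRESS)
        s.sendall(f"{emotion}\n".encode())

send_emotion("happy")  # e.g. call this with each new prediction
```

Opening a short connection per prediction keeps the sketch simple; a long-lived connection would be more efficient if predictions arrive every frame.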
4. Update Raspberry Pi Script to Handle Incoming Data
- Receive Data:
- Modify the Raspberry Pi script to listen for incoming connections and data.
- Handle Emotions:
- Update the LED matrix display based on the received emotion data (receiver sketch below).
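And a matching receiver sketch for the Raspberry Pi; show_emotion() is a stand-in for whichever LED matrix update call your library provides:

```python
import socket

def show_emotion(emotion: str):
    print("Displaying:", emotion)  # replace with your LED matrix update call

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8000))  # same placeholder port as the sender
    server.listen(1)
    while True:
        conn, _ = server.accept()   # one short connection per prediction
        with conn:
            data = conn.recv(1024)
            if data:
                show_emotion(data.decode().strip())
```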
Maker Part
Frame Construction:
1. Build the Frame:
- Assemble the main structure using angle brackets.
- Use screws and bolts to secure the brackets together.
2. Attach the Wood Panels:
- Measure and cut the wood panels to fit the frame.
- Prime and paint the wood panels beforehand.
- Attach the wood panels to the outside of the frame using screws or nails.
3. Install Inner Supports:
- Inside the frame, add more angle brackets to create support for the project board.
- Ensure these brackets are securely fastened to hold the weight of the Raspberry Pi and other components.
4. Mount the Project Board:
- Screw the project board onto the inner angle brackets.
- Position the board so that it hangs at the front of the robot.
5. Create the LED Matrix Cutout:
- Measure and mark the location on the front panel where the LED matrix will be visible.
- Use a saw to cut a hole in the chest area, ensuring it aligns with the LED matrix on the project board.
6. Add Ventilation:
- Measure and mark the locations for vents in front of the Raspberry Pi.
- Use a saw to cut out the vents to prevent overheating.
7. Cable Management:
- Cut a hole in the back of the robot for the Raspberry Pi cables.
- Ensure the hole is large enough to accommodate all necessary cables without causing strain.
Head Construction:
1. Build the Head Frame:
- Use angle brackets to construct the head frame, similar to the main frame.
2. Drill Camera Cable Holes:
- Drill holes on the side of the head frame for the camera cables.
- Ensure the holes are positioned to allow the cables to move freely.
3. Mount the Cameras:
- Attach the two cameras to a makeshift GoPro stick or similar mounting device.
- Secure the stick inside the head frame, ensuring the cameras can move up, down, and to the sides without obstruction.
4. Assemble and Attach the Head:
- Once the head is fully assembled, attach it to the main frame using angle brackets.
- Ensure it is securely fastened and aligned with the rest of the robot.
Final Touches:
- Paint Touch-Ups:
- After assembly, do any necessary touch-up painting to cover any scratches or marks from construction.
- Testing:
- Test all components to ensure they are securely fastened and functioning correctly.
- Verify that the Raspberry Pi, LED matrix, and cameras are working properly and that the ventilation is adequate.
Visual Guide:
- Frame:
- A rectangular structure with wood panels.
- The front panel has a cutout for the LED matrix.
- Vents are positioned near the Raspberry Pi.
- A hole in the back for cable management.
- All panels are removable for easy access to components.
- Head:
- A smaller rectangular structure.
- Mounted cameras on a movable stick.
- Holes for camera cables.
- All panels are removable for easy access to components.