Raspberry Pi Based Emotion Recognition System
by Orange Digital Center
Prepared by: WISSAL HEMCHA
Introduction
A facial expression recognition system can be used in a number of applications, such as studying or analyzing human sentiment.
In this circuit workshop we are going to build a system that detects the emotions of people entering a space and stores the data in the cloud for further analytics.
Many companies are implementing facial expression recognition systems to study the stress and depression levels of their workers. Some gaming companies use facial recognition to record the satisfaction level of the gamer.
Supplies
In this project we are going to need the following materials in order to achieve our goal:
- Raspberry Pi 3 + 16 GB microSD card with the operating system
Raspberry Pi: a credit card-sized, ARM-based, single-board computer developed in the United Kingdom by the Raspberry Pi Foundation.
The Raspberry Pi is a low-cost computer that plugs into a computer monitor or TV and uses a standard keyboard and mouse. It is a capable little device that enables people of all ages to explore computing and to learn how to program in languages like Scratch and Python. It's capable of doing everything you'd expect a desktop computer to do, from browsing the internet and playing high-definition video, to making spreadsheets, word processing, and playing games.
Your Raspberry Pi needs an operating system to work. Raspberry Pi OS (previously called Raspbian) is the officially supported operating system.
- Pi Camera Module
The Camera Module 2 can be used to take high-definition video as well as still photographs. It's easy to use for beginners, but has plenty to offer advanced users looking to expand their knowledge. There are lots of examples online of people using it for time-lapse, slow-motion, and other video cleverness. You can also use the libraries bundled with the camera to create effects.
The camera works with all models of Raspberry Pi 1, 2, 3 and 4. It can be accessed through the MMAL and V4L APIs, and there are numerous third-party libraries built for it, including the Picamera Python library. See the Getting Started with Picamera resource to learn how to use it; a short capture sketch is shown after this list.
- USB Type-C charger
- RJ-45 (Ethernet) cable
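Once everything is wired up, a quick way to check that the camera module works is a short still capture with the Picamera library. A minimal sketch; the output path is an assumption:

from time import sleep
from picamera import PiCamera

# Capture a single still image with the Pi Camera Module.
camera = PiCamera()
camera.start_preview()
sleep(2)  # give the sensor time to adjust to the lighting
camera.capture('/home/pi/test.jpg')
camera.stop_preview()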
Connecting to the Raspberry Pi via SSH
In order to connect to the Raspberry Pi over SSH, we are going to use the "ssh" command in the terminal, followed by the username, then '@', then the Pi's IP address. For example:
ssh pi@192.168.1.8
The terminal (also known as the shell or command-line interface) is a text-based interface that accepts and interprets your commands. You can use terminal commands in Raspbian to run programs, execute scripts, manipulate files, etc.
Linux Basic Commands:
Before we go on to the list of commands, you need to open the command line first. Although the steps may differ depending on the distribution you're using, you can usually find the command line in the Utilities section.
Here is a list of basic Linux commands:
ls Command
ls is probably the first command every Linux user typed in their terminal. It allows you to list the contents of the directory you want (the current directory by default), including files and other nested directories.
cd Command
The cd command is highly popular, along with ls. It stands for "change directory" and, as its name suggests, switches you to the directory you are trying to access.
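For example, to list the contents of the home directory and then move into a (hypothetical) project folder:
ls /home/pi
cd /home/pi/project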
Creating and Activating the Python Virtual Environment
In this step we will create a virtual environment in order to keep our Python packages separate from the system's Python.
- Python is an easy-to-learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.
- A virtual environment is a Python environment such that the Python interpreter, libraries and scripts installed into it are isolated from those installed in other virtual environments, and (by default) any libraries installed in a “system” Python, i.e., one which is installed as part of your operating system.
The module used to create and manage virtual environments is called venv. venv will usually use the most recent version of Python that you have available; if you have multiple versions of Python on your system, you can select a specific one by running that version's interpreter (for example, python3.7 instead of python3).
To create a virtual environment, decide upon a directory where you want to place it, and run the venv module as a script with the directory path:
python3 -m venv myenv
cd myenv
Once you’ve created a virtual environment, you can activate it.
On Unix or MacOS, run:
source bin/activate
Installing Libraries
We now install the libraries required for our Python program to work:
OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision. OpenCV is used here for digital image processing. The most common applications of digital image processing are object detection, face recognition, and people counting. To install it, we are going to use the command:
python3 -m pip install opencv-python==3.2
Keras is a compact, easy-to-learn, high-level Python library that runs on top of the TensorFlow framework. It is made with a focus on deep learning techniques, such as creating layers for neural networks, while handling the underlying shapes and mathematics. To install it, we are going to use the command:
python3 -m pip install keras==2.0.5
TensorFlow is an open-source machine learning framework for all developers. It is used for implementing machine learning and deep learning applications, and for developing and researching ideas in artificial intelligence.
Machine learning is the art and science of getting computers to act according to the algorithms designed and programmed into them. Many researchers think machine learning is the best way to make progress towards human-level AI. Machine learning includes the following types of patterns:
Supervised learning pattern
Unsupervised learning pattern
To install TensorFlow, we are going to use the command:
python3 -m pip install tensorflow==1.1.0
Pandas is a fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool. To install it, we are going to use the command:
python3 -m pip install pandas==0.19.1
NumPy, which stands for Numerical Python, is a library consisting of multidimensional array objects and a collection of routines for processing those arrays. Using NumPy, mathematical and logical operations on arrays can be performed. To install it, we are going to use the command:
python3 -m pip install numpy==1.12.1
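As a small illustration of the NumPy calls the prediction code relies on later (np.asarray, np.max, np.argmax):

import numpy as np

# Pick the strongest class from a vector of prediction scores.
probs = np.asarray([0.1, 0.7, 0.2])
print(np.max(probs))     # highest score: 0.7
print(np.argmax(probs))  # index of that class: 1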
The h5py package is a Pythonic interface to the HDF5 binary data format. HDF5 lets you store huge amounts of numerical data and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk as if they were real NumPy arrays. Thousands of datasets can be stored in a single file, categorized and tagged however you want. To install it, we are going to use the command:
python3 -m pip install h5py==2.7.0
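For illustration, a minimal h5py round trip (the file name is hypothetical); the pre-trained Keras model we load later is itself stored in this HDF5 format:

import h5py
import numpy as np

# Write a small dataset to an HDF5 file, then read a slice back.
with h5py.File('demo.h5', 'w') as f:
    f.create_dataset('values', data=np.arange(100))
with h5py.File('demo.h5', 'r') as f:
    print(f['values'][10:15])  # [10 11 12 13 14]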
Python's statistics module is a built-in library for descriptive statistics; it ships with Python 3.4 and later, but the PyPI backport can be installed with the command:
python3 -m pip install statistics
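We use its mode() function later to smooth the per-frame predictions, for example:

from statistics import mode

# mode() returns the most common item in a sequence.
print(mode(['happy', 'sad', 'happy', 'happy']))  # prints: happy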
Writing the Code to Detect Faces in the Video Stream
What Is Face Detection?
Face detection is a computer technology, used in a variety of applications, that identifies human faces in digital images.
The goal of face detection is to determine if there are any faces in the image or video. If multiple faces are present, each face is enclosed by a bounding box and thus we know the location of the faces.
Human faces are difficult to model because many variables can change, for example facial expression, orientation, and lighting conditions. The result of the detection gives the face location parameters, and it could be required in various forms, for instance a rectangle covering the central part of the face, eye centres, or landmarks including eyes, nose and lips, mouth corners, eyebrows, nostrils, etc.
How Face Detection Works
Face detection applications use algorithms and machine learning to find human faces within images and video, which often incorporate other non-face objects such as landscapes, buildings, and other human body parts like feet or hands. Face detection algorithms typically start by searching for human eyes, one of the easiest features to detect. The algorithm might then attempt to detect eyebrows, the mouth, nose, nostrils, and the iris. Once the algorithm concludes that it has found a facial region, it applies additional tests to confirm that it has, in fact, detected a face.
To make algorithms as accurate as possible, they must be trained with huge data sets that contain hundreds of thousands of images. Some of these images contain faces, while others do not. The training procedure improves the algorithm's ability to decide whether an image contains faces, and where those facial regions are located.
To detect the faces we use TensorFlow together with OpenCV (cv2) for the image/video processing, and a pre-trained Haar cascade model for the detection itself.
import cv2

# face_detection is the pre-trained Haar cascade classifier and
# detect_faces() is a small helper around it (see the sketch below).
cv2.namedWindow('window_frame')
video_capture = cv2.VideoCapture(0)
while True:
    bgr_image = video_capture.read()[1]
    gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    faces = detect_faces(face_detection, gray_image)
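The face_detection classifier and the detect_faces() helper are not defined in the snippet above. A minimal sketch of what they could look like, assuming OpenCV's bundled frontal-face Haar cascade (the file path is an assumption):

import cv2

# Load the pre-trained frontal-face Haar cascade that ships with OpenCV.
face_detection = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def detect_faces(detection_model, gray_image_array):
    # Returns an array of (x, y, width, height) bounding boxes,
    # one per detected face.
    return detection_model.detectMultiScale(gray_image_array,
                                            scaleFactor=1.3,
                                            minNeighbors=5)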
Writing the Code to Find the Region of Interest
The ROI, or region of interest, is the face region that contains the expression; we will feed the ROI to the emotion classifier in order to predict the facial expression.
for face_coordinates in faces:
    # Expand the face box, crop it from the grayscale frame, and
    # resize it to the input size the emotion model expects.
    x1, x2, y1, y2 = apply_offsets(face_coordinates, emotion_offsets)
    gray_face = gray_image[y1:y2, x1:x2]
    try:
        gray_face = cv2.resize(gray_face, (emotion_target_size))
    except:
        continue
    gray_face = preprocess_input(gray_face, True)
    gray_face = np.expand_dims(gray_face, 0)
    gray_face = np.expand_dims(gray_face, -1)
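Similarly, apply_offsets() and preprocess_input() come from the project's utility code and are not shown above. Minimal sketches, assuming fixed-margin box expansion and scaling pixel values to [-1, 1]:

def apply_offsets(face_coordinates, offsets):
    # Grow the detected box by fixed margins so the crop keeps some
    # context around the face.
    x, y, width, height = face_coordinates
    x_off, y_off = offsets
    return (x - x_off, x + width + x_off, y - y_off, y + height + y_off)

def preprocess_input(image, v2=True):
    # Scale pixel values to [0, 1], then optionally to [-1, 1],
    # matching the normalization the model was trained with.
    image = image.astype('float32') / 255.0
    if v2:
        image = image * 2.0 - 1.0
    return image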
Feeding the Model to Predict the Expression
We are going to implement an emotion recognition system, or facial expression recognition system, on a Raspberry Pi. We are going to apply a pre-trained model to recognize the facial expression of a person from a real-time video stream. The "FER2013" dataset is used to train the model with the help of a VGG-like Convolutional Neural Network (CNN).
We are using two classes here, 'happy' and 'sad', so every prediction will fall into one of these classes.
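Before the prediction loop below can run, the pre-trained classifier has to be loaded. A minimal sketch of that setup; the model file name and the label map are assumptions:

from keras.models import load_model

# Hypothetical file name for the pre-trained FER2013 Keras model (HDF5).
emotion_classifier = load_model('emotion_model.hdf5')
# Input size the network expects, e.g. (48, 48) for FER2013.
emotion_target_size = emotion_classifier.input_shape[1:3]
# Two-class label map used in this build.
emotion_labels = {0: 'happy', 1: 'sad'}
# Sliding window of recent predictions, used for smoothing.
frame_window = 10
emotion_window = []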
emotion_prediction = emotion_classifier.predict(gray_face)
emotion_probability = np.max(emotion_prediction)
emotion_label_arg = np.argmax(emotion_prediction)
emotion_text = emotion_labels[emotion_label_arg]
emotion_window.append(emotion_text)
if len(emotion_window) > frame_window:
    emotion_window.pop(0)
try:
    # Smooth the per-frame predictions by taking the most common
    # label in the sliding window.
    emotion_mode = mode(emotion_window)
except:
    continue
if emotion_text == 'sad':
    color = emotion_probability * np.asarray((0, 0, 255))
elif emotion_text == 'happy':
    color = emotion_probability * np.asarray((255, 255, 0))
else:
    color = emotion_probability * np.asarray((0, 255, 0))
Sending Analytics to the Cloud
In this final coding step we are going to send the data to cloud storage, using the Cloud Firestore NoSQL database through its REST API.
Firebase is a development platform known originally for its realtime database, which is still, at its core, a multi-node, key-value database optimized for synchronizing data, often between user machines or smartphones and centralized storage in the cloud.
from datetime import datetime
import json
import requests

# One counter document per day, keyed by date.
date = datetime.today().strftime('%Y-%m-%d')
url = "https://firestore.googleapis.com/v1/projects/happy-ad847/databases/(default)/documents/counters/" + date
# …
if emotion_text == 'sad':
    sad = sad + 1
    total = total + 1
    print('sad = ' + str(sad))
    payload = {
        "fields": {
            "total": {"integerValue": str(total)},
            "sad": {"integerValue": str(sad)},
            "happy": {"integerValue": str(happy)}
        }
    }
    # PATCH creates or updates the day's counter document.
    rsp = requests.patch(url, data=json.dumps(payload))
    color = emotion_probability * np.asarray((0, 0, 255))
elif emotion_text == 'happy':
    happy = happy + 1
    total = total + 1
    print('happy = ' + str(happy))
    payload = {
        "fields": {
            "total": {"integerValue": str(total)},
            "sad": {"integerValue": str(sad)},
            "happy": {"integerValue": str(happy)}
        }
    }
    rsp = requests.patch(url, data=json.dumps(payload))
    color = emotion_probability * np.asarray((255, 255, 0))
Now we will execute the code:
python3 happy_sad_cloud.py
and we will see the results in the cloud, as shown below.
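We can also verify from the command line that the counters landed, by reading the document back over the same REST API. A minimal sketch, assuming, as the write code above implies, that the database rules allow unauthenticated access:

from datetime import datetime
import requests

# Fetch today's counter document and print its fields.
date = datetime.today().strftime('%Y-%m-%d')
url = ("https://firestore.googleapis.com/v1/projects/happy-ad847/"
       "databases/(default)/documents/counters/" + date)
print(requests.get(url).json())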
Design for 3D Printing
After installing SolidWorks, a mechanical design automation application that lets designers quickly sketch out ideas, experiment with features and dimensions, and produce models and detailed drawings, I started by taking the camera's measurements and then continued with the modeling in SolidWorks.
Before sending the file to the 3D printer, we need to export the model as an .stl file.
I'll use the XYZware software.
Sending the File to the 3D Printer
Finally, we print the 3D case for the camera.