AI-Based Smart Home Security System
by mahmoud_asadi_heris
In this Instructable, we will create an AI-based smart home security system that uses facial recognition. The system identifies authorized individuals and grants them access, while detecting unauthorized individuals and notifying the homeowner via SMS. An image of the unauthorized individual is also saved for future reference.
Supplies
Materials and Components
- Raspberry Pi: The brain of our system, used for processing and controlling other components.
- USB Camera: To capture images and video for facial recognition.
- LCD Display: To display system status and predictions.
- Servo Motor: To control the door mechanism.
- 3D Printed Door: A prototype to simulate the actual door operation.
- Keypad: For entering passwords as an additional security measure.
- RFID Reader: To read authorized RFID tags.
- Storage: Local storage (hard drive) or cloud storage for saving images.
- SMS Notification Service: To send alerts to the homeowner.
I have also added a BOM (Bill of Materials), so you can get an idea of how much each component costs.
Links to the components:
https://www.raspberrystore.nl/PrestaShop/nl/raspberry-pi-v5/513-raspberry-pi-5-8gb-starter-pack.html
https://www.raspberrystore.nl/PrestaShop/nl/raspberry-pi-v5/510-active-cooler-voor-de-raspberry-pi-5-5056561803357.html
https://www.amazon.com.be/-/en/Freenove-Projects-Raspberry-607-Page-Detailed/dp/B092V1BPBC/ref=asc_df_B092V1BPBC/
https://www.bol.com/be/nl/p/webcam-full-hd-1080p-webcam-met-microfoon-usb-autofocus-thuiswerken-webcam-voor-pc-zwart-windows-mac/9300000010145823/
Data Annotation
To prepare my dataset for face recognition and classification, I utilized Roboflow for annotation. For the face detection dataset, I sourced images from Kaggle, with the link provided below. I manually annotated and labeled 25% of these images to ensure high-quality data. The remaining 75% were annotated using Roboflow's Auto-labeling feature, which significantly expedited the process.
For face classification, I employed a different dataset, specifically the "Human-face" dataset from Kaggle. This stage did not require individual image labeling. Instead, I created a main folder named "CLASSIFICATION," which contained two subfolders: "Authorized" and "UnAuthorized." I organized the images accordingly and uploaded this structured classification folder to Roboflow.
By using Roboflow, I streamlined the annotation process, ensuring a robust and well-organized dataset for my AI-based smart home security system.
Human-face: https://www.kaggle.com/datasets/ashwingupta3012/human-faces
people-dataset: https://www.kaggle.com/datasets/atulanandjha/lfwpeople
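The folder layout described above only needs to exist on disk before it is uploaded to Roboflow. Below is a minimal sketch of how the images could be sorted into the "Authorized" and "UnAuthorized" subfolders; the folder paths and the list of authorized file names are placeholders you would adjust to your own files.

```python
import shutil
from pathlib import Path

# Placeholder paths: adjust to wherever the Kaggle images live on your machine.
SOURCE_DIR = Path("human-faces")           # raw images downloaded from Kaggle
DEST_DIR = Path("CLASSIFICATION")          # main folder uploaded to Roboflow
AUTHORIZED_NAMES = {"person_01.jpg", "person_02.jpg"}  # example file names only

# Create the two subfolders used for classification.
for subfolder in ("Authorized", "UnAuthorized"):
    (DEST_DIR / subfolder).mkdir(parents=True, exist_ok=True)

# Copy every image into the matching subfolder.
for image_path in SOURCE_DIR.glob("*.jpg"):
    target = "Authorized" if image_path.name in AUTHORIZED_NAMES else "UnAuthorized"
    shutil.copy(image_path, DEST_DIR / target / image_path.name)
```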
Preprocessing and Augmentation
To enhance the quality and diversity of my dataset, I leveraged Roboflow's powerful preprocessing and augmentation tools. These steps were crucial in improving the performance and robustness of my AI models for face recognition and classification.
Preprocessing includes options such as resizing, normalization, and grayscale conversion. These tools directly change your original images.
Augmentation: this tool creates additional images based on your original dataset. Here, you have a wide range of options such as grayscale, brightness adjustments, and more.
By applying these preprocessing and augmentation techniques in Roboflow, I was able to create a robust and diverse dataset. This comprehensive approach significantly improved the training process and performance of my AI models, making them more effective in real-world scenarios.
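Roboflow applies these preprocessing steps for you in the web interface, but the sketch below shows roughly what resizing, grayscale conversion, and normalization do to an image locally, using OpenCV; the file path and target size are only examples.

```python
import cv2
import numpy as np

def preprocess(image_path, size=(640, 640)):
    """Resize, convert to grayscale, and normalize a single image."""
    image = cv2.imread(image_path)                   # load as BGR
    image = cv2.resize(image, size)                  # resizing
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    normalized = gray.astype(np.float32) / 255.0     # normalization to [0, 1]
    return normalized

# Example: preprocess one image from the classification folders (path is illustrative).
frame = preprocess("CLASSIFICATION/Authorized/person_01.jpg")
```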
Final Adjustments and Export of Dataset
After completing the preprocessing and augmentation steps in Roboflow, the next stages involved further refining the dataset and training the models. Here’s a detailed description of the subsequent steps:
Dataset Splitting
Train, Validation, and Test Split: The preprocessed and augmented dataset was split into three subsets: training, validation, and testing sets. This splitting ensures that the model has data to learn from, to validate against during training, and finally to test its performance on unseen data. Here, the split was done in a ratio of 60% training, 20% validation, and 20% testing.
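Roboflow performs this split for you when generating a dataset version, but for reference, here is a hedged sketch of how a 60/20/20 split could be done manually with scikit-learn; the folder names follow the CLASSIFICATION structure described earlier.

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

# Collect image paths and labels from the CLASSIFICATION folders described above.
image_paths, labels = [], []
for label in ("Authorized", "UnAuthorized"):
    for path in Path("CLASSIFICATION", label).glob("*.jpg"):
        image_paths.append(path)
        labels.append(label)

# 60% train, then split the remaining 40% in half: 20% validation, 20% test.
train_x, temp_x, train_y, temp_y = train_test_split(
    image_paths, labels, test_size=0.4, random_state=42, stratify=labels)
val_x, test_x, val_y, test_y = train_test_split(
    temp_x, temp_y, test_size=0.5, random_state=42, stratify=temp_y)
```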
Once my dataset was ready, I could export it as shown in the image I have provided. We can then use the code we received to download the dataset.
Training the Model
After completing the preprocessing and augmentation steps in Roboflow, I proceeded with downloading the prepared dataset and training the face recognition and classification models. Here’s a detailed description of this process:
Dataset Download from Roboflow
- Generating the Download Code: In Roboflow, I utilized the platform’s feature to generate a code snippet for downloading the annotated and augmented dataset. This code snippet ensures that I have the latest and most optimized version of the dataset for training my models.
- Downloading the Dataset: By running the provided code snippet in my development environment (a sketch of what it looks like is shown below), I downloaded the preprocessed and augmented dataset, ensuring high-quality input for model training.
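As an illustration, a Roboflow download snippet generally looks like the following; the API key, workspace name, project name, version number, and export format shown here are placeholders that Roboflow fills in for you in the generated code.

```python
from roboflow import Roboflow

# Placeholder credentials and names: Roboflow generates these values for you.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov8")

print(dataset.location)  # local folder containing the images and data.yaml
```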
Training the Models
Using the downloaded dataset, I proceeded to train the face detection and classification models with the code snippets provided by Roboflow. These snippets were tailored for efficient and effective model training.
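The exact training snippets come from Roboflow, but since the project uses the YOLO framework, a training run would look roughly like the sketch below, which assumes the Ultralytics YOLOv8 package and placeholder dataset paths.

```python
from ultralytics import YOLO

# Face detection: fine-tune a small pretrained YOLO model on the Roboflow export.
# The data.yaml path comes from the downloaded detection dataset (placeholder here).
detector = YOLO("yolov8n.pt")
detector.train(data="face-detection/data.yaml", epochs=50, imgsz=640)

# Face classification: train a classification variant on the Authorized/UnAuthorized images.
# Ultralytics expects the folder to contain train/val splits of class subfolders.
classifier = YOLO("yolov8n-cls.pt")
classifier.train(data="CLASSIFICATION", epochs=50, imgsz=224)
```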
Final Connection of the Raspberry Pi and Face Recognition Models
The final step of the project involved integrating the trained models on my laptop with the Raspberry Pi to create a cohesive and functional AI-based smart home security system. Here’s a general description of this integration process:
Setting Up Communication Between Laptop and Raspberry Pi
- Networking Setup: Both the laptop and the Raspberry Pi were connected to the same local network to facilitate seamless communication. Static IP addresses were assigned to ensure stable connections.
- Socket Programming: Socket programming was used to establish a communication channel between the laptop (client) and the Raspberry Pi (server). This setup allowed real-time data transfer and control commands; a minimal sketch of that channel is shown below.
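In the sketch, the Raspberry Pi's IP address, the port number, and the command strings are assumptions for illustration; each half runs on its own machine.

```python
import socket

PI_PORT = 5000                       # placeholder port
PI_HOST = "192.168.1.50"             # hypothetical static IP of the Raspberry Pi

# --- On the Raspberry Pi (server): wait for a command from the laptop ---
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("0.0.0.0", PI_PORT))
    server.listen(1)
    conn, addr = server.accept()
    command = conn.recv(1024).decode()   # e.g. "UNLOCK" or "ALERT"
    conn.close()

# --- On the laptop (client): send the command decided by the AI models ---
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((PI_HOST, PI_PORT))
    client.sendall(b"UNLOCK")
```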
AI Client (Laptop)
Purpose:
- The primary function of this code is to run AI models for face detection and classification using the YOLO framework.
- It captures the video feed from a USB camera, processes each frame to detect and classify faces, and determines if the detected face is authorized or unauthorized.
- Depending on the classification result, it sends corresponding commands to the Raspberry Pi server to either unlock the door or send an alert.
NOTE: For alerts I used Twilio's trial SMS service. A simplified sketch of this client loop is shown below.
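Putting the client side together, a simplified main loop could look like this; the model weight paths, the Raspberry Pi address, and the "UNLOCK"/"ALERT" command strings are placeholders, and the YOLO calls assume the Ultralytics package.

```python
import socket
import cv2
from ultralytics import YOLO

detector = YOLO("runs/detect/train/weights/best.pt")      # trained face detector (placeholder path)
classifier = YOLO("runs/classify/train/weights/best.pt")  # Authorized/UnAuthorized classifier (placeholder path)

PI_HOST, PI_PORT = "192.168.1.50", 5000   # hypothetical Raspberry Pi address

cap = cv2.VideoCapture(0)                 # USB camera feed
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((PI_HOST, PI_PORT))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect faces in the frame, then classify each cropped face.
        for box in detector(frame)[0].boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            face = frame[y1:y2, x1:x2]
            result = classifier(face)[0]
            label = result.names[result.probs.top1]
            if label == "Authorized":
                client.sendall(b"UNLOCK")            # tell the Pi to open the door
            else:
                cv2.imwrite("unauthorized.jpg", face)  # keep an image for future reference
                client.sendall(b"ALERT")             # tell the Pi to raise an alert
cap.release()
```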
Raspberry Pi Server
Purpose:
- This code serves as the server that receives commands from the AI client (laptop) and performs hardware actions based on these commands.
- It controls hardware components such as a servo motor (to unlock the door), an LCD display (to show messages), and an RFID reader (for additional access control).
- Additionally, it monitors keypad input for manual password entry and sends SMS alerts for unauthorized access using Twilio; a simplified sketch of this server loop follows.
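In the sketch, the GPIO pin for the servo, the Twilio credentials, and the phone numbers are placeholders, and the LCD, RFID reader, and keypad handling are left out for brevity.

```python
import socket
import time
from gpiozero import Servo           # assumes the servo signal wire is on GPIO 17
from twilio.rest import Client

# Placeholders: replace with your Twilio trial credentials and phone numbers.
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")
OWNER_PHONE, TWILIO_PHONE = "+32400000000", "+15005550006"

servo = Servo(17)

def unlock_door():
    """Swing the servo to open the 3D-printed door, wait, then close it again."""
    servo.max()
    time.sleep(3)
    servo.min()

def send_alert():
    """Notify the homeowner by SMS about an unauthorized person."""
    twilio.messages.create(
        body="Unauthorized person detected at the front door.",
        from_=TWILIO_PHONE,
        to=OWNER_PHONE)

# Simple server loop: wait for commands from the AI client running on the laptop.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("0.0.0.0", 5000))
    server.listen(1)
    conn, addr = server.accept()
    while True:
        command = conn.recv(1024).decode().strip()
        if not command:
            break
        if command == "UNLOCK":
            unlock_door()
        elif command == "ALERT":
            send_alert()
# LCD messages, RFID checks, and keypad password entry would be handled
# in the same loop but are omitted here to keep the sketch short.
```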