Building an Automated Fault Detection System for Production Lines Using Raspberry Pi and Machine Learning

by yashpindersaini


Build a smart QA system with Raspberry Pi & ML! Capture, train, and detect product flaws on a budget, automating quality control with ease.

Supplies


Raspberry Pi 5 (any model with a camera interface will also work)

Raspberry Pi Camera Module (any USB camera will also work)

Conveyor belt (optional, or use any object for product simulation)

Stepper motor (optional, if using a conveyor belt)

Ultrasonic sensor (to detect a passing product and trigger image capture)

Power supply for the Raspberry Pi

MicroSD card with Raspberry Pi OS

Lighting setup (for consistent image quality)

Hardware Setup

Set up the Raspberry Pi: Insert the MicroSD card with Raspberry Pi OS, connect the Raspberry Pi Camera Module to the camera port, and then connect the power supply.

Set up the Ultrasonic Sensor: Mount the ultrasonic sensor upstream of the camera so it detects each product before it enters the frame; the sensor reading triggers the image capture. If an Arduino is controlling the conveyor belt, connect the sensor to the Arduino instead, and establish communication (e.g., serial, I2C, or GPIO) between the Arduino and the Raspberry Pi. This lets the Raspberry Pi know when a product has been detected so it can trigger the camera.
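To make that trigger concrete, here is a minimal sketch of the Raspberry Pi side of the link, assuming the Arduino prints the line "CAPTURE" over USB serial at 9600 baud whenever the sensor fires. The port name, baud rate, and message text are all assumptions to adjust, and you will need pyserial (pip install pyserial):

# pi_trigger_listener.py -- minimal sketch of the Raspberry Pi side.
# Assumes the Arduino prints the line "CAPTURE" over USB serial (9600 baud)
# whenever the ultrasonic sensor detects a product. The port name is an
# assumption; check yours with `ls /dev/ttyACM*` or `ls /dev/ttyUSB*`.
import serial
from picamera2 import Picamera2

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

count = 0
while True:
    line = ser.readline().decode(errors="ignore").strip()
    if line == "CAPTURE":
        count += 1
        picam2.capture_file(f"product_{count:04d}.jpg")
        print(f"Captured product_{count:04d}.jpg")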

Camera Positioning: Mount the camera in a fixed position that captures clear images of the product passing through (adjust the angle for optimal viewing).

Optional Conveyor Belt: If using a conveyor belt, mount it on a stable surface and connect the stepper motor to the Raspberry Pi to control movement. Alternatively, you can use a microcontroller (like an Arduino) or connect the Raspberry Pi to the existing controller of the conveyor system to manage the motor and movement. This can help offload the motor control task from the Raspberry Pi, freeing it up for other processing duties.
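If you do drive the stepper directly from the Raspberry Pi, a rough sketch of the idea looks like the following. It assumes a step/dir driver board (e.g., an A4988) with STEP on GPIO 21 and DIR on GPIO 20; both pin choices are assumptions to adjust to your wiring. The gpiozero library ships with Raspberry Pi OS and works on the Pi 5:

# stepper_sketch.py -- a rough sketch, not a drop-in driver.
# Assumes a step/dir stepper driver (e.g., A4988) with STEP on GPIO 21 and
# DIR on GPIO 20 (BCM numbering); adjust both to match your wiring.
from time import sleep
from gpiozero import DigitalOutputDevice

step_pin = DigitalOutputDevice(21)
dir_pin = DigitalOutputDevice(20)

def advance(steps, forward=True, delay=0.002):
    """Pulse the STEP pin; each pulse moves the motor one step."""
    dir_pin.value = forward
    for _ in range(steps):
        step_pin.on()
        sleep(delay)
        step_pin.off()
        sleep(delay)

advance(200)  # one revolution on a typical 1.8-degree/step motor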

Background Setup: Use a white background or a white sheet behind the product for consistent lighting and better image contrast. This helps the camera distinguish the product from the background, improving the accuracy of defect detection.

Capturing Images Using Picamera2 for Training

Before running the script, ensure that Picamera2 is installed on your Raspberry Pi. You can install it using the following commands:

sudo apt update

sudo apt install python3-picamera2

Next, download the provided Python script to capture images. You can adjust the resolution and the number of images according to your requirements. Additionally, modify the script to save images into two separate folders: one for images of correct (non-defective) products and one for images of incorrect (defective) products.
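For reference, a minimal sketch of what such a capture script can look like is shown below; the folder layout, resolution, and image count are assumptions you can change, and the actual downloadable script may differ:

# capture_dataset.py -- a minimal capture-script sketch. Run it once with
# the label "correct" and once with "incorrect"; folder names, resolution,
# and image count are all adjustable assumptions.
import os
import sys
import time
from picamera2 import Picamera2

label = sys.argv[1] if len(sys.argv) > 1 else "correct"  # "correct" or "incorrect"
num_images = 50
out_dir = os.path.join("dataset", label)
os.makedirs(out_dir, exist_ok=True)

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(main={"size": (640, 480)}))
picam2.start()
time.sleep(2)  # let exposure and white balance settle

for i in range(num_images):
    picam2.capture_file(os.path.join(out_dir, f"{label}_{i:03d}.jpg"))
    time.sleep(1)  # pause so you can reposition the product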

Once you have captured the images, divide the dataset into two parts: one for training and the other for testing the model. Recommended ratios for the split are 70/30 or 80/20, with the larger portion used for training and the smaller portion for testing.
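One simple way to perform that split (80/20 here; the dataset/ layout matches the capture sketch above and is an assumption) is:

# split_dataset.py -- one way to do the 80/20 split described above.
import os
import random
import shutil

random.seed(42)  # reproducible shuffle
for label in ("correct", "incorrect"):
    files = sorted(os.listdir(os.path.join("dataset", label)))
    random.shuffle(files)
    cut = int(0.8 * len(files))  # 80% train, 20% test
    for subset, names in (("train", files[:cut]), ("test", files[cut:])):
        dest = os.path.join(subset, label)
        os.makedirs(dest, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join("dataset", label, name),
                        os.path.join(dest, name))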


Setting Up the CNN Model for Image Classification

1. Install Required Libraries

Before proceeding, ensure that TensorFlow is installed. If you haven't installed it yet, you can do so with the following command:

pip install tensorflow numpy Pillow

Note: It is highly recommended to perform these steps in a virtual environment rather than installing them globally.

2. Download and Update the Script

Next, download the provided Python script. Once downloaded, you need to modify the script by adding the locations of your training and testing image directories. Also, specify the location where you want to save the trained model file.

Here are the necessary updates:

  1. Training Directory (train_dir): Set the path where you have saved your training images.
  2. Validation Directory (validation_dir): Set the path where you have saved your testing images.
  3. Model Save Path (model_save_path): Specify the path where you would like the trained model to be saved.

After making these updates, you can proceed to run the script to build, train, and save your convolutional neural network (CNN) model.
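For orientation, here is a compact sketch of the kind of CNN such a script builds with TensorFlow/Keras. The exact architecture and hyperparameters in the provided script may differ; the three paths at the top correspond to the settings listed above:

# train_cnn.py -- a compact sketch of a binary defect-classification CNN;
# the downloadable script's architecture may differ.
import tensorflow as tf

train_dir = "train"            # your training images, one subfolder per class
validation_dir = "test"        # your testing images
model_save_path = "defect_model.keras"

img_size = (180, 180)
train_ds = tf.keras.utils.image_dataset_from_directory(
    train_dir, image_size=img_size, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    validation_dir, image_size=img_size, batch_size=16)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=img_size + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: correct vs incorrect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save(model_save_path)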


Testing

Now that the model is created, you can test its performance before putting it into action. To do so, capture a new batch of images with a known label, i.e., a set in which every image belongs to the same category, either "correct" or "incorrect", so you know what the model should predict. Use the capture code provided earlier, then feed these images into the model for testing.

After running the code, you will receive output with labels for "correct" and "incorrect" predictions. First, count the number of accurate predictions and the total number of predictions. Then, use the following formula to calculate the accuracy of the model:

Accuracy = (Number of Correct Predictions / Total Number of Predictions) × 100%

If the accuracy does not meet your desired standards, consider retraining the model with a larger dataset and improved lighting conditions.
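A short sketch of this evaluation loop, assuming the saved model from the training step and a folder of new images that all share one known label (both assumptions), might look like this:

# evaluate_model.py -- a sketch of the accuracy check described above.
import os
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("defect_model.keras")
test_dir = "new_captures/correct"  # every image here shares one known label
true_label = 0  # image_dataset_from_directory orders classes alphabetically,
                # so "correct" = 0 and "incorrect" = 1; verify with class_names

correct_predictions = 0
files = [f for f in os.listdir(test_dir) if f.endswith(".jpg")]
for name in files:
    img = tf.keras.utils.load_img(os.path.join(test_dir, name),
                                  target_size=(180, 180))
    arr = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    predicted = int(model.predict(arr, verbose=0)[0][0] > 0.5)
    if predicted == true_label:
        correct_predictions += 1

accuracy = correct_predictions / len(files) * 100
print(f"Accuracy: {accuracy:.1f}% ({correct_predictions}/{len(files)})")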


Microcontroller Deployment

For this step, I am using HTTP communication between the Raspberry Pi and the Arduino. You can use other communication methods if they suit your needs better. If you are using a Wi-Fi-enabled board, you can use the provided code.

The Arduino code detects an object, waits one second, then runs the motor until the product reaches the image-capture position. Once the product is in place, the code sends an HTTP command to the Raspberry Pi to capture an image.
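On the Raspberry Pi side, a minimal sketch of an HTTP endpoint the Arduino could call is shown below; it assumes Flask (pip install flask), and the route name and port are assumptions:

# capture_server.py -- a minimal sketch of the Raspberry Pi side of the
# HTTP link. The Arduino would send a GET request to
# http://<pi-ip>:5000/capture; route name and port are assumptions.
import time
from flask import Flask
from picamera2 import Picamera2

app = Flask(__name__)
picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

@app.route("/capture")
def capture():
    filename = f"capture_{int(time.time())}.jpg"
    picam2.capture_file(filename)  # evaluation of the image can also happen here
    return f"saved {filename}", 200

app.run(host="0.0.0.0", port=5000)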


Final Deployment

Download the provided code, update the file paths to your own, and then execute it. You will see a user interface with a button that captures and evaluates an image. The interface also displays a running percentage showing the accuracy of the production line (the share of products passing inspection).

If you are connected to the microcontroller, images will be captured automatically, and the evaluation will take place without needing to manually trigger the capture.
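As a rough outline of what such an interface can look like (using Tkinter; the downloadable script may be organized differently, and the model path and class encoding are assumptions):

# ui_sketch.py -- a rough outline of the manual-trigger interface.
import tkinter as tk
import numpy as np
import tensorflow as tf
from picamera2 import Picamera2

model = tf.keras.models.load_model("defect_model.keras")
picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(main={"size": (640, 480)}))
picam2.start()

total = 0
passed = 0

def evaluate():
    global total, passed
    picam2.capture_file("latest.jpg")
    img = tf.keras.utils.load_img("latest.jpg", target_size=(180, 180))
    arr = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    is_correct = model.predict(arr, verbose=0)[0][0] < 0.5  # class 0 = "correct"
    total += 1
    passed += int(is_correct)
    result_var.set(f"{'CORRECT' if is_correct else 'INCORRECT'} - "
                   f"line accuracy: {passed / total * 100:.1f}%")

root = tk.Tk()
root.title("Fault Detection")
result_var = tk.StringVar(value="Press Evaluate to inspect a product")
tk.Button(root, text="Evaluate", command=evaluate).pack(padx=20, pady=10)
tk.Label(root, textvariable=result_var).pack(padx=20, pady=10)
root.mainloop()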


Acknowledgments

I would like to express my heartfelt thanks to Dr. Amit Pandey and Mr. Inderpreet Singh for their invaluable guidance and support throughout this project. Their insights and encouragement greatly contributed to its success.