VigilDrive - AI Powered Driver Alert System

by MatteoDiCelmo in Circuits > Assistive Tech




Hello everyone!

I'm thrilled to walk you through building my first AI safety project as a first-year student in the Creative Tech and AI programme at Howest University in Kortrijk. I built VigilDrive, a driver fatigue detector powered by a self-trained YOLO model.

The setup includes a small webcam that watches the driver's face during each journey. A trained network analyses facial expressions and sorts the driver's face into Awake, Tired, or Asleep. Once nodding or closed eyes persist for over three seconds, a Raspberry Pi sounds a buzzer, turns on a red LED, and activates a vibration motor. The driver's status also lights up on a tiny LCD.

I'm excited to show you how I put VigilDrive together! It was my first real AI project, and I picked up tons of lessons about wiring, code, and making AI models and hardware work together.

If you want to build something similar, here's what you should already know:

Decent comfort with Python, basic knowledge of machine learning and computer vision, and a working knowledge of the Raspberry Pi and its GPIO pins.

Let's check it out!



Supplies

To create VigilDrive, I brought together a mix of hardware, software, and design components. Here's a breakdown of everything I used:

Electronics & Hardware

  1. Raspberry Pi 5 - 8 GB - Starter Pack (2023): it handles all inputs and outputs.
  2. Active cooler for the Raspberry Pi 5
  3. Web Camera: can be mounted either on the car dashboard or on the windshield.
  4. 1 Push Button: you only need one button to start and stop the program.
  5. Resistor for the button: a resistor soldered to the button wire.
  6. Vibration motor: triggered when the driver is tired or asleep
  7. FREENOVE PROJECT KIT, which includes:
  8. 16x2 I2C LCD Display – shows the prediction output or suggests that the driver rest.
  9. RGB LED (with PWM control) – Gives visual feedback: green for awake class, yellow for tired class, and red for asleep class.
  10. Wires – for connecting all components to the Raspberry Pi.
  11. Passive Buzzer: sound alarm when the asleep class is detected.
  12. 1.5 m spare cable: used to extend the vibration motor cable so it can be attached to a bracelet.

Physical Construction

  1. Laser-cut MDF 3mm Box: Custom designed using MakerCase and Adobe Illustrator.
  2. Camera Mount: I used a car phone holder and glued the camera on it
  3. Magnets: I attached 4 magnets to both the box and its lid so it can be opened and closed easily.
  4. Car phone-holder: used to hold the camera.

Software & AI

  1. Python: the programming language used to control the hardware and run the state detection logic, written in Visual Studio Code.
  2. HTML: used to build a web user interface.
  3. OpenCV: library for capturing and processing images from the camera.
  4. Trained Fatigue Detection Model: I first trained a YOLOv11 model on Roboflow with a custom dataset, then retrained the model locally on my laptop.
  5. Custom LCD, RGB LED, vibration motor, button, buzzer classes – To simplify and organize the code.
  6. Shapr3D – Used for designing the physical box; you can use any software you are comfortable with.
  7. MakerCase: Used to design the faces of the box.
  8. Adobe Illustrator: used to add holes for the hardware and cables.


Attachments

Here you can find the bill of materials, where you can better visualise what components I used and how much they cost.

I ended up spending about €263, but you could try a simpler version with just a laptop, a camera, and a few basic components!

Gathering Data


For this project the first step was collecting data. I started by taking pictures of myself and my friends, either in a car or in different setups, to make my model more robust, and then added many pictures I found in other public datasets on Roboflow. Each picture is taken from a different angle and in different lighting conditions. These images were then uploaded to Roboflow, filtered to remove those that did not match the style and format of the dataset, hand-labelled, and finally split into training, validation, and testing sets.

Labeling Data


Now let's dive into the labelling process!

In order to train my VigilDrive AI to identify the signs of fatigue, I had to manually label a lot of images showing drivers in different states. This was an important step because the model relies on these labels to learn what "Awake", "Tired", and "Asleep" look like.

I used Roboflow as my labeling tool. I gathered all the images in Roboflow and created three classes: Awake, Tired, Asleep. Then I went through each image one at a time and labeled it based on the person's facial expression, how open their eyes were, head position, and mouth opening (yawning). This part took quite a bit of focus, because during labelling it's crucial to draw bounding boxes consistently, so I tried to be as methodical as possible.

At times, labeling can seem fairly modest work, but it made me realize that all machine learning projects rely on it: the more thoughtfully I labeled each image, the more reliably the model identified real driver behavior.

Model Training


After completing the labelling of my images, I began training a first version of the model directly on Roboflow using YOLOv11.

I incorporated data augmentation into my training process to enhance my model's performance. This simply meant creating variations of my original images: changing the brightness, rotating, flipping, or zooming in slightly. These little alterations helped my model recognize signs of fatigue across different lighting conditions and angles, making it more accurate and robust in real scenarios.
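To get a feel for what these augmentations do, here is a minimal sketch of two of them (a brightness shift and a horizontal flip) applied to an image represented as a plain 2D list of pixel intensities. Roboflow handled this for me automatically, so the function below is only an illustration of the idea, not part of my actual pipeline.

```python
def augment(image, brightness_delta=0, flip_horizontal=False):
    """Apply two simple augmentations to a grayscale image given as a
    2D list of 0-255 pixel intensities.

    Illustrative sketch only; in practice a tool like Roboflow (or a
    library such as OpenCV) performs these transforms for you.
    """
    # Horizontal flip: reverse each row of pixels.
    rows = [row[::-1] if flip_horizontal else row[:] for row in image]
    # Brightness shift: add a constant, clamped to the valid 0-255 range.
    return [[max(0, min(255, p + brightness_delta)) for p in row] for row in rows]
```

Each augmented copy keeps its original label, so the model ends up seeing the same "Tired" face under many lighting conditions and orientations.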

I trained the model many times until I was satisfied with its performance. Once I had these results, I downloaded the dataset and trained my model locally using YOLOv8 in VS Code. I created a Python environment, installed the Ultralytics library, and trained a custom model that I could then integrate into my project.
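For reference, the dataset description that Ultralytics expects (the `data.yaml` that Roboflow generates on export) looks roughly like this; the paths below are placeholders for wherever your export lands, and the class order may differ in your own export:

```yaml
# data.yaml - dataset description for Ultralytics YOLO training
# (illustrative sketch; Roboflow writes this file for you on export)
train: ../train/images
val: ../valid/images
test: ../test/images

nc: 3                                # number of classes
names: ["Asleep", "Awake", "Tired"]  # class index -> label
```

With this file in place, training boils down to loading a pretrained checkpoint with the Ultralytics `YOLO` class and calling its `train` method, pointing it at `data.yaml`.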


Coding


After my model was trained, I started to build the actual code that would connect everything, from AI predictions to real-world actions on the Raspberry Pi. I wrote the main script in Python on my laptop. First, I loaded my model using the Ultralytics library and linked it to a webcam feed. The script actively scanned each frame, predicting whether the driver was Awake, Tired, or Asleep.
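One detail worth sketching is how each frame's detections get reduced to a single driver state. The helper below assumes detections arrive as `(class_id, confidence)` pairs, which is roughly what you can read out of an Ultralytics result's `boxes`; the function name and the fallback state are my own choices, not the exact production script.

```python
def classify_frame(detections, class_names, fallback="Awake"):
    """Reduce one frame's YOLO detections to a single driver state.

    `detections` is a list of (class_id, confidence) pairs. If the
    model sees nothing (e.g. the face is out of frame), we fall back
    to a default state instead of crashing.
    """
    if not detections:
        return fallback
    # Keep only the highest-confidence detection for this frame.
    best_id, _ = max(detections, key=lambda d: d[1])
    return class_names[best_id]
```

Picking the single highest-confidence box keeps the downstream alert logic simple: it only ever has to reason about one state per frame.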

I then sent the predictions to the Raspberry Pi through a WebSocket, which triggered the output devices connected to the GPIO pins:

  1. If the model predicted "Asleep" for a duration of three seconds, I triggered the passive buzzer, the vibration motor, and the red LED on the Pi.
  2. If the model predicted the driver was yawning (Tired class), I illuminated the yellow LED and notified the driver through a vibration.
  3. If the model predicted the driver was Awake, I illuminated the green LED.
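The three rules above can be captured in a small state machine that tracks how long the current prediction has persisted. The sketch below is my own reconstruction, not the exact project code: it keeps the hardware calls out, returning a dict of actions instead, so the timing logic can be tested on its own; the injectable clock is there purely for testability.

```python
import time

class AlertStateMachine:
    """Track the predicted driver state and decide which outputs to fire.

    Mirrors the behaviour described above: the "Asleep" alarm only
    triggers once that state has persisted for three seconds.
    (Illustrative sketch; class names and threshold from the article.)
    """

    ASLEEP_HOLD_S = 3.0  # how long "Asleep" must persist before alarming

    def __init__(self, now=time.monotonic):
        self._now = now  # injectable clock, handy for testing
        self._state = "Awake"
        self._since = self._now()

    def update(self, predicted_state):
        """Feed in the latest prediction; return the actions to perform."""
        if predicted_state != self._state:
            # State changed: restart the persistence timer.
            self._state = predicted_state
            self._since = self._now()

        held = self._now() - self._since
        if self._state == "Asleep" and held >= self.ASLEEP_HOLD_S:
            return {"led": "red", "buzzer": True, "vibrate": True}
        if self._state == "Tired":
            return {"led": "yellow", "buzzer": False, "vibrate": True}
        return {"led": "green", "buzzer": False, "vibrate": False}
```

On the Pi, the returned dict would then be mapped onto the actual GPIO calls for the LED, buzzer, and vibration motor.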

I also wrote a separate LCD display class with the smbus2 library, so the current state of the user could be displayed live on the screen.
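The smbus2 side of the LCD class is hardware-specific, but one part worth showing is how the state gets formatted for a 16x2 display: each line should be padded to the full width so leftover characters from the previous message are cleared. The messages below are illustrative, not necessarily the exact ones on my display.

```python
def lcd_lines(state, line_width=16):
    """Format the driver state into two fixed-width lines for a 16x2 LCD.

    Padding each line to the full display width overwrites any stale
    characters left by the previous, possibly longer, message.
    (Message texts here are illustrative placeholders.)
    """
    messages = {
        "Awake": ("Status: Awake", "Drive safely!"),
        "Tired": ("Status: Tired", "Take a break?"),
        "Asleep": ("WAKE UP!", "Pull over now!"),
    }
    top, bottom = messages.get(state, ("Status: ?", ""))
    # Truncate then pad so each line is exactly `line_width` characters.
    return (top[:line_width].ljust(line_width),
            bottom[:line_width].ljust(line_width))
```

The LCD class then only has to push these two fixed-width strings to the display over I2C whenever the state changes.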

Design


For VigilDrive, I wanted the design to function well, but also to be practical for real use in the car.

I first made a model in Shapr3D because it's a program I'm comfortable with. I then made a box on MakerCase, exported it as an .svg file, added holes in Adobe Illustrator, and finally sent it to a laser cutter.

I constructed the enclosure from 3mm MDF because it is strong enough, yet light and easy to work with. To make it portable and stable, I used a plastic cup as a base so the device could neatly slide into a car cupholder: this prevents slippage, is safe for the vehicle, and keeps the device from being too large for the holder. I placed the LCD on the front panel so the driver can easily read the current state of awareness, and left little cutouts for the LED, buzzer, button, and wires. For the camera I simply bought a car phone holder and attached the camera to it. Finally, for the vibration motor I combined it with an elastic to make a vibrating bracelet.

By this point, the overall design was compact, practical and set to be used in real driving case scenarios.

Assembly


Once I had all of the finished components, I started the process of putting the physical build of VigilDrive together.

I began by gluing the MDF box together using Pattex glue, then let it dry until I was confident the box was stable. I painted the box black to give it a sleek, professional appearance. For the base of VigilDrive I used a plastic cup so the entire product could sit in a car's cup holder; before painting the cup, I sanded its surface to ensure good paint adhesion. Just like the MDF box, I painted the cup black for a uniform design. Once the paint on both the cup and box had dried, I glued the MDF box onto the cup to form a compact, car-friendly product. Finally, I mounted the Raspberry Pi, wired up the LCD, LEDs, buzzer, the button with its soldered resistor, and the vibration motor, and made sure everything worked accordingly.

Time to Drive Safe!

The entire process of building VigilDrive has been a ton of learning as I pulled together AI, coding, hardware and design into one coherent project. As a first-year student, I never imagined I’d actually be able to build something that feels so real and practical.

If you're considering building this project, don't be afraid. One of the cool things about VigilDrive is that you can completely modify what I made and make it your own! Swap out the enclosure, add voice alerts, change the mounting system. VigilDrive gives you the flexibility to customize it based on your creativity. Lastly, if you're thinking about building a similar project, don't hesitate to contact me with any questions or if you want to know more about how I designed it. I can't wait to see what other people build!