Smart Cat Feeder: Automate Your Pet's Feeding With AI and Raspberry Pi

by yayazhang in Circuits > Raspberry Pi


Image_20240615225102-removebg-preview.png

In this project, you'll learn how to create a smart cat feeder that uses a trained AI model to detect different cats and dispense the right food for each cat. We'll use a Raspberry Pi connected to a stepper motor and LEDs to control the feeder. The detection process will be handled by a YOLOv8 model running on a connected laptop. The project will involve a combination of hardware design, AI model training, and software development.

Gather Materials

servomotorpin.jpg

Materials Needed:

  • Raspberry Pi 5 (with SD card and power supply)
  • Stepper motor
  • LEDs
  • External USB camera
  • Laser cutter (for cutting the feeder parts)
  • Laptop (for running the AI model and Streamlit application)
  • Project board

Estimated Cost:

  • Raspberry Pi: €149
  • Project kit (motor, sensor, LED light): €59
  • External USB camera: €15

Total: €223

Capture Data

feeding_area.png

Objective: Capture images of your cats and other pets to use as training data for your model.

  • Prepare Your Environment:
    • Set up the feeding area where you usually place your cat’s food.
    • Ensure good lighting conditions to capture clear images; use both natural and artificial light.
  • Capture Images:
    • Take multiple images of your cat(s) from different angles (front, side, top).
    • Capture images at different times of day to get variation in lighting.
    • If possible, also photograph other pets (such as dogs) to help the model differentiate.
  • Image Quality:
    • Make sure the images are clear and in focus.
    • Avoid blurry or low-resolution images, as they hurt training quality.
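To speed up collection, a short script can save webcam frames on a keypress. This is only a sketch: the camera index, key bindings, and folder-per-label layout are assumptions for illustration, not part of the original guide.

```python
"""Capture training images from a USB webcam, one keypress per shot."""
import os

def frame_filename(label: str, index: int) -> str:
    # Zero-padded names keep the dataset sorted when uploaded to Roboflow.
    return os.path.join(label, f"{label}_{index:04d}.jpg")

if __name__ == "__main__":
    import cv2  # pip install opencv-python

    label = "cat_a"            # change per pet you are photographing (hypothetical name)
    os.makedirs(label, exist_ok=True)
    cap = cv2.VideoCapture(0)  # 0 = first USB camera
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("capture (space = save, q = quit)", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(" "):
            cv2.imwrite(frame_filename(label, count), frame)
            count += 1
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Run it once per cat, changing `label` each time, so the images land pre-sorted by class.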


Prepare the Dataset

Roboflow 6 Minute Intro | Build a Coin Counter with Computer Vision
anotate_data.png

Objective: Label the images to indicate where the cats (and other pets) are located in each image.

  • Upload to Roboflow:
    • Create an account on Roboflow. (A video tutorial for Roboflow is embedded above.)
    • Create a new project and upload your images.
  • Annotate:
    • Use Roboflow's annotation tools to draw bounding boxes around the cats and other pets.
    • Give each bounding box the correct label.
    • Make sure every image is accurately labeled and saved.
  • Export Data:
    • Once all images are annotated, export the dataset in YOLO format.
    • Download the annotated dataset to your computer.

The video above is a short introduction to annotating your data with Roboflow and what the process looks like.

Train the Model

Objective: Train the YOLOv8 model using the annotated images.

Set Up Environment:

  • Ensure you have Python installed on your machine, and create a virtual environment for the project. (A virtual environment is a good choice here, since you will have both server-side and client-side code.) Below you will see how to set up the virtual environment for your project.


What is a Virtual Environment? See:
https://datascientest.com/en/python-virtualenv-your-essential-guide-to-virtual-environments

- Open the Command Palette (`View > Command Palette`).
- Search for `Python: Create Environment`.
- Select `Venv` and choose any Python 3.11 interpreter.
- When prompted, check the `RPi/requirements.txt` and confirm by pressing OK.
  • After setting up the environment, install YOLOv8 and its dependencies, since this project uses YOLOv8 to train the model:
		pip install ultralytics

Prepare Dataset:

  • Move your annotated dataset to a directory accessible by your training script. (Export your dataset with the code Roboflow gives you; for example:)
from roboflow import Roboflow

rf = Roboflow(api_key=api_key)
project = rf.workspace("label-my-cats-images").project("cat_feeder_project")
version = project.version(9)
dataset = version.download("yolov8")

Train Model:

  • Create a training script, or use the command line, to train the model to distinguish your cats.

Example code:

from ultralytics import YOLO

model = YOLO('yolov8s.pt')  # Load a pre-trained YOLOv8 model
model.train(data='path/to/your/dataset/data.yaml', epochs=50, imgsz=640)  # Train on your dataset (point 'data' at the data.yaml that Roboflow exports)


Save Model:

  • After training, save the trained model weights to a file (e.g., detect_cat_v8.pt).
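Before moving to the Pi, it is worth sanity-checking the saved weights on a single image. The snippet below is a sketch: `test_cat.jpg` is a hypothetical test image, and the helper that picks the most confident detection is an illustrative addition, not part of the original guide.

```python
"""Quick sanity check of the trained weights on one image."""

def best_detection(detections):
    # detections: list of (class_name, confidence) pairs from one frame.
    # Returns the most confident label, or None if nothing was detected.
    if not detections:
        return None
    return max(detections, key=lambda d: d[1])[0]

if __name__ == "__main__":
    from ultralytics import YOLO

    model = YOLO("detect_cat_v8.pt")          # weights saved after training
    results = model.predict("test_cat.jpg")   # hypothetical test image
    dets = [(model.names[int(b.cls)], float(b.conf))
            for b in results[0].boxes]
    print("detected:", best_detection(dets))
```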

Write Image to the Pi

rapiimage.png
rapiimage3.png
raspiimage2.png
rapiimage5.png
raspiimage4.png
fba7ed3a-195f-4366-b379-f6687aa119bd.png

Objective: Set up the Raspberry Pi with the necessary software and dependencies.

Prepare SD Card:

  • Download the latest Raspberry Pi OS from the official site.
https://www.raspberrypi.com/documentation/computers/getting-started.html

That link gives detailed steps for writing the image to the SD card for your Raspberry Pi. Once the image is written, you are ready to start on the server part.

Initial Setup:

  • Insert the SD card into the Raspberry Pi and power it on.
  • Follow the on-screen instructions to set up the Raspberry Pi, including connecting to Wi-Fi.

Install Dependencies:

  • Open a terminal and install the required libraries:
sudo apt update
sudo apt install python3-pip
pip3 install opencv-python flask requests
pip3 install ultralytics
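With the dependencies installed, the Pi can run a small command server that the laptop will later call with detection results. This is a minimal sketch under stated assumptions: the `/feed` route, port 5000, and the label-to-angle table are illustrative choices, not the project's actual API, and your class labels will differ.

```python
"""Minimal command server for the Pi: the laptop POSTs the detected
cat's name and the Pi decides which lid angle to use."""

# Hypothetical angle table; labels must match your Roboflow classes.
ANGLES = {"cat_a": 0, "cat_b": 90}

def angle_for(label):
    # Unknown labels (e.g. a visiting dog) keep the lid closed.
    return ANGLES.get(label)

if __name__ == "__main__":
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/feed", methods=["POST"])
    def feed():
        label = request.json.get("label", "")
        angle = angle_for(label)
        if angle is None:
            return {"status": "ignored"}, 200
        # ...drive the motor to `angle` here...
        return {"status": "ok", "angle": angle}, 200

    app.run(host="0.0.0.0", port=5000)
```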

Motor Connecting to Pi

servomotor.jpg
servomotorpin.jpg

Using a project board is much easier for people without an electronics background: you only need to find the right pin positions to connect.
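Once the motor is wired, a short sweep script confirms it responds. This sketch assumes a hobby servo on GPIO 18 driven through gpiozero's `AngularServo`; the pin number and 0–180° range are assumptions to adapt to your own wiring.

```python
"""Rough servo test for the feeder lid."""

def pulse_width_s(angle, min_pw=0.001, max_pw=0.002):
    # Map 0..180 degrees onto the usual 1..2 ms hobby-servo pulse width.
    return min_pw + (max_pw - min_pw) * angle / 180.0

if __name__ == "__main__":
    from time import sleep
    from gpiozero import AngularServo

    servo = AngularServo(18, min_angle=0, max_angle=180)  # GPIO 18 is an assumption
    for angle in (0, 90, 180, 0):   # sweep the lid open and closed
        servo.angle = angle
        sleep(1)
```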

Connect the Pi With Laptop

Objective: Establish a connection between your Raspberry Pi and your laptop for development and testing.

SSH into Pi:

  • Enable SSH on the Raspberry Pi through the Raspberry Pi Configuration tool.
  • Use SSH to connect to your Pi:
ssh pi@<your_pi_ip_address>

Maker Design

makerdesign.png

Objective: Design and build the physical components of the smart cat feeder.

  • Design Components:
    • Use Inkscape to create vector designs for the feeder components.
    • Design sections for the food plate, lid, and servo motor mounting.
  • Create Design:
    • Open Inkscape and set the document properties (e.g., units, size) according to your material and machine.
    • Draw the shapes and parts for the feeder mechanism. Use Inkscape's tools to ensure precise measurements and alignment.
  • Export Design:
    • Save your design as a DXF file.
  • Laser-Cut Components:
    • If you have access to a laser cutter, cut the parts based on your design.
    • Alternatively, use manual tools to build the components from available materials.
  • Assemble:
    • Assemble the feeder mechanism, ensuring that the servo motor can open and close the lid.
    • Securely attach all components.

Assemble the Cat Feeder

Image_20240615225102-removebg-preview.png
project board.jpg

Assemble the Structure:

  • Follow the design to assemble the cat feeder structure. Use glue as needed to secure the parts together.
  • Ensure all moving parts, such as the food door, move smoothly.

Install the Electronics:

  • Connect the Raspberry Pi to the project board.
  • Mount the stepper motor in place and connect it to the motor driver module.
  • Stick LEDs in their designated slots and connect them to the project board.
  • Connect the external USB camera to the Raspberry Pi.
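After wiring, a quick smoke test saves debugging later: blink each LED once and grab one camera frame. The pin numbers here are hypothetical; substitute whichever BCM pins your LEDs actually use.

```python
"""Hardware smoke test: blink the LEDs, then grab one camera frame."""

def valid_bcm(pins):
    # Raspberry Pi headers expose BCM GPIO numbers 0-27.
    return all(0 <= p <= 27 for p in pins)

if __name__ == "__main__":
    from time import sleep
    from gpiozero import LED
    import cv2

    led_pins = [17, 27]          # hypothetical LED pins
    assert valid_bcm(led_pins)
    for pin in led_pins:
        led = LED(pin)
        led.on()
        sleep(0.5)
        led.off()

    cap = cv2.VideoCapture(0)    # external USB camera
    ok, _ = cap.read()
    print("camera frame captured:", ok)
    cap.release()
```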


Combining Bounding Box With Pi

codesnip1.png
codesnip2.png

Objective: Integrate the object detection model with the Raspberry Pi to control the feeder.

  • Set Up Streamlit:
    • Build a Streamlit web application to handle the video stream and commands.
    • The code snippets above show the functions used to connect the server (Raspberry Pi) side with the client side.
  • Integrate Model:
    • Ensure the trained YOLOv8 model is loaded in your application.
    • Implement the logic to process video frames and control the servo motor and LEDs based on the detection results.
    • Once both sides are running, the servo motor turns to a specific angle based on which cat is detected in the video stream.
  • Test and Debug:
    • Test the entire setup, ensuring the feeder operates correctly when a cat is detected.
    • Debug any issues and refine the model or code as needed.
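The laptop-side loop described above can be sketched as follows. This is illustrative only: the Pi endpoint URL, the 0.6 confidence threshold, and the Streamlit layout are assumptions; the project's real client code is in the GitHub repo linked below.

```python
"""Laptop side sketch: run YOLO on each frame and, when a cat is
recognised confidently, POST its name to the Pi."""

def should_trigger(label, conf, threshold=0.6):
    # Only act on confident detections of known cats.
    return label is not None and conf >= threshold

if __name__ == "__main__":
    import cv2
    import requests
    import streamlit as st
    from ultralytics import YOLO

    PI_URL = "http://<your_pi_ip_address>:5000/feed"  # hypothetical endpoint
    model = YOLO("detect_cat_v8.pt")
    frame_box = st.empty()

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model.predict(frame, verbose=False)
        for b in results[0].boxes:
            label = model.names[int(b.cls)]
            conf = float(b.conf)
            if should_trigger(label, conf):
                requests.post(PI_URL, json={"label": label}, timeout=2)
        frame_box.image(results[0].plot(), channels="BGR")
    cap.release()
```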


Here is the link to this project's github repo