AI-Powered Recipe Assistant

Ever stared at a bunch of ingredients in your fridge and had no idea what to cook? I certainly have! That’s why I created this AI-Powered Recipe Assistant (affectionately nicknamed ChefBot). This project uses computer vision to identify ingredients from a photo and then suggests yummy recipes you can make with them. It’s like having a smart sous-chef that never runs out of ideas!
What does it do? Simply snap a photo of your ingredients (or upload one), and the system will detect what food items it sees using a custom-trained YOLOv8 object detection model. Then it cross-references those ingredients with a recipe database (and even an online API) to recommend dishes you can cook. The best part is that it runs almost entirely on a single Raspberry Pi 5 with a user-friendly Streamlit web app interface. The Pi is hooked up to an LCD screen and an RGB LED for real-time feedback – so it’ll literally flash and tell you when it’s processing or when recipes are found!
Why is it useful? Aside from wowing your friends with some AI kitchen magic, this assistant can help reduce food waste by suggesting ways to use up whatever you have on hand. It’s also a fun way to learn about AI and hardware: it blends computer vision, web APIs, and a bit of electronics (LEDs, display) into one cohesive project.
In this Instructable, I’ll take you through how I built ChefBot from scratch. We’ll cover everything: setting up the Raspberry Pi and camera, training the ingredient detection model, coding the Flask and Streamlit apps, and even handling tricky parts like live video streaming and network connections. If you have basic Python and Raspberry Pi experience, you can follow along and build your own! So let’s get cooking – with AI!
(P.S. – I’ve added some extra features like the RGB module, the Pi casing and the custom branding (see the cute ChefBot logo!). These aren’t strictly necessary, but they add to the polish and fun of the project.)
Supplies
Let’s start with the ingredients for our build (pun intended). Here’s a list of all the hardware and software you’ll need for the AI Recipe Assistant. I’ve included the prices I paid – you can find these parts online or maybe in your local electronics shop. The total project cost for me was roughly 190€, but you might spend less if you have some parts already or skip optional items. You can find my bill of materials here.
Hardware & Electronics:
- Raspberry Pi 5 (8GB) – The brain of our assistant. I used the Pi 5 for its extra performance, especially since we’re dealing with image processing. (A Pi 4 can work too, but the Pi 5 makes everything snappier.) 113.18€.
- MicroSD Card (32GB or larger) – For the Raspberry Pi’s OS and storage. 9.95€.
- 5V 3A Power Supply for Raspberry Pi – Ensure your Pi has a stable supply, especially if using USB devices. 13.95€.
- USB Webcam – Used to capture ingredient images. I used a wide-angle camera module (17.99€). If you have the official Pi Camera module, you can use that instead (with some tweaks), but a USB camera is plug-and-play.
- 16x2 I2C LCD Display – For real-time messages from the Pi. Mine came from a kit (it’s a basic 16x2 character LCD with an I2C backpack, address 0x27). 8.95€.
- Common-Anode RGB LED – This tri-color LED provides visual status (red/green/blue/purple indications). Common-anode type is used so we can control colors easily with the Pi’s GPIO.
- Resistors (3x ~220 Ω) – Current-limiting resistors for the LED’s red, green, and blue pins. Prevents burning out the LED or GPIO. These often come in starter kits.
- Jumper wires and Breadboard – To connect the LCD and LED to the Raspberry Pi’s GPIO pins. Female-to-male jumper wires will connect Pi pins to the breadboard, etc.
- Ethernet cable or Wi-Fi – To network your Pi. An internet connection is needed if you want to use the recipe API (and also to connect to the model server if it runs on another PC).
Software & Data:
- Raspberry Pi OS – I used Raspberry Pi OS 64-bit on the Pi 5. Make sure to enable I2C in the settings (GPIO works out of the box).
- Python 3 + Pip – The project code is in Python. The Pi comes with Python, but you’ll install a few libraries.
- Project Code (Streamlit app & Flask server) – All the code I wrote for this project: the Streamlit UI, the Flask server for the AI model, plus some helper scripts (like one that creates the recipe database).
- YOLOv8 Model Weights – A trained YOLOv8n model for ingredient detection. You can train your own (as I did) or use pre-trained models. I trained it on images of common fruits and veggies – more on that in the next steps.
- Ultralytics YOLOv8 library – We’ll use Ultralytics’ package to run the object detection. This will be installed via pip (`ultralytics` package).
- Streamlit – The web app framework for our UI. (`pip install streamlit`).
- Flask – Web server library for hosting the model API. (`pip install flask`).
- Additional Python libs – `opencv-python` (for image processing), `gpiozero` (GPIO control for LED), `RPLCD` (for the LCD display), `requests` (for web requests), `sqlite3` (built-in, for local recipes DB), and a few others like `NumPy`, `pandas` (used in the code for managing data). We’ll install all of these in a later step.
- MJPG-Streamer – A lightweight tool to stream the USB camera feed with low latency. This lets us have a live preview in the web app. We’ll set this up on the Pi so we can grab frames on demand.
Hardware Setup – Wiring the Raspberry Pi, LCD, and LED

Time to build the hardware side of things! In this step, we’ll hook up the 16x2 LCD and the RGB LED to the Raspberry Pi’s GPIO pins, and set up our camera. Don’t worry, the wiring is fairly simple – no soldering needed, just jumper wires and a breadboard.
1. Prepare the Raspberry Pi: If you haven’t already, flash Raspberry Pi OS onto your microSD card and boot up the Pi. Enable SSH or attach a monitor/keyboard to work on it. It’s a good idea to update your Pi (sudo apt update && sudo apt upgrade) at this point. Also, enable I2C and the camera interface using raspi-config:
- Run sudo raspi-config.
- Go to Interface Options and enable I2C (this allows the Pi to communicate with the LCD screen via the I2C bus).
- If you plan to use a Raspberry Pi Camera Module (instead of a USB webcam), also enable the Camera interface here.
- Reboot the Pi after enabling these.
2. Wire up the I2C LCD: The LCD will show status messages (like the device IP address, prompts, etc.). It uses only four wires thanks to the I2C interface. Connect the LCD’s I2C backpack pins to the Pi as follows:
- VCC on LCD -> 5V pin on Pi (Pin 4 or 2 on the GPIO header). The LCD module typically runs on 5V. (Note: 5V is fine for the LCD’s power; the I2C data lines will still be 3.3V logic.)
- GND on LCD -> GND pin on Pi (any ground pin, e.g., Pin 6).
- SDA on LCD -> GPIO 2 (SDA1) on Pi (Pin 3 on header).
- SCL on LCD -> GPIO 3 (SCL1) on Pi (Pin 5 on header).
- Double-check these connections – SDA/SCL lines are right next to each other on the Pi header. With these connected, the Pi will be able to send text to the LCD. The I2C address of most 16x2 LCD backpacks is 0x27 by default (some are 0x3F). We’ll assume 0x27 in our code. If your LCD doesn’t show messages later, we may need to scan for the address or adjust it in code.
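If you’re unsure of your backpack’s address, you can scan the I2C bus from the Pi’s terminal (i2c-tools is usually preinstalled on Raspberry Pi OS; install it with apt if not):

```bash
sudo apt install -y i2c-tools   # only if i2cdetect isn't already available
sudo i2cdetect -y 1             # the LCD backpack should show up as 27 (or 3f)
```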
3. Connect the RGB LED: This LED will give us colorful visual feedback. We’re using a common-anode RGB LED, meaning the three colors share a positive voltage pin. The LED typically has four legs – the longest leg is the common anode (positive), and the other three legs are for Red, Green, and Blue (negative cathodes). Here’s how to wire it:
- Place the RGB LED on a breadboard. Connect the longest leg (common anode) to the Pi’s 3.3V rail. (Use a jumper from the LED leg to a 3.3V pin on the Pi, e.g., Pin 1). This will supply power to the LED.
- Now connect the three color legs:
- Red cathode -> GPIO 17 (Pin 11) through a resistor (~220 Ω). That means: from the LED’s red leg, put a resistor in series to a jumper wire, and connect that to Pi GPIO17.
- Green cathode -> GPIO 27 (Pin 13) through a ~220 Ω resistor.
- Blue cathode -> GPIO 22 (Pin 15) through a ~220 Ω resistor.
- The resistors can be on either side of the LED (between the LED leg and the Pi pin is fine). They limit current for each color. Without them, the LED or Pi could be damaged. (My LED module happened to have built-in resistors, so check whether yours does too.)
- Common anode to 3.3V means each GPIO will turn its color on by driving its pin LOW (0V) and turn it off by driving HIGH (3.3V). We’ll handle that in code using the gpiozero library. Just make sure you used the correct type of LED (common-anode). If you only have a common-cathode RGB LED, you’d wire common to GND and reverse the logic in code (or swap .on()/.off() calls).
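To make that inverted logic concrete, here’s a minimal gpiozero sketch assuming the wiring above (red on GPIO 17, green on GPIO 27, blue on GPIO 22, common anode on 3.3V). Passing active_high=False tells gpiozero the LED is common-anode, so it handles the inversion for you:

```python
from time import sleep
from gpiozero import RGBLED

# active_high=False -> common-anode LED: gpiozero drives a pin LOW to light that color
led = RGBLED(red=17, green=27, blue=22, active_high=False)

led.color = (1, 0, 0)   # red
sleep(1)
led.color = (0, 1, 0)   # green
sleep(1)
led.color = (0, 0, 1)   # blue
sleep(1)
led.color = (1, 0, 1)   # purple (red + blue)
sleep(1)
led.off()
```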
4. Plug in the Camera: If you’re using a USB webcam, simply plug it into one of the Pi’s USB ports. The Pi should recognize it automatically (most USB UVC cameras are plug-and-play on Linux). You can test it later with a quick program or via our app.
- If you’re using the Raspberry Pi Camera Module: Connect it to the camera CSI ribbon connector on the Pi board. Make sure it’s inserted correctly and the connector latch is closed. Since our software is geared toward USB cams (for streaming ease), using the Pi cam might require additional steps or software (e.g., using libcamera or a different streaming method). For this Instructable, I’ll assume a USB cam for simplicity.
5. Network the Pi: Connect your Raspberry Pi to your network. You can use Wi-Fi or an Ethernet cable. For initial setup, Wi-Fi is fine (just ensure you know the Pi’s IP address). If you have the option, Ethernet can be more stable, especially if you plan to run the AI model on another machine – it can simplify IP addressing and speed up image transfer. In my setup, I often connect via Ethernet when available. The Pi’s LCD will later display its IP address on the network, which is super handy for accessing the web app.
At this point, our hardware assembly is done! You should have a Raspberry Pi with an LCD screen and an RGB LED connected, plus a camera ready to capture images. It’s a neat little setup: the LCD will show status messages (like the Pi’s IP address or prompts like “Processing…”), and the LED will glow different colors (for example, green when ready, blue when capturing or processing, red on errors, etc.).
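Before moving on, it’s worth giving the new hardware a quick smoke test. Here’s a short sketch (assuming a PCF8574-based backpack at address 0x27, which is what most 16x2 I2C modules use) that prints the Pi’s IP address on the LCD, just like the finished app will:

```python
import socket
from RPLCD.i2c import CharLCD

def get_ip() -> str:
    """Find the Pi's LAN IP by opening a dummy UDP 'connection' (no packets are actually sent)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

lcd = CharLCD("PCF8574", 0x27, cols=16, rows=2)
lcd.clear()
lcd.write_string("ChefBot ready!")
lcd.crlf()                      # move to the second line
lcd.write_string(get_ip())
```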
Preparing the AI Model – Training YOLOv8 for Ingredient Detection

Now for the AI magic behind our recipe assistant: we need a model that can recognize different ingredients in an image. I chose to use YOLOv8 (You Only Look Once, version 8) – a state-of-the-art object detection model. The idea is to train YOLOv8 to detect food items like tomatoes, apples, carrots, eggs, etc., in a photo. Once it knows what ingredients are present, we can use that info to find matching recipes.
Collecting and Annotating Data: For a custom model, you need a dataset. I gathered my training images from various sources:
- I took my own photos of ingredients (from my kitchen and grocery store) to make sure the model sees real-life examples.
- I also supplemented with some public datasets: e.g., Fruits 360 (a fruits image dataset) and a Grocery Store dataset, plus a few from Roboflow’s public datasets.
- In total, I collected a few thousand images of fruits and vegetables. I made sure to have multiple examples of similar-looking items (like tomatoes vs red apples vs red bell peppers) because the model can easily confuse them if not trained well.
Once I had images, I used Roboflow (an online tool) to annotate them – basically drawing bounding boxes around each ingredient in each image and labelling it with the ingredient name. This part was time-consuming (imagine labelling hundreds of tomatoes, bananas, etc.), but it’s crucial for accuracy. I ended up with a dataset of common ingredients (about a dozen categories in my case, including tomato, apple, banana, bell pepper, onion, carrot, broccoli, cauliflower, egg, pasta, milk…). If you want to create your own dataset, Roboflow or LabelImg are good tools to help with labelling. Tip: include images from different angles, lighting, and backgrounds so the model generalizes well.
Training the YOLOv8 Model: With data in hand, I trained a YOLOv8 model on this dataset. I used the Ultralytics YOLOv8 repository – it provides a convenient way to train on custom data. I chose the YOLOv8m model variant, which is medium size and fast. Training was done on my PC with a decent GPU (training on the Pi itself would be painfully slow). It took a couple of hours of training and tweaking hyperparameters (and a few retrains when I noticed confusion between similar items). For example, at first the model confused red apples and tomatoes – to fix that, I added more images and examples of those in different contexts. Also, I noticed camera quality matters: images from a low-res webcam can be grainy, so I included some intentionally lower-quality images and used data augmentation (like blur, noise) during training. This helped the model handle different image qualities better.
In the end, I got a pretty decent ingredient detector model. It can identify things like an apple vs a tomato in an image, even if multiple items are present. If you want to train your own model, follow Ultralytics’ guide on custom training – or you can skip the hard part and use ready model weights.
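If you do train your own, the Ultralytics Python API keeps it short. Here’s a sketch of what a training run looks like; the dataset YAML path, epoch count, and batch size are placeholders you’d adjust for your own Roboflow export:

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on the custom ingredient dataset
model = YOLO("yolov8n.pt")          # or "yolov8m.pt" for the larger variant

model.train(
    data="ingredients/data.yaml",   # dataset config exported from Roboflow (paths + class names)
    epochs=100,                     # placeholder: train until validation metrics plateau
    imgsz=640,
    batch=16,
)

metrics = model.val()               # evaluate on the validation split
print(metrics.box.map50)            # mAP@0.5 as a quick sanity check
```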
Training Summary: I trained YOLOv8n for about X epochs until it reached an acceptable accuracy on my validation set. Don’t worry if you’re not an AI expert – you can use my model as-is. But it’s good to know what’s under the hood: YOLOv8 will take an image and output a list of detected ingredient names with confidence scores and bounding boxes. That’s exactly the info our recipe app needs.
Testing the Model: After training, I tested the model on some test photos (ones it hadn’t seen). I was excited to see it correctly recognized multiple ingredients! For instance, in a photo of a banana and an apple on a table, it detected both. It’s not 100% perfect (few models are), but it’s pretty reliable for our needs.
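If you want to sanity-check your own weights the same way, a quick single-image test looks roughly like this (the weights file and photo path are placeholders):

```python
from ultralytics import YOLO

model = YOLO("best.pt")                      # your trained weights
results = model("test_photos/banana_apple.jpg", conf=0.5)

for box in results[0].boxes:
    name = results[0].names[int(box.cls)]    # class index -> ingredient label
    print(f"{name}: {float(box.conf):.2f}")  # detected ingredient + confidence

results[0].show()                            # pop up the image with boxes drawn
```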
Setting Up the YOLO Model Server (Flask API)
To use our trained YOLOv8 model in the project, I created a simple Flask web server that hosts the model. The Raspberry Pi will send images to this server and get back the list of detected ingredients (and even an image with bounding boxes drawn for display). This separation is useful because running the model can be resource-intensive. You have two options here:
- Run the model server on the Raspberry Pi 5 itself. The Pi 5 is quite powerful, and if you use the small YOLO model, it can run locally (though it may still take a couple of seconds to a few minutes per image). This makes the setup self-contained but pushes the Pi to its limits.
- Run the model server on a separate machine (like a laptop/PC with a good GPU). This is the approach I took during development for faster results. The Pi sends images over the network to the PC, the PC does the heavy AI computation, and sends back the result. This way, the Pi isn’t bogged down by neural network processing.
I’ll describe the general setup which works for either case – just keep in mind where you decide to host the model server.
1. Install required libraries on the server machine: Make sure you have Python 3 and install the Ultralytics YOLOv8 package (pip install ultralytics) and Flask (pip install flask). Also, if you’re using a GPU, install the appropriate GPU support (CUDA) as needed for Ultralytics, or it will default to CPU.
2. The model server:
- Loads the YOLOv8 model weights (the best.pt we trained or downloaded).
- Starts a Flask web server on a specified host and port (we use port 5000 by default).
- Defines an endpoint (e.g., /detect) that accepts an image (we send it as a base64 string for convenience) and runs the YOLO model on it.
- Returns the detection results in JSON, including a list of detected ingredients with confidence scores, and even a “debug image” (the image with boxes drawn, encoded in base64) which we can display on the Pi’s UI.
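To make that concrete, here’s a stripped-down sketch of what such a model_server.py can look like. The endpoint name (/detect) and JSON field names (image, detections, debug_image) are my assumptions for this example; match them to whatever your Pi-side code expects:

```python
import base64

import cv2
import numpy as np
from flask import Flask, jsonify, request
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("best.pt")  # trained ingredient-detection weights

@app.route("/detect", methods=["POST"])
def detect():
    # The Pi sends the photo as a base64 string in a JSON body: {"image": "..."}
    img_b64 = request.get_json(force=True)["image"]
    img_bytes = base64.b64decode(img_b64)
    frame = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_COLOR)

    results = model(frame, conf=0.5)[0]

    detections = [
        {"name": results.names[int(box.cls)], "confidence": float(box.conf)}
        for box in results.boxes
    ]

    # Annotated image (boxes drawn) sent back for display in the Streamlit UI
    annotated = results.plot()
    _, buf = cv2.imencode(".jpg", annotated)
    debug_b64 = base64.b64encode(buf.tobytes()).decode("utf-8")

    return jsonify({"detections": detections, "debug_image": debug_b64})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```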
3. Launch the model server: On the machine that will run the model (Pi or PC), open a terminal in the directory containing model_server.py and start it with:
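(Assuming the script is named model_server.py, either of these works; the first relies on the script calling app.run() itself, as in the sketch above.)

```bash
python3 model_server.py

# or, equivalently, via the Flask CLI:
flask --app model_server run --host=0.0.0.0 --port=5000
```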
This command tells Flask to listen on all network interfaces (0.0.0.0) on port 5000. We do this so that your Raspberry Pi can connect to it over the network. If you run it on your Pi, you could also use 127.0.0.1 (localhost), but using 0.0.0.0 doesn’t hurt and allows flexibility. You should see output indicating the server started.
Security Note: This is a simple local network server. I did not implement authentication on it, so make sure you’re running it in a trusted network (your home LAN, for instance). In a school or public network, be mindful that the port is open (though someone would have to specifically know about it to use it).
4. Get the server IP address: If you run the server on a PC, find out that PC’s IP address on your LAN. For example, it might be something like 192.168.1.100. This is the address the Pi will need to send requests to. If the Pi and server PC are on the same Wi-Fi, just ensure they’re in the same subnet (usually they will be). If you connected the Pi directly via Ethernet to the PC or a router, it might have an IP like 192.168.0.x or 192.168.168.x. In my case, I sometimes used a direct Ethernet link and set static IPs (hence my code tries a default like 192.168.168.10 for convenience). But for most, using the normal LAN IP is fine.
5. Test the server (optional): You can do a quick test by opening a browser and navigating to http://<server-ip>:5000. The model_server.py might serve a simple message at the root or just a 404 if not defined – it doesn’t have a UI. But more directly, you could try sending a test via command line:
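(Assuming the /detect endpoint and the image field from the server sketch above, something like this works from a Linux terminal:)

```bash
# base64-encode a small test photo and POST it to the detection endpoint
IMG_B64=$(base64 -w 0 test.jpg)
curl -s -X POST "http://<server-ip>:5000/detect" \
     -H "Content-Type: application/json" \
     -d "{\"image\": \"${IMG_B64}\"}"
```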
This is advanced, but basically if it returns a JSON with detections, it’s working. We’ll trust it’s working and test through the Pi’s app soon.
Troubleshooting connection: One challenge I faced was how the Pi would discover the server’s IP. I initially tried using mDNS (so I could have a nice name like modelserver.local), but many networks (especially campus or corporate networks) block those protocols. I ended up implementing a fallback system in the code: the Pi’s app will try some common addresses and then, if it can’t find the server, it will ask you to input the IP and port. That way, you can manually tell it “192.168.x.y:5000”. It will remember this IP for next time (stores in a text file). This is really handy – you only need to do it once if your network is stable.
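Here’s the general shape of that fallback logic as a simplified sketch; the candidate addresses, cache file name, and timeout are just examples:

```python
from pathlib import Path

import requests

CACHE_FILE = Path("model_server_address.txt")                   # remembered across runs
CANDIDATES = ["192.168.168.10:5000", "modelserver.local:5000"]  # addresses tried first

def server_is_up(address: str) -> bool:
    """Return True if something answers HTTP at this address."""
    try:
        requests.get(f"http://{address}/", timeout=2)
        return True
    except requests.RequestException:
        return False

def find_model_server(ask_user) -> str:
    """Try the cached address, then the candidates, then ask the user."""
    tried = []
    if CACHE_FILE.exists():
        tried.append(CACHE_FILE.read_text().strip())
    tried += CANDIDATES

    for address in tried:
        if address and server_is_up(address):
            return address

    # Nothing answered: ask for "ip:port" (in the real app this is a Streamlit text box)
    address = ask_user("Model server address (ip:port): ")
    CACHE_FILE.write_text(address)
    return address

# Example usage from a plain terminal:
# address = find_model_server(input)
```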
Software Setup on the Raspberry Pi (Streamlit App & Dependencies)
Our Raspberry Pi will run the Streamlit web application that forms the user interface (UI) for the Recipe Assistant. It will also handle the hardware components (LED, LCD) and communicate with the model server and recipe API. In this step, we’ll install all necessary software on the Pi and prepare the application environment.
1. Set up your Python environment: On your Raspberry Pi, ensure Python 3.11 or newer is installed. You can verify with python3 --version. Create a new virtual environment or use Poetry for dependency management. Then install the following Python packages:
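A pip command along these lines covers the libraries used on the Pi (add ultralytics and flask only if you plan to run the model server on the Pi itself):

```bash
pip install streamlit requests opencv-python gpiozero RPLCD numpy pandas

# only needed if the YOLO model server will run on this Pi:
pip install ultralytics flask
```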
2. Prepare your application structure: In your working directory, create the following files and folders based on the descriptions in this guide:
- Streamlit_app.py: This is the main file that will contain the user interface built using Streamlit.
- componentsControl.py: Handles GPIO LED control and LCD display logic.
- create_db.py: Script that initializes a local SQLite recipe database.
- style/: A folder containing your custom style.css and font.css for theming the Streamlit app.
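To show how these pieces fit together, here’s a heavily simplified, hypothetical skeleton of Streamlit_app.py. The real app adds LED/LCD feedback, the live MJPG preview, ingredient editing, and the recipe lookup on top of this basic flow:

```python
import base64

import requests
import streamlit as st

MODEL_SERVER = "http://192.168.1.100:5000"   # placeholder; the real app discovers or asks for this

st.title("ChefBot – AI Recipe Assistant")

uploaded = st.file_uploader("Upload a photo of your ingredients", type=["jpg", "jpeg", "png"])

if uploaded is not None and st.button("Detect ingredients"):
    img_b64 = base64.b64encode(uploaded.read()).decode("utf-8")

    # Ask the model server what it sees in the photo
    resp = requests.post(f"{MODEL_SERVER}/detect", json={"image": img_b64}, timeout=30)
    result = resp.json()

    st.image(base64.b64decode(result["debug_image"]), caption="Detections")
    names = [d["name"] for d in result["detections"]]
    st.write("Detected ingredients:", ", ".join(sorted(set(names))))
```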
3. Create your local recipe database: Write a script that creates a recipes.db file. Populate it with some sample recipes that use the same ingredients your model will detect. You can run it once with:
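(Assuming the create_db.py from the file list above:)

```bash
python3 create_db.py
```

For reference, a minimal version of such a script could look like this – the table layout and sample recipe are just an example:

```python
import sqlite3

conn = sqlite3.connect("recipes.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS recipes (
           id INTEGER PRIMARY KEY,
           name TEXT NOT NULL,
           ingredients TEXT NOT NULL,   -- comma-separated ingredient names
           instructions TEXT NOT NULL
       )"""
)
conn.execute(
    "INSERT INTO recipes (name, ingredients, instructions) VALUES (?, ?, ?)",
    ("Tomato & egg scramble", "tomato,egg,onion",
     "Dice the tomato and onion, saute briefly, add beaten eggs, cook until set."),
)
conn.commit()
conn.close()
```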
4. Configure LCD and LED: Ensure you’ve wired your LCD (I2C, typically at address 0x27) and RGB LED (connected to GPIO 17, 27, and 22) as described earlier. Use gpiozero for LED and RPLCD.i2c for LCD. Enable I2C in Raspberry Pi config using:
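```bash
sudo raspi-config
```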
Then navigate to Interface Options > I2C and enable it.
5. Set up MJPG-Streamer if you want live USB camera support. This enables a near-real-time video stream that can be embedded in your Streamlit UI. MJPG-Streamer must be installed and run separately. Example command:
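(The exact paths depend on how you built or installed it; with the common jacksonliam build, the invocation looks roughly like this:)

```bash
mjpg_streamer \
  -i "input_uvc.so -d /dev/video0 -r 640x480 -f 15" \
  -o "output_http.so -p 8080 -w /usr/local/share/mjpg-streamer/www"
```

The live stream is then available at http://<pi-ip>:8080/?action=stream, and single frames can be grabbed on demand from http://<pi-ip>:8080/?action=snapshot.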
Running the AI Recipe Assistant (Time to Cook!)

With your model server running and the Raspberry Pi set up, launch your Streamlit application with:
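```bash
streamlit run Streamlit_app.py
```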
Access the app from a browser via the Pi’s IP and port 8501, e.g.:
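```
http://<your-pi-ip>:8501
```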
If the model server IP isn’t auto-detected, the app should prompt you to enter it. It will be saved locally for future sessions.
Use the interface to:
- Upload or capture an image
- Send it to the model server
- Display bounding boxes and detected ingredients
- Edit or confirm the list of detected ingredients
- Search recipes via Spoonacular API or the local SQLite DB
- View steps and ingredient lists for each suggested recipe
GPIO feedback:
- RGB LED: color-coded status (green = ready, blue = processing, red = error)
- LCD: shows IP address and process updates (e.g., “Processing Image”, “Image Captured!”)
Conclusion – Tips, Troubleshooting, and Next Steps
Troubleshooting tips:
- If the MJPEG stream doesn't load, verify the camera is connected and the streamer is running.
- If the LCD doesn't show text, check the wiring and I2C address (use i2cdetect -y 1).
- If the model server isn’t reachable, double-check IP, port, and that it’s running.
- If detection fails, lower the confidence threshold or improve lighting.
- If no recipes are found, reduce the number of input ingredients or check your DB/API connection.
Ideas for extensions:
- Build a mobile app interface or voice assistant integration.
- Generate recipes using a language model.
- Control IoT kitchen appliances based on the detected ingredients.
Final words: This AI-powered assistant is a fun and practical blend of computer vision, Raspberry Pi hardware, and creative UI. With your own implementation and creativity, you can personalize it to your needs and continue building new features.
Happy hacking and bon appétit!