IntelliBot V1 - a Voice-Controlled Personal Assistant Robot
by curious_cat in Circuits > Raspberry Pi




Welcome to my Instructables entry! Meet IntelliBot, a voice-activated smart robot designed to assist with daily tasks like telling the time, setting reminders, playing music, or even interacting through a screen. Built using a Raspberry Pi, this robot leverages speech recognition, sensor input, and display output to provide a personal assistant experience—like Alexa or Siri—but DIY and fully customizable.
Whether you’re a student, hobbyist, or engineer, this project will walk you through building your own AI-based robot using common electronics and programming skills. Let’s dive in!
Why This Project?
As a mechanical engineering student, I quickly realized that pure mechanical knowledge isn’t enough in today’s rapidly evolving tech landscape. The future belongs to interdisciplinary skills, where mechanical systems work hand-in-hand with electronics and software. That realization inspired me to step outside the traditional boundaries of mechanical design and start learning the fundamentals of electronics—controllers, sensors, actuators like servos and displays, and the basics of embedded coding.
Starting with platforms like Arduino gave me a strong foundation, but I soon reached the limitations of basic microcontrollers when trying to implement more complex, logic-driven features. That’s where the Raspberry Pi comes into the picture. With its Linux-based environment and support for high-level programming languages like Python, it allows much more advanced functionality, such as real-time voice recognition, multitasking, and seamless hardware-software integration.
IntelliBot is the result of this journey—a stepping stone from mechanical fundamentals to smart automation using high-level logic and control. It’s not just a robot; it’s a learning platform, a personal assistant, and a showcase of what can happen when mechanical meets intelligent systems.
How This Project Helps You Learn
This project isn't strictly beginner-level, but it’s an excellent stepping stone for anyone looking to move beyond basic electronics into more advanced, real-world applications. It provides hands-on experience across a range of essential concepts that are key in robotics, automation, and intelligent systems.
You’ll get practical exposure to:
I2C communication – for connecting and managing peripherals like sensors and displays efficiently.
Displaying text and images – using both OLED and LCD modules to create a responsive user interface.
Actuator control and calibration – controlling servo motors and calibrating them to respond accurately to data inputs.
Multimodal data conversion – such as converting text to audio (TTS) and audio to text (STT), enabling real-time voice interaction.
System integration – combining mechanical, electrical, and software components into a cohesive, functional assistant.
It bridges the gap between theory and real-world application, teaching not just the “how,” but the “why” behind each step. You’ll also learn to manage timing, multitask with concurrent functions (like reminders running in the background), and handle user interaction dynamically through voice and display.
This is a great entry point into fields like embedded systems, human-machine interaction, and AI-based automation. It lays the foundation for more complex projects and helps you build the mindset needed to design and customize intelligent machines.
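For example, a background task like the hydration reminder can run on its own thread so the main loop stays free to handle voice commands. Here is a minimal Python sketch of that pattern; the names (`hydration_reminder`, `reminders`) are illustrative, not the actual project code:

```python
import threading
import time

reminders = []

def hydration_reminder(interval_s, stop_event):
    # Fire repeatedly until stop_event is set. Event.wait() returns False
    # on timeout and True once the event is set, so the loop exits cleanly.
    while not stop_event.wait(interval_s):
        reminders.append(time.time())  # in IntelliBot this would trigger a TTS reminder

stop = threading.Event()
worker = threading.Thread(target=hydration_reminder, args=(0.05, stop), daemon=True)
worker.start()
time.sleep(0.3)   # meanwhile the main thread keeps listening for voice commands
stop.set()        # a "stop" voice command would cancel the reminder loop
worker.join()
print(f"fired {len(reminders)} reminders")
```

The `stop_event` gives the voice handler a clean way to cancel the loop without killing the thread.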
Who Can Build IntelliBot?
Anyone with curiosity and a desire to learn! Whether you're a student, researcher, or hobbyist, this project is designed to help you dive into the exciting world of electromechanical systems and high-level programming integration. You don’t need to be an expert—just someone willing to explore and experiment.
This project acts as a gateway to understanding how mechanical components, electronic sensors, actuators, and software come together to form intelligent machines. With each step, you'll gain hands-on experience in areas like voice recognition, actuator control, and real-time interaction—skills that are increasingly important in both academia and industry.
What makes IntelliBot truly special is its customizability. Creativity is the only limit here—you can modify, extend, and adapt the project in endless ways. Add new features, integrate it with IoT, enhance mobility, or give it a personality. I’ll also be releasing future versions of IntelliBot with new capabilities, so stay tuned!
Let’s start building, learning, and innovating—together.
Supplies
Hardware Requirements
- Raspberry Pi (Model 3 or 4)
- MicroSD card with Raspberry Pi OS
- USB microphone
- RTC module (e.g., DS1307)
- Bluetooth speaker or 3.5mm audio jack speaker
- 0.96 inch OLED display (I2C)
- Waveshare 1.69 inch LCD screen (SPI)
- Toggle switch
- 2 × LEDs
- 2 × Resistors (220–330Ω)
- Power supply (5V, 2.5A or higher)
- Arduino (Uno, Mega, or similar)
- 3 × Servo motors, with two single servo horns and one double servo horn
- Jumper wires
- 3D printed robot parts (Body, Hands, Head) or custom parts using foam board
Software requirements
- Raspbian (Raspberry Pi OS Lite or Full version with desktop)
- Ensure I2C is enabled via sudo raspi-config
Dependent Libraries installation commands
Open terminal in Raspberry Pi and install these one by one
- sudo apt update
- sudo apt install python3-pip
- sudo apt install festival python3-serial python3-smbus libatlas-base-dev portaudio19-dev libjpeg-dev zlib1g-dev libfreetype6-dev liblcms2-dev libopenjp2-7 libtiff-dev libavformat-dev libswscale-dev
Note: I used the Waveshare 1.69 inch LCD; any LCD with the same communication protocol (SPI) can be used with the same wiring connections.
Design Files



IntelliBot V1 features a 3D-printed modular design with four main parts: head, arms, body front, and body back. The head houses the LCD display; the arms are lightweight and allow future servo-driven motion—just glue a micro servo horn into the printed arm for gesture control. The front body holds the main OLED display and optional sensors, while the back body secures the Raspberry Pi and power components with ventilation and cable access. All parts are optimized for PLA printing and easy assembly, with STL files customizable for future upgrades like mobility and image processing.
Hardware Setup


To complete the wiring for this project, connect the following components: Raspberry Pi, Arduino Mega, 0.96" OLED display (I2C), DS1307 RTC module (I2C), push-button switch (connected to GPIO16), 3 servo motors (connected to Arduino), a logic level shifter (to safely connect 5V devices like Arduino and RTC to the 3.3V Pi), and the Waveshare 1.69" LCD module (SPI). Make sure I2C and SPI interfaces are enabled on your Raspberry Pi using sudo raspi-config. Use the logic level shifter between the Pi and any 5V I2C devices to avoid damaging the Pi. For the Waveshare LCD module, follow the wiring instructions and setup steps provided in the official Waveshare reference PDF carefully to make it work properly.
Also, connect your USB microphone to any of the Raspberry Pi's USB ports.
Connect the Bluetooth or AUX-based speaker to the Raspberry Pi.
USB Microphone and Bluetooth Speaker Setup
Steps to connect the Bluetooth speaker and USB microphone
- Launch Bluetooth control tool – to manage Bluetooth interactions.
- Turn on the Bluetooth adapter – enables the Raspberry Pi’s Bluetooth.
- Enable agent and set as default – allows the Pi to handle pairing requests.
- Start scanning for nearby devices – searches for discoverable Bluetooth devices.
- Identify your speaker from the list – note the device name and corresponding MAC address.
- Stop scanning – to stop listing new devices.
- (Optional) Pair and trust the device – for secure and automatic future connections.
- Exit the tool – completes the setup process.
🟢 Part 1: Check if USB Microphone is Connected
Run the following from the terminal to list all connected audio capture (recording) devices; a USB microphone will usually appear here if it is connected properly.
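This check is commonly done with ALSA's `arecord` tool; the exact output varies by hardware, so treat the listing below as a sketch:

```shell
arecord -l
# Example output (a USB mic usually shows up as its own card):
# **** List of CAPTURE Hardware Devices ****
# card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]
#   Subdevices: 1/1
#   Subdevice #0: subdevice #0
```

Note the card and device numbers; some audio tools need them to select the microphone.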
🔎 Step-by-Step: Find Bluetooth MAC Address
- Open Terminal on your Raspberry Pi.
- Start the Bluetooth command-line tool (bluetoothctl).
- Inside the bluetoothctl prompt, power on the adapter, enable the agent, and start scanning.
🔄 Now wait 5–10 seconds. You’ll see a list of discoverable Bluetooth devices around you.
- The left part of each entry (e.g., F4:4E:FD:8D:50:98) is the MAC address; the right part (e.g., JBL GO 2) is the device name.
- Stop scanning once you find your device.
- You can optionally pair and trust the device here for automatic future connections.
- Exit the tool.
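Put together, the session might look like this (the MAC address and device name are examples):

```shell
bluetoothctl                  # start the Bluetooth command-line tool
# Inside the [bluetooth]# prompt:
power on                      # turn on the Bluetooth adapter
agent on                      # enable the pairing agent
default-agent                 # set it as the default agent
scan on                       # start scanning for nearby devices
# ... wait 5-10 seconds; discovered devices appear, e.g.:
# [NEW] Device F4:4E:FD:8D:50:98 JBL GO 2
scan off                      # stop scanning
pair F4:4E:FD:8D:50:98        # optional: pair with the speaker
trust F4:4E:FD:8D:50:98       # optional: auto-reconnect in the future
exit                          # leave bluetoothctl
```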
🟢 Part 2: Automatically Connect Bluetooth Speaker on Boot
Assume your Bluetooth speaker MAC address is F4:4E:FD:8D:50:98.
Step-by-step Terminal Setup:
- Install the required Bluetooth and audio packages, then enable and start the Bluetooth service.
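A typical package and service setup on Raspberry Pi OS looks like this (package names may vary slightly between OS releases):

```shell
sudo apt update
sudo apt install -y bluez pulseaudio pulseaudio-module-bluetooth
sudo systemctl enable bluetooth
sudo systemctl start bluetooth
```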
- Trust and pair your Bluetooth speaker (one-time setup): put your speaker in pairing mode, then pair, trust, and connect it from the Bluetooth CLI (bluetoothctl).
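With the speaker in pairing mode, the one-time pairing might look like this (using the example MAC address from above):

```shell
bluetoothctl
# Inside the [bluetooth]# prompt:
pair F4:4E:FD:8D:50:98       # pair with the speaker
trust F4:4E:FD:8D:50:98      # remember it for automatic reconnection
connect F4:4E:FD:8D:50:98    # connect now to confirm it works
exit
```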
- Create a startup script that auto-connects the speaker on boot, and make the script executable.
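One way to sketch that startup script (the script path is an example; substitute your speaker's MAC address):

```shell
# Create the script (here written with a heredoc; nano works just as well)
cat > /home/pi/bt-autoconnect.sh << 'EOF'
#!/bin/bash
# Give the Bluetooth stack time to come up after boot, then connect.
sleep 15
bluetoothctl connect F4:4E:FD:8D:50:98
EOF

# Make it executable
chmod +x /home/pi/bt-autoconnect.sh
```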
- Schedule the script to run on boot by adding an @reboot entry at the end of your crontab (crontab -e).
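The crontab entry could look like this (the script path is assumed from the previous step):

```shell
@reboot /home/pi/bt-autoconnect.sh &
```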
🟢 Final Test After Reboot
After rebooting, you can add debug prints to your script or check the speaker connection manually from the terminal.
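A manual check, assuming the example MAC address:

```shell
bluetoothctl info F4:4E:FD:8D:50:98
# In the output, look for the line:
# Connected: yes
```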
Directory or Folder Setting in Raspberry Pi
📘 Setting Up the Waveshare LCD Display for the Project
- Test the Waveshare LCD Display
  - After completing Step 2 and successfully testing your Waveshare LCD display, you will see a folder structure similar to the one shown in IMG-01.
- Locate Python Dependencies
  - Inside this folder, locate the required Python dependency files as shown in IMG-02.
- Organize Your Project Folder
  - Copy the four essential folders highlighted in IMG-03.
  - Create a new folder for your project.
  - Paste the copied folders into your newly created project folder, as demonstrated in IMG-04.
- Add Project Files
  - Download all the Python scripts I’ve provided for the assistant.
  - Paste them into the same project folder (refer to IMG-04 for the layout).
  - Collect all the GIFs and image files you want to use in your assistant.
  - Place them inside the pic/ folder within your project directory.
- Final Folder Structure
  - Once everything is copied and organized correctly, your project folder should closely resemble the layout shown in IMG-04.
Running the Program and Voice Commands
✅ Final Setup Steps
- Update File Paths in function_handler.py
  - Open the function_handler.py file in your project folder.
  - Replace the placeholder paths for the GIFs and images with your actual file locations:
    - Replace "YOUR GIF LOCATION.GIF" with the full path to your GIF file(s).
    - Replace "YOUR IMAGE LOCATION.JPG" with the full path to your image file(s).
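For example, the updated paths inside function_handler.py might look like this; the file names below are hypothetical placeholders, not files shipped with the project:

```python
# Hypothetical example paths -- substitute your own files from the pic/ folder.
GIF_PATH = "/home/pi/IntelliBot/pic/greeting.gif"
IMAGE_PATH = "/home/pi/IntelliBot/pic/face.jpg"
```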
- You're All Set!
  - Hurray! Once you’ve updated the file paths, simply run the main.py script using Python to start your voice assistant project.
- Auto-Start on Boot (Optional but Recommended)
  - To make the assistant start automatically every time your Raspberry Pi boots up, configure it to run on startup so you won’t need to launch the program manually each time.
  - To do this, open /etc/rc.local in a terminal editor (for example, with sudo nano /etc/rc.local).
  - Before the line that says exit 0, add a line that launches your main.py script with python3, ending with & so it runs in the background; use the full path to your script (e.g., /home/pi/YourProjectFolder/main.py).
  - Save and exit by pressing Ctrl + X, then Y, then Enter.
  - Your assistant will now launch automatically every time your Raspberry Pi starts up.
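Assuming your project lives at /home/pi/YourProjectFolder (adjust the path to yours), the rc.local edit might look like:

```shell
sudo nano /etc/rc.local
# Then, just above the line "exit 0", add:
python3 /home/pi/YourProjectFolder/main.py &
```

The trailing & runs the script in the background so the boot process is not blocked.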
Testing
📝 Voice Command List
🔊 Main Voice Commands (after saying: “Hey Charlie”)
- What is the time now?
- ➤ Replies with the current time from the RTC module.
- Hydration reminder
- ➤ Enters hydration reminder setup mode.
- Play mode
- ➤ Enters Play Mode to execute gesture commands.
💧 Hydration Mode Subcommands (after saying: “Hydration reminder”)
- Start
- ➤ Initiates setup for hydration reminders.
- [Number] minutes (e.g., “10 minutes”, “5 minutes”)
- ➤ Sets the reminder interval.
- Stop
- ➤ Cancels the hydration reminder loop.
🎮 Play Mode Subcommands (after saying: “Play mode”)
- Right hand up
- ➤ Commands robot to raise right hand.
- Left hand up
- ➤ Commands robot to raise left hand.
- Both hand up
- ➤ Commands robot to raise both hands.
- Shake head
- ➤ Commands robot to shake its head.
- Shake hands
- ➤ Commands robot to shake hands.
- Home
- ➤ Commands robot to return to default/home position.
- Play mode stop
- ➤ Exits Play Mode.
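Conceptually, the command handling above boils down to simple keyword matching on the recognized speech. Here is a minimal Python sketch of that idea; the function and action names are illustrative, not the actual project code:

```python
def handle_command(text):
    """Map recognized speech (after the 'Hey Charlie' wake word) to an action name."""
    text = text.lower().strip()
    if "time" in text:
        return "tell_time"        # read the current time from the RTC module
    if "hydration reminder" in text:
        return "hydration_mode"   # enter hydration reminder setup
    if "play mode" in text:
        return "play_mode"        # enter gesture-command mode
    return "unknown"              # anything else is ignored or re-prompted

print(handle_command("What is the time now?"))  # tell_time
```

In the real assistant each returned action would dispatch to the matching routine in function_handler.py.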
Conclusion
This project marks the beginning of a powerful and interactive voice-controlled assistant built with Raspberry Pi and Arduino integration. It currently supports features like real-time clock reporting, hydration reminders, and gesture-based play mode—providing both utility and fun.
However, this is just the start.
In Version 2 (V2), I plan to introduce image processing with camera feedback, as well as a companion mobile robot that can respond to commands and interact with its environment—bringing the assistant to life even further. Additional features like alarm management, personalized reminders, music playback, ChatGPT integration, and weather monitoring are also in the pipeline.
Creativity is the only limit for this project. The possibilities are vast, and with each version, the assistant will only get smarter, more interactive, and more helpful.
Stay tuned for V2—bigger, better, and even more intelligent!