AutonRoboCar: Self-Driving RC Car Project

by jdaggett in Workshop > Electric Vehicles



car.png

AutonRoboCar is a final project for a university control systems course, in which I set out to build a miniature, low-cost self-driving car using off-the-shelf components, two webcams, and a Raspberry Pi 5.

The project explores key areas of robotics, including electronics, path planning, computer vision, and motion control.

AutonRoboCar features:

  1. A stereo vision camera system to estimate depth and detect obstacles
  2. A* path planning to compute safe routes through an obstacle-filled field
  3. PID control for smooth trajectory following
  4. A hacked RC car remote, controlled via digital potentiometers, to simulate analog throttle and steering inputs

All processing is performed directly onboard using a Raspberry Pi, making the system compact, self-contained, and extensible for future development.

Supplies

The full Bill of Materials can be found here:

https://docs.google.com/spreadsheets/d/1ycFNvBeD0YGGGTk2chqilqS91Ez1bylCXsrjYhg-F00/edit?usp=sharing

Electronics Setup

wiring.png
IMG_7749.jpeg

Before running any code or controlling the car, I needed to wire and power all the onboard electronics.

Goals of this step:

  1. Disassemble the RC car and remote
  2. Power the Raspberry Pi and peripherals
  3. Connect the Pi to digital potentiometers and route them to the RC transmitter

Disassembling and Rewiring the Transmitter

I started by disassembling the RC remote, which contains the transmitter PCB (a small L-shaped board with the power button still attached). This board originally connected to the steering and throttle potentiometers inside the remote. I carefully disconnected those wires and soldered new jumper wires from the transmitter PCB to a breadboard, which allowed me to interface with digital potentiometers controlled by the Raspberry Pi.

Powering the Raspberry Pi

Initially, I attempted to use an 11.1V LiPo battery regulated by a UBEC to provide 5V to the Pi. However, I ran into issues wiring the UBEC cleanly and safely. I switched to an 8V ELEGOO battery pack, which worked for powering the Pi alone but couldn’t handle the additional load from the potentiometers. For future builds, I recommend a 25W USB-C power bank (5V, 5A) to reliably power the Pi and peripherals.

Circuit Notes

The wiring diagram I created includes GPIO numbers for each digital potentiometer connection. Note that these are in the BCM numbering convention, not physical BOARD pin numbers. I routed all connections through a breadboard to make testing and rerouting easier.
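
As a quick illustration of the BCM convention, here is a minimal sketch using gpiozero (which addresses pins by BCM/GPIO number); BCM 17 below is just a placeholder, not necessarily one of the pins from my diagram:

from gpiozero import DigitalOutputDevice

# gpiozero uses BCM (GPIO) numbering, matching the wiring diagram convention.
# BCM 17 is a placeholder here, standing in for one potentiometer control line.
pot_line = DigitalOutputDevice(17)
pot_line.on()
pot_line.off()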

Testing and Validation

After wiring everything, I powered on the Pi and used a multimeter to verify voltage at critical points. I learned firsthand how important stable, well-soldered connections are—especially when working with the small pins on digital potentiometers, where even slight looseness can cause voltage dropouts or erratic behavior.

Building the Car

IMG_7757.jpg

With the electronics wired and tested, the next step was to physically mount everything onto the RC car in a secure and accessible way.

Goals of this step:

  1. Mount the Raspberry Pi, cameras, and breadboard to the chassis
  2. Route power and signal cables neatly
  3. Ensure the car is balanced and components are protected

Mounting Strategy

To build the onboard rig, I used a single acrylic mounting plate and zip ties to create a platform above the chassis. This platform housed:

  1. The Raspberry Pi 5
  2. The breadboard with potentiometer wiring
  3. The RC transmitter module

Acrylic was easy to cut and drill, allowing for a lightweight and adaptable mounting solution. Most components were secured using zip ties, drilled mounting holes, and an adhesive strip for the breadboard.

Camera Placement

Stereo vision requires fixed camera separation (baseline), so it’s important to mount the two Logitech C270 webcams at a consistent distance. I first removed the monitor mounts from the webcams, then aligned and fixed them horizontally to the front bumper to mimic human binocular vision. The USB-A cables were routed underneath the chassis and around to the Pi, secured tightly with tape and zip ties for added stability. Ensuring the cameras are level and firmly attached is essential for reliable stereo disparity estimation.

Final Assembly

After mounting all components:

  1. I verified that the steering mechanism still had full range of motion
  2. I ensured the car’s balance wasn’t overly front- or back-heavy
  3. I checked that the added weight didn’t compress the car’s suspension

Vision System Setup

visresult.png
vision.png

To enable obstacle detection and distance estimation, I implemented a stereo vision system using two Logitech C270 webcams and OpenCV.

Goals of this step:

  1. Display live video feeds from both cameras
  2. Detect a target object using color/brightness thresholds
  3. Calculate depth to the object using stereo disparity

Camera Configuration

I used two Logitech C270 HD webcams, each running at 1280×720 resolution. At this point the cameras were in a controlled setup on a table, approximately 12.5 cm apart (the baseline), and not yet mounted to the RC car; the same step works on the car as long as the baseline value is updated to match the new camera spacing.
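
As a starting point, here is a minimal sketch that opens both webcams with OpenCV at 1280×720 and shows the live feeds; the device indices (0 and 2 here) are assumptions and depend on how the cameras enumerate on your system:

import cv2

# Device indices depend on how the two webcams enumerate over USB.
cam_left = cv2.VideoCapture(0)
cam_right = cv2.VideoCapture(2)

for cam in (cam_left, cam_right):
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok_l, frame_l = cam_left.read()
    ok_r, frame_r = cam_right.read()
    if not (ok_l and ok_r):
        break
    cv2.imshow("left", frame_l)
    cv2.imshow("right", frame_r)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cam_left.release()
cam_right.release()
cv2.destroyAllWindows()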

Object Detection

To simplify object detection, I tested with a black sheet of paper taped to a white wall 100 cm from the center of the cameras. I tuned a grayscale intensity threshold (~35) to isolate the dark object from the bright background using basic image processing:

  1. Convert each frame to grayscale
  2. Apply a binary threshold (inverse) to detect dark regions
  3. Find contours and draw bounding boxes on the object

This allowed for clear detection and tracking of the black object in both camera views with a green box labeled “object detected.”
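
Here is a minimal sketch of that detection step for a single frame, assuming OpenCV 4; the threshold value matches the ~35 I settled on:

import cv2

def detect_dark_object(frame, thresh=35):
    """Draw a bounding box around the largest dark region in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Inverse binary threshold: pixels darker than `thresh` become white (255).
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "object detected", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame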

Depth Estimation with Stereo Vision

To estimate the distance to the object:

  1. I computed the disparity map using OpenCV’s StereoSGBM_create() method
  2. Disparity was taken as the pixel difference in the object’s position between the two camera views
  3. The focal length (4.0 mm, converted to pixel units for the calculation) and the 12.5 cm baseline were plugged into the standard depth formula: Distance (cm) = (focal length × baseline) / disparity
In testing, the disparity maps were noisy where texture was low, so I added a small white cross to the black paper to improve feature matching. The resulting distance was a few centimeters off (~106 cm for paper 100 cm away), likely due to poor lighting during testing and rough measurements. The code for this experiment can be found in my GitHub repo.
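
For reference, here is a minimal sketch of the depth calculation with StereoSGBM. The focal length has to be expressed in pixels for the formula's units to work out, so FOCAL_LENGTH_PX below is a placeholder you would get from camera calibration (e.g., cv2.calibrateCamera):

import cv2
import numpy as np

BASELINE_CM = 12.5        # measured camera separation
FOCAL_LENGTH_PX = 900.0   # placeholder: obtain from camera calibration

def object_disparity(gray_left, gray_right):
    """Median disparity (in pixels) over the detected dark object."""
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    # StereoSGBM returns fixed-point disparity scaled by 16.
    disparity = stereo.compute(gray_left, gray_right).astype(np.float32) / 16.0
    # Restrict to the dark object, reusing the same threshold as detection.
    _, mask = cv2.threshold(gray_left, 35, 255, cv2.THRESH_BINARY_INV)
    values = disparity[(mask > 0) & (disparity > 0)]
    return float(np.median(values)) if values.size else None

def depth_cm(disparity_px):
    # Pinhole relation: Distance = focal length (px) * baseline (cm) / disparity (px)
    return FOCAL_LENGTH_PX * BASELINE_CM / disparity_px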

Path Planning

pathplan.gif

With the hardware and vision system ready, the next step was to teach the robot how to plan a safe route from its starting point to a goal — avoiding obstacles along the way. For this project, I used A* search, a popular and efficient graph-based algorithm for finding optimal paths in grid-based maps.

Goals of this Step:

  1. Convert the real-world workspace into a 2D grid configuration space
  2. Set up obstacles in the environment
  3. Implement the A* search algorithm
  4. Extract and visualize the final path in simulation

Map and Configuration Space

To simulate the robot’s surroundings, I created a 72×54 grid (matching the real environment’s dimensions in inches) where:

  1. 1 represents free space
  2. 0 represents an obstacle

To account for the robot’s size, I expanded each obstacle slightly using padding so that the path planner keeps enough clearance. The padded map is the configuration space, which lets me treat the robot as a point agent in the code and greatly reduces implementation complexity. A minimal padding sketch is shown below.
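
This is a rough sketch of the padding step, assuming a SciPy-based dilation (the padding amount depends on the robot's footprint in grid cells):

import numpy as np
from scipy.ndimage import binary_dilation

def pad_obstacles(grid, padding_cells=2):
    """Expand obstacles (0s) outward so the planner keeps clearance.

    `grid` is a 2D array where 1 = free space and 0 = obstacle;
    `padding_cells` is roughly half the robot's width in grid cells.
    """
    grid = np.asarray(grid)
    obstacles = (grid == 0)
    # Grow the obstacle mask outward `padding_cells` times.
    padded = binary_dilation(obstacles, iterations=padding_cells)
    return np.where(padded, 0, 1)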

A* Search Implementation

I implemented A* in Python with:

  1. L2 (Euclidean) heuristic for smoother paths
  2. 8-point connectivity, allowing diagonal movement
  3. Obstacle padding to simulate robot width
  4. Intermediate point removal to simplify the path

The path visualization includes:

  1. Start position: green square
  2. Goal position: red square
  3. Planned path: blue line
  4. Obstacles: black squares/walls
  5. Robot: red arrow moving along the path

The A* path planning, visualization, and environment setup code can all be found in my GitHub repo.
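
For reference, here is a simplified sketch of the core search (L2 heuristic, 8-point connectivity); the full version in the repo also handles obstacle padding and intermediate point removal:

import heapq
import math

MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def astar(grid, start, goal):
    """A* over a 2D grid (1 = free, 0 = obstacle) with 8-point connectivity."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    open_set = [(heuristic(start, goal), 0.0, start)]
    came_from, g_score = {}, {start: 0.0}
    while open_set:
        _, g, current = heapq.heappop(open_set)
        if g > g_score.get(current, float("inf")):
            continue  # stale queue entry
        if current == goal:
            # Walk back through came_from to recover the path.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in MOVES:
            nbr = (current[0] + dr, current[1] + dc)
            if not (0 <= nbr[0] < rows and 0 <= nbr[1] < cols):
                continue
            if grid[nbr[0]][nbr[1]] == 0:
                continue  # obstacle
            tentative = g + math.hypot(dr, dc)
            if tentative < g_score.get(nbr, float("inf")):
                came_from[nbr] = current
                g_score[nbr] = tentative
                heapq.heappush(open_set, (tentative + heuristic(nbr, goal), tentative, nbr))
    return None  # no path found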

Control

control.png

At this stage of the project, I had built the robot, wired up the electronics, and implemented, debugged, and tested the path planning and vision systems in simulation. However, I ran into many challenges early on, particularly with the electronics and hardware, areas where I have little to no experience. I re-soldered the potentiometers multiple times but kept getting poor, inconsistent multimeter readings. After many hours of debugging, and with final exams to focus on before graduation, I decided it would be best to drop the goal of real-world navigation, focus on simulation, and document my process so that I could create a detailed guide for future steps.

This step describes the planned control system, based on all the components I built and tested. It offers guidance to anyone who would like to continue this project and bring the robot’s movement to life.

Control Loop Overview

The complete control loop is designed to take:

  1. A known map with obstacles
  2. A start and goal position
  3. Live camera data
  4. Live position estimates

...and generate motor control signals to navigate safely and autonomously.

How It All Works Together (Module-by-Module)

Here’s the intended flow of information:

Vision Module

Takes in live stereo camera frames and estimates:

  1. Obstacle distances using disparity
  2. Bounding boxes of detected objects

This module feeds new obstacles to the path planner in real time.

Position Module

Uses estimated robot velocity and orientation over time to integrate position.

This feeds real-time position to the PID controller and the path planner.
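
A minimal dead-reckoning sketch of what this module could look like; the speed and yaw-rate inputs are assumptions and would come from whatever motion model or sensors end up being used:

import math

class PositionEstimator:
    """Dead-reckoning pose estimate; inputs are assumed speed and yaw rate."""

    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    def update(self, speed, yaw_rate, dt):
        # Integrate heading first, then position, over the timestep dt.
        self.heading += yaw_rate * dt
        self.x += speed * math.cos(self.heading) * dt
        self.y += speed * math.sin(self.heading) * dt
        return self.x, self.y, self.heading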

Path Planner

Uses A* or Hybrid A* to generate a path from the current position to the goal.

When new obstacles appear (from vision), it replans a new path.

PID Controller

Compares the robot’s actual position and heading (from position module) to the next point in the path.

It outputs directional movement commands (e.g., "turn 20° right and move forward").
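
Here is a minimal PID sketch for tracking the heading error; the gains are placeholders and would need tuning (see Phase 2 below):

class PID:
    """Simple PID controller; the gains used below are placeholders, not tuned values."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: steer toward the next waypoint using the heading error.
heading_pid = PID(kp=1.0, ki=0.0, kd=0.1)
# steering_command = heading_pid.update(desired_heading - current_heading, dt=0.05)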

Movement Controller

Converts the directional commands from the PID into analog voltage values for throttle and steering, matching the expected voltage range (0–3V).

Digital Potentiometers (X9C103s)

Adjust the resistance to output voltages that simulate the analog signals the RC transmitter expects.
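
Here is a rough sketch of how one X9C103 could be stepped from the Pi with gpiozero and mapped to the 0–3 V range; the BCM pin numbers are placeholders, and the wiper position is only tracked in software:

import time
from gpiozero import DigitalOutputDevice

class X9C103:
    """Rough driver for one X9C103 digital potentiometer (100 wiper positions)."""

    def __init__(self, inc_pin, ud_pin, cs_pin):
        # BCM pin numbers; match them to your own wiring diagram.
        self.inc = DigitalOutputDevice(inc_pin, initial_value=True)
        self.ud = DigitalOutputDevice(ud_pin)
        self.cs = DigitalOutputDevice(cs_pin, initial_value=True)
        self.position = 0  # tracked in software only

    def _step(self, steps, up):
        self.ud.value = up    # high = increment, low = decrement
        self.cs.off()         # select the chip (CS is active low)
        for _ in range(steps):
            self.inc.off()    # wiper moves on the falling edge of INC
            time.sleep(0.000005)
            self.inc.on()
            time.sleep(0.000005)
        self.cs.on()
        self.position += steps if up else -steps

    def set_fraction(self, fraction):
        """Move the wiper to a fraction (0.0-1.0) of full scale."""
        target = int(max(0.0, min(1.0, fraction)) * 99)
        delta = target - self.position
        if delta:
            self._step(abs(delta), up=delta > 0)

# Example (placeholder pins): target roughly 1.5 V out of the 0-3 V range.
# throttle_pot = X9C103(inc_pin=17, ud_pin=27, cs_pin=22)
# throttle_pot.set_fraction(1.5 / 3.0)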

Transmitter (Remote Control)

The adjusted voltages simulate manual stick movement, which is wirelessly sent to the RC car.

RC Car (Receiver)

Interprets the wireless signal and drives the motors to move and steer.
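
To tie it together, here is a high-level sketch of the intended loop; every object and method name is a placeholder for the modules described above, not working code from the repo:

import time

def control_loop(planner, vision, position, pid, movement, goal, dt=0.05):
    """Placeholder glue loop: module objects and their methods are illustrative names."""
    pose = position.current_pose()
    path = planner.plan(pose, goal)
    while not planner.at_goal(pose, goal):
        obstacles = vision.detect_obstacles()        # stereo depth -> new obstacles
        if obstacles:
            planner.add_obstacles(obstacles)
            path = planner.plan(pose, goal)          # replan around them
        pose = position.current_pose()               # dead-reckoned estimate
        steering, throttle = pid.track(pose, path)   # follow the next waypoint
        movement.apply(steering, throttle)           # drive the digital pots / transmitter
        time.sleep(dt)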

Building and Tuning

If I'd had more time, I would have followed this development plan:

  1. Phase 1: Log potentiometer voltages and motor behavior to understand how voltages map to speed/steering.
  2. Phase 2: Tune PID controllers in the simulation to ensure smooth path following.
  3. Phase 3: Replace simulated movement controller with real digital potentiometer output.
  4. Phase 4: Test with short, straight-line paths before adding obstacles and turns.
  5. Phase 5: Integrate live stereo vision disparity into the planning loop.

Suggestions for Improvement

  1. Failsafe Voltage Defaults: With no potentiometer signal, the RC car's behavior is dangerous because the transmitter input floats to the pull-up voltage. Consider using MOSFETs or relays to cut the signal during power loss.
  2. PID Tuning in Hardware: Real-world control will likely need different gains than in simulation. Use logging to debug over/undershoot.
  3. Power Stability: Ensure the Pi and pots get clean power. I recommend a high-output USB-C power bank as a simpler alternative to a larger, voltage-regulated battery.
  4. Pose Estimation: Position tracking is very hard with an RC car like this because, without sensors, you have to rely on trial-and-error modeling of how control inputs map to motion. Adding wheel encoders or using visual odometry would likely give more reliable position tracking.

Status

While this control loop is not yet implemented in hardware, I have:

  1. Designed and tested every module independently
  2. Written simulation code for planning and control
  3. Built the hardware and vision systems

All code can be found in my GitHub repo.