Rpibot - About Learning Robotics

by makerobotics in Circuits > Robots


Rpibot - About Learning Robotics

IMG_20201025_105224.jpg

I am an embedded software engineer in a German automotive company. I started this project as a learning platform for embedded systems. The project was cancelled early but I enjoyed it so much that I continued in my free time. This is the result...

I had the following requirements:

  • Simple hardware (focus is the software)
  • Cheap hardware (about 100€)
  • Expandable (some options are already part of the description)
  • Supply voltage for all components from single 5V source (powerbank)

There was not really a goal apart from learning. The platform can be used for learning, surveillance, robotics contests, ...

This is not a beginner tutorial. You need some basic knowledge about:

  • Programming (Python)
  • Basic electronics (to connect modules together with the right voltages)
  • Basic control theory (PID)

Finally, you will probably face problems, as I did. With some curiosity and endurance, you will get through the project and solve the challenges. My code is as simple as possible, and the critical code lines are commented to give hints.

The complete source code and files are available here: https://github.com/makerobotics/RPIbot

Supplies

Mechanics

  • 1x Plywood board (A4 size, 4 mm thick)
  • 3x M4 x 80 screws and nuts
  • 2x Gear motors with a secondary output shaft for the encoder, plus wheels
  • 1x Free wheel
  • 1x Pan and tilt camera mounting (optional)

Electronics

  • 1x Raspberry Pi Zero with header and camera
  • 1x PCA9685 servo controller board
  • 2x Optical encoder wheel and circuit
  • 1x Set of female jumper wires
  • 1x USB powerbank
  • 1x DRV8833 dual motor driver
  • 2x Micro servos SG90 for camera pan and tilt (optional)
  • 1x MPU9250 IMU (optional)
  • 1x HC-SR04 ultrasonic distance sensor (optional)
  • 1x Perforated board, solder wire, headers, ...

Build the Chassis

IMG_20201025_105942.jpg
IMG_20201025_105917.jpg
IMG_20201025_105426.jpg
IMG_20201025_105359.jpg
IMG_20201025_105440.jpg
IMG_20201025_105532.jpg

I am not a good mechanical designer. Also, the project's goal is not to spend too much time on the chassis. Anyway, I defined the following requirements:

  • Cheap materials
  • Fast assembly and disassembly
  • Expandable (e.g. space for added sensors)
  • Light materials to save energy for the electronics

An easy and cheap chassis can be made of plywood. It is easy to machine with a fretsaw and a hand drill. You can glue on small wooden parts to create the mounts for the sensors and motors.

Think about replacing defective components or about electrical debugging. The main parts should be fixed with screws so they are replaceable.
A hot glue gun may be simple, but it is probably not the best way to build a chassis...
I needed a lot of time to come up with a concept that lets the parts be disassembled easily. 3D printing is a good alternative, but it can be quite expensive or time consuming.

The free wheel I finally chose is very light and easy to mount. The alternatives were all heavy or had too much friction (I tried a couple of them before finding the final one).
I only had to cut a wooden spacer to level the tail free wheel after mounting the main wheels.

Wheel properties (for software calculations)

Circumference: 21.5 cm
Pulses: 20 pulses/rev.
Resolution: 1.075 cm (so 1 pulse is about 1 cm, which is convenient for software calculations)
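
The conversion from encoder pulses to travelled distance is then just a multiplication. Here is a minimal sketch with illustrative names (they are not taken from the project code):

WHEEL_CIRCUMFERENCE_CM = 21.5
PULSES_PER_REV = 20
CM_PER_PULSE = WHEEL_CIRCUMFERENCE_CM / PULSES_PER_REV  # 1.075 cm per pulse

def pulses_to_distance_cm(pulse_count):
    # travelled distance in cm for a given number of encoder pulses
    return pulse_count * CM_PER_PULSE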

Electronics and Wiring

RPIbot_Hardware_V2.png
rpibot_image_top.jpg
IMG_20201025_105338.jpg
IMG_20201025_105507.jpg
IMG_20201025_105742.jpg

The project uses different modules, as shown in the diagram.

The Raspberry Pi Zero is the main controller. It reads the sensors and controls the motors with PWM signals. It is connected to a remote PC over WiFi.

The DRV8833 is a dual motor H-bridge. It provides sufficient current to the motors (which the Raspberry Pi can't, as its outputs can only deliver a few mA).

The optical encoders provide a square wave signal each time light passes through the encoder wheels. We will use the hardware interrupts of the Raspberry Pi to be notified each time the signal toggles.
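
As an illustration of how such an interrupt can be registered, here is a minimal sketch with RPi.GPIO. The pin number is an assumption; check the connection table for the real wiring:

import RPi.GPIO as GPIO

ENCODER_LEFT_PIN = 17          # example BCM pin, not necessarily the one used in RPIbot
left_pulse_count = 0

def on_left_encoder_edge(channel):
    # called by the GPIO interrupt on every rising or falling edge of the encoder signal
    global left_pulse_count
    left_pulse_count += 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(ENCODER_LEFT_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(ENCODER_LEFT_PIN, GPIO.BOTH, callback=on_left_encoder_edge)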

The PCA9685 is a servo control board. It communicates over an I2C serial bus. This board provides the PWM signals and the supply voltage that control the servos for the pan and tilt of the camera.

The MPU9250 is a 3-axis acceleration, 3-axis angular rate, and 3-axis magnetic flux sensor. We will use it mainly to get the compass heading.

The different modules are all connected by jumper wires. A breadboard acts as a dispatcher, providing the supply voltages (5 V and 3.3 V) and grounds. The connections are all described in the connection table (see attachment). Connecting 5 V to a 3.3 V input will probably destroy your chip. Take care and check all your wiring twice before powering up (here, especially the encoders have to be considered).
You should measure the main supply voltages on the dispatch board with a multimeter before connecting all the boards. The modules were fixed to the chassis with nylon screws. Here too, I was happy to have them fixed but also removable in case of malfunction.

In the end, the only soldering was the motors, the breadboard and the headers. To be honest, I like jumper wires, but they can lead to loose connections. In some situations, software monitoring may help you analyze the connections.

Software Infrastructure

Screenshot from 2020-10-26 18-44-52.png
closeLoop50.png

After finishing the mechanics, we will set up some software infrastructure to get comfortable development conditions.

Git

This is a free and open source version control system. It is used to manage large projects like Linux, but it can also easily be used for small projects (see GitHub and Bitbucket).

The project changes can be tracked locally and also pushed to a remote server to share software with the community.

The main commands used are:

git clone https://github.com/makerobotics/RPIbot.git [Get the source code and git configuration]

git pull origin master [get the latest from the remote repository]

git status [get the status of the local repository. Are there any files changed?]
git log [get the list of commits]
git add . [add all changed files to the stage to be considered for the next commit]
git commit -m "comment for commit" [commit the changes to the local repository]
git push origin master [push all the commits to the remote repository]

Logging

Python provides built-in logging functions. The software structure should define the whole logging framework before further development starts.

The logger can be configured to log with a defined format to the terminal or to a log file. In our example, the logger is configured by the webserver class, but we could also do it ourselves. Here we only set the logging level to DEBUG:

logger = logging.getLogger(__name__)

logger.setLevel(logging.DEBUG)
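
If you want to configure the logger yourself instead, a minimal sketch could look like this (the file name and format string are only examples, not the project's actual configuration):

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
    handlers=[logging.StreamHandler(), logging.FileHandler("rpibot.log")],
)
logger = logging.getLogger(__name__)
logger.debug("Logging is configured")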

Measurement and plotting

To analyze signals over time, the best way is to plot them in a chart. As the Raspberry Pi only has a console terminal, we will trace the data to a semicolon-separated CSV file and plot it on the remote PC.

The semicolon-separated trace file is generated by our main Python code and must have a header like this:

timestamp;yawCorr;encoderR;I_L;odoDistance;ax;encoderL;I_R;yaw;eSpeedR;eSpeedL;pwmL;speedL;CycleTimeControl;wz;pwmR;speedR;Iyaw;hdg;m_y;m_x;eYaw;cycleTimeSense;
1603466959.65;0;0;25;0.0;-0.02685546875;0;25;0;25;25;52;0.0;23;0.221252441406;16;0.0;0;252.069366413;-5.19555664062;-16.0563964844;0;6;
1603466959.71;0;0;50;0.0;0.29150390625;0;50;0;25;25;55;0.0;57;-8.53729248047;53;0.0;0;253.562118111;-5.04602050781;-17.1031494141;0;6;
1603466959.76;0;-1;75;0.0;-0.188232421875;1;75;2;25;25;57;0;52;-24.1851806641;55;0;0;251.433794171;-5.64416503906;-16.8040771484;2;7;

The first column contains the timestamp. The following columns are free. The plotting script is called with a list of the columns to be plotted:

remote@pc:~/python rpibot_plotter -f trace.csv -p speedL,speedR,pwmL,pwmR

The plot script is available in the tool folder:
https://github.com/makerobotics/RPIbot/tree/master/t...

The plotter uses matplotlib in Python. You must copy it to your PC.
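
If you prefer, the same kind of plot can be reproduced with a few lines of pandas and matplotlib. This is only an illustration of what the plotter does, not the project's rpibot_plotter itself:

import pandas as pd
import matplotlib.pyplot as plt

trace = pd.read_csv("trace.csv", sep=";")        # semicolon separated trace file
for column in ["speedL", "speedR", "pwmL", "pwmR"]:
    plt.plot(trace["timestamp"], trace[column], label=column)
plt.xlabel("timestamp [s]")
plt.legend()
plt.show()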

For more comfort, the Python script is called by a bash script (plot.sh), which copies the trace file from the Raspberry Pi to the remote PC and calls the plotter with a signal selection.
The bash script "plot.sh" asks whether the file has to be copied; this was more convenient for me than copying it manually each time.
"sshpass" is used to copy the file from the Raspberry Pi to the remote PC via scp. It can copy a file without asking for the password (it is passed as a parameter).

Finally, a window opens with the plot, as shown in the picture.

Remote communication

The development interface to the Raspberry Pi is SSH. Files can be edited directly on the target or copied by scp.

To control the robot, a web server is running on the Pi, providing control via Websockets. This interface is described in the next step.

Setup the Raspberry Pi

There is a file describing the setup of the Raspberry Pi in the "doc" folder of the source code (setup_rpi.txt). There are not many explanations but many useful commands and links.

The User Interface

Screenshot from 2020-10-24 18-01-42.png

We use the lightweight Tornado web server to host the user interface. It is a Python module which we call when we start the robot control software.

Software architecture

The user interface is built from the following files:
gui.html [describes the web page controls and layout]
gui.js [contains the JavaScript code to handle the controls and open a websocket connection to our robot]
gui.css [contains the styles of the HTML controls; the positions of the controls are defined here]

The websocket communication

The user interface is not the coolest, but it does the job. I focused here on technologies that were new to me, like websockets.

The web page communicates with the robot web server by websockets. This is a bidirectional communication channel which stays open once the connection is initiated. We send the robot's commands via websocket to the Raspberry Pi and get information (speed, position, camera stream) back for display.
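
To give an idea of how such a websocket endpoint looks on the server side, here is a minimal Tornado sketch. The handler path, port and message handling are assumptions for illustration, not the project's actual server code:

import tornado.ioloop
import tornado.web
import tornado.websocket

class RobotWebSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        print("Client connected")

    def on_message(self, message):
        # the real robot would parse commands like "SET;Kp;1.0" here and trigger the control object
        print("Received command:", message)
        self.write_message("ACK;" + message)

    def on_close(self):
        print("Client disconnected")

app = tornado.web.Application([(r"/ws", RobotWebSocket)])
app.listen(8080)
tornado.ioloop.IOLoop.current().start()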

The interface layout

The user interface has a manual input for the commands. This was used at the beginning to send commands to the robot.
A checkbox turns the camera stream on and off. The two sliders control the camera pan and tilt.
The top right part of the user interface controls the robot's movement. You can set the speed and target distance. The basic telemetry information is displayed in the robot drawing.

Programming the Robot Platform

rpibot SW architecture.png
IMG_20201028_133540.jpg
openLoop50.png
closeLoop50.png
closeLoop_P_I_control.png

This part was the main goal of the project. I refactored a lot of the software when I introduced the new chassis with the DC motors.
I used Python as the programming language for different reasons:

  • It is the Raspberry Pi's main language
  • It is a high level language with many built-in features and extensions
  • It is object oriented but can also be used for sequential programming
  • No compilation or tool chain is necessary. Edit the code and run it.

Main software architecture

The software is object oriented and divided into a few objects. My idea was to split the code into 3 functional blocks:

Sense --> Think --> Actuate

Sense.py

Main sensor acquisition and processing. The data is stored in a dictionary to be used by the following stage.

Control.py

An actuation subclass controls the motors and servos behind some abstraction.
The main Control object handles the high level commands and also the control algorithms (PID) for the motors.

rpibot.py

This main object manages the Tornado web server and instantiates the sense and control classes in separate threads.

Each module can be run alone or as part of the whole project. You can run only the sensing and print out the sensor information to check that the sensors are connected correctly and deliver the right information.
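
The overall structure can be pictured with a minimal sketch like the following. The loop names and contents are illustrative; the real implementation lives in Sense.py, Control.py and rpibot.py:

import threading
import time

sensor_data = {}                  # shared dictionary filled by the sense thread

def sense_loop():
    while True:
        # read the real sensors here and store the values in the dictionary
        sensor_data["encoderL"] = 0
        sensor_data["yaw"] = 0.0
        time.sleep(0.05)

def control_loop():
    while True:
        # use the latest sensor values to compute and output the motor commands
        time.sleep(0.05)

threading.Thread(target=sense_loop, daemon=True).start()
threading.Thread(target=control_loop, daemon=True).start()

# in the real project the Tornado web server runs here instead of this idle loop
while True:
    time.sleep(1)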

The PID control

The first task is to find out what we want to control.
I started by trying to control the position, which was very complex and did not help much.

Finally, we want to control each wheel's speed and also the robot's direction. To do that, we have to cascade two control loops.

To increase the complexity step by step, the robot should be controlled:

open loop (with a constant power)

pwm = K

then add the closed loop algorithm

pwm = Kp * speedError + Ki * Integral(speedError)

and finally add the direction control as a last step.
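
A minimal sketch of this closed loop step, assuming a fixed cycle time and illustrative gains (not the tuned project parameters), could look like this:

Kp = 1.0
Ki = 0.5
integral = 0.0

def speed_control(target_speed, measured_speed, dt):
    # PI controller on the speed error, returning a PWM value clamped to 0..100
    global integral
    error = target_speed - measured_speed
    integral += error * dt
    pwm = Kp * error + Ki * integral
    return max(0, min(100, pwm))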

For the speed control, I used a "PI" controller and "P" only for the yaw. I set the parameters manually by experimenting; probably much better parameters could be found here. My target was just a straight line, and I almost got it. I created an interface in the software to write some variables via the user interface. Setting the parameter Kp to 1.0 needs the following command in the user interface:

SET;Kp;1.0

I could set the P parameter just low enough to avoid any overshoot. The remaining error is corrected by the I parameter (integrated error).

It was difficult for me to find out how to cascade both controllers. The solution is simple, but I tried many other ways before...
So finally, I change the speed targets of the wheels to turn in one or the other direction. Changing the speed control output directly was an error, as the speed control then tried to remove this perturbation.
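
A minimal sketch of this cascade could look like this (the yaw gain and the sign convention are illustrative):

Kp_yaw = 0.1

def direction_control(base_speed, yaw_target, yaw_measured):
    # P controller on the yaw error that shifts the speed targets of both wheels
    yaw_error = yaw_target - yaw_measured
    correction = Kp_yaw * yaw_error
    return base_speed + correction, base_speed - correction   # (target left, target right)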

The control diagram used is attached. It shows only the left side of the robot control.

The Sensor Calibrations

Screenshot from 2020-10-28 11-50-50.png
calibrated_IMU.png
Calibrated_IMU.png

The first thing to consider is that the whole IMU has to work properly. I ordered 3 parts and sent them back until I had a fully working sensor. Each previous sensor had some parts not working properly, or not at all.
I used some example scripts to test the basics before mounting it in the robot.

The IMU sensor signals need to be calibrated before use. Some sensor signals depend on the mounting angle and position.

The acceleration and rotation speed calibrations

The easiest calibration is for the longitudinal acceleration (A_x). At standstill it should be around 0 m/s². If you rotate the sensor properly, you can measure gravity (around 9.8 m/s²). To calibrate A_x, you just have to mount it properly and then define the offset that gives 0 m/s² at standstill. Now A_x is calibrated.
You can get the offsets for the rotation speeds in a similar way at standstill.
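
A minimal sketch of such an offset calibration (read_sample stands for any single sensor read function; it is a placeholder, not a function of the project):

def calibrate_offset(read_sample, samples=500):
    # average a number of readings at standstill and return the offset
    total = 0.0
    for _ in range(samples):
        total += read_sample()
    return total / samples

# a_x_offset = calibrate_offset(read_accel_x)
# calibrated value: a_x = read_accel_x() - a_x_offset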

The magnetometer calibration for the compass

A more complex calibration is necessary for the magnetic field sensors. We will use m_x and m_y to get the magnetic field in the horizontal plane. Having m_x and m_y gives us the opportunity to calculate a compass heading.

For our simple purpose, we will only calibrate the hard iron deviation. This must be performed with the sensor in its final position, as it depends on local magnetic field perturbations.

We record m_x and m_y while we turn the robot around the z-axis. We plot m_x vs. m_y in an XY chart. The result is an ellipse, as shown in the picture. The ellipse has to be centered on the origin. Here we take the maximum and minimum values of m_x and m_y to get the offsets in both directions. Finally, we check the calibration and see that the ellipse is now centered.
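
A minimal sketch of this hard iron correction (mx_samples and my_samples are lists recorded while turning the robot around the z-axis):

def hard_iron_offsets(mx_samples, my_samples):
    # offsets needed to centre the recorded m_x/m_y ellipse on the origin
    offset_x = (max(mx_samples) + min(mx_samples)) / 2.0
    offset_y = (max(my_samples) + min(my_samples)) / 2.0
    return offset_x, offset_y

# corrected values: m_x - offset_x, m_y - offset_y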

Soft iron calibration would mean changing the picture from an ellipse to a circle. This can be done by applying a factor to each sensor value.

A test routine can now be coded to recalibrate, or at least to check that the sensors are still calibrated.

The compass heading

The magnetometer data will now be used to calculate the compass heading. For this, we have to convert the m_x and m_y signals into an angle. Python directly provides the math.atan2 function for this. The complete calculation is defined in the mpu9250_i2c.py file ("calcHeading(mx, my, mz)").
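
A minimal sketch of the heading calculation (the axis convention and sign may differ from the project's calcHeading implementation):

import math

def compass_heading_deg(m_x, m_y):
    # heading in degrees (0..360) from the calibrated horizontal field components
    heading = math.degrees(math.atan2(m_y, m_x))
    return heading % 360.0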

Alternative Designs

rpibot_image_side.jpg
RPIbot_Hardware.jpg
IMG_20201028_133300.jpg
rpibot_image.jpg
rpibot_image_top.jpg

The project took a lot of time, as the design was completely open. For each component I made some prototype implementations and experienced the limits of the system.

The most complex topic was the wheel encoder. I tested 3 different options before finding the currently used optical encoder. I think the abandoned solutions are also very interesting in such a project. They concern the parts where I learned the most.

Continuous rotation servos connected to the PCA9685

To avoid an additional H-bridge for a DC motor, I first started with continuous rotation servos. These were driven by the already present PCA9685 servo driver. The whole propulsion mechanics and the corresponding electronics were much simpler. This design had two drawbacks:

  • The poor control range of the servos.
  • The missing encoder mounting location

The servos start moving at 50% PWM and reach full speed at about 55%. This is a very poor control range.

Without an encoder mount, it was very difficult to find a ready-to-go encoder. I tested 3 different reflectance encoders which were mounted on the chassis. I taped a self-made encoder wheel with black and white sections to the outside of the wheel. I used the QTR-1RC sensors, which need a lot of signal processing to get the right signal. The Raspberry Pi was not able to perform that kind of real time processing, so I decided to add a NodeMCU D1 mini as a real time controller to the robot. It was connected to the Raspberry Pi by the serial UART to deliver the processed sensor data. The NodeMCU also managed the HC-SR04 sensor.
The mechanics were difficult and not very robust, and the serial line was picking up noise from the I2C line and the motors, so I finally built the second version of the chassis with simple geared DC motors driven by an H-bridge. These motors have a secondary output shaft for mounting an optical encoder.

Image Processing

contours.jpg
blurred.jpg
bw.jpg
canny.jpg
horizontal.jpg
vertical.jpg
mix.jpg

To improve the autonomous driving, we can do some image processing.

The OpenCV library is the reference for that. It can be used from Python to rapidly implement obstacle detection.

We capture an image and apply some image processing tasks:

The first tests were made with Canny and Sobel transformations. Canny can be a good candidate, but it is not sensitive enough. Sobel is too sensitive (too many objects detected).

Finally, I made my own filter to mix all the horizontal and vertical gradients (to detect furniture); a minimal sketch follows the list below:

  • Transform the color image to a gray level image
  • Blur the image to remove small noise
  • Threshold the image to a black and white image
  • Now we detect horizontal and vertical gradients to find objects like walls and furniture
  • We keep only the big remaining contours (see the colored contours in the picture)
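
Here is a minimal sketch of such a pipeline with OpenCV. The threshold values, kernel size and contour area limit are illustrative; the original filter may differ in detail:

import cv2

image = cv2.imread("capture.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)                 # color -> gray level
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # remove small noise
_, bw = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)    # black and white image

# horizontal and vertical gradients (Sobel), mixed into one edge image
grad_x = cv2.convertScaleAbs(cv2.Sobel(bw, cv2.CV_16S, 1, 0))
grad_y = cv2.convertScaleAbs(cv2.Sobel(bw, cv2.CV_16S, 0, 1))
mix = cv2.addWeighted(grad_x, 0.5, grad_y, 0.5, 0)
_, mix_bw = cv2.threshold(mix, 50, 255, cv2.THRESH_BINARY)

# keep only the big contours (likely walls and furniture)
contours, _ = cv2.findContours(mix_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 return values
big_contours = [c for c in contours if cv2.contourArea(c) > 1000]
cv2.drawContours(image, big_contours, -1, (0, 255, 0), 2)
cv2.imwrite("contours.jpg", image)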

Now we can use this new information to detect obstacles...

Next Steps...

IMG_20201025_105814.jpg
IMG_20201025_105801.jpg

Now we have a simple robot platform with sensors, actuators and a camera. My goal is to move autonomously and return to the station without adding any further sensors. For this, I will need the following steps:

  • Sensor fusion of yaw and magnetic heading signals
  • Camera image processing (only low CPU available for that)
  • Collision detection (ultrasonic distance and camera)
  • Map building or orientation

Now go and create your own challenges or targets...