Final Project for Introduction to Robotics: Using a 3D Camera to Detect an Object and Move a Robot Arm

by ArturoH10 in Circuits > Arduino



(Cover image: the assembled robot arm with the camera mounted above the elbow joint.)

In this project, I focused on making the 3D camera create a bounding box around a set of boundaries in the color map of the 3D image and then tracking that object's centroid along the X, Y, and Z axes. To do this I used the following supplies:

Intel RealSense D435 camera sensor-https://www.intelrealsense.com/depth-camera-d435/

Robot Arm- https://www.amazon.com/gp/product/B0017OFRCY/ref=p...

Arduino Mega- https://www.amazon.com/ELEGOO-ATmega2560-ATMEGA16U... or

Arduino Uno- https://www.amazon.com/ELEGOO-Board-ATmega328P-ATM...

(The Arduino Uno has the disadvantage that once the shield is installed, the available I/O pins are limited.)

Arduino Motor Shield- https://www.amazon.com/LM-YN-Stepper-Shield-Arduin...

These four were the main components I used for this project. Before going in depth on how the code was written, I will explain how the robot was assembled.

Building the Arm

To build the robot, first follow the instruction manual included with it. Then place the motor shield on top of the Arduino Mega and plug it into the headers on the top of the board.

Once this is done, take the four motors for the base and the three arm joints and connect them to the motor shield in the following order:

Base- motor 3 in the shield

Shoulder of the arm- motor 4

Elbow of the arm- motor 2

Wrist of the arm- motor 1

Once this is done, there is one optional final step: you can remove the gripper attachment if it is too heavy for the arm to move, or you can drive its motor from the Arduino using an L298N motor driver and a couple of jumper wires.

L298N Driver- https://www.amazon.com/DAOKI-Controller-H-Bridge-S...

As a final step, mount the camera to the robot arm. The way I did it was to attach a scrap piece of metal to the elbow joint and screw the camera on top of that piece of metal so the camera has some viewing clearance. This can be seen in the cover image.

Downloading the Libraries and Running Them

Before you do anything else, you need to download the librealsense library from GitHub:

https://github.com/IntelRealSense/librealsense

Once the library is downloaded, connect the camera to the computer using the USB Type-A to Type-C cable included in the box. Then open the MATLAB part of the library and run some of the sample scripts in there.

Once the camera is connected, you can start experimenting with the examples and seeing how the camera works.
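If you want a quick sanity check that MATLAB can see the camera, a minimal test modeled on the depth example included with the wrapper looks roughly like this (this is a sketch, not the exact sample file):

% Rough connection test, modeled on the depth example that ships with the
% librealsense MATLAB wrapper; run it from inside the wrapper folder.
pipe = realsense.pipeline();       % manages the streaming session
colorizer = realsense.colorizer(); % turns a depth frame into a color map
pipe.start();                      % start the attached camera with default settings
frames = pipe.wait_for_frames();   % grab one set of frames
depth = frames.get_depth_frame();  % raw depth frame
color = colorizer.colorize(depth); % colorized depth frame
% Reshape the raw byte vector into an H x W x 3 image and display it
data = color.get_data();
img = permute(reshape(data', [3, color.get_width(), color.get_height()]), [3, 2, 1]);
imshow(img);
pipe.stop();                       % stop streaming when done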

The Code for Single Image Detection


Two MATLAB scripts were made for this project: single-image analysis and live video analysis. They work almost identically, with only small differences in how they loop.

The way this code works is as follows: first, a function processes the camera frames and returns two outputs, the color map and the grayscale map. From the grayscale map a black-and-white (BW) version is made, the blobs in this BW image are extracted, and any small or extraneous blobs are removed. Bounding boxes are then drawn around the blobs that remain so a single focus point for the object can be obtained. Finally, the centroid of the object is computed, and the depth at the centroid is determined from the grayscale image values.
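To make those steps concrete, here is a rough MATLAB sketch of that pipeline; the helper name getColorAndGray, the 0.4 threshold, and the 500-pixel blob size are illustrative placeholders rather than the exact values in the uploaded file:

% Single-image pipeline sketch (placeholder helper name, threshold, and blob size)
[colorMap, grayMap] = getColorAndGray();            % color map and grayscale map from the camera function
BW = imbinarize(grayMap, 0.4);                      % black-and-white version of the grayscale map
BW = bwareaopen(BW, 500);                           % remove the small or extraneous blobs
stats = regionprops(BW, 'BoundingBox', 'Centroid'); % the blobs that matter
imshow(colorMap); hold on
for k = 1:numel(stats)
    rectangle('Position', stats(k).BoundingBox, 'EdgeColor', 'r', 'LineWidth', 2); % draw the bounding boxes
end
hold off
if ~isempty(stats)
    c = round(stats(1).Centroid);                   % [x y] pixel location of the focus point
    depthAtCentroid = grayMap(c(2), c(1));          % depth read from the grayscale image values
end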

One final note: the file uploaded here is a .txt file, so to run it, download it and copy its contents into a MATLAB .m script saved inside the RealSense library folder.

Here is a screen capture of how the bounding boxes look:

Live Video Tracking

For this step, the process of detecting the blobs and their coordinates is the same as for the single image; the only difference in the code is how the function is constructed.

For this file, the blob and centroid detection process is kept inside the main function, and the whole thing is wrapped in a loop that runs for a set amount of time before stopping, as in the sketch below.
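A rough sketch of that structure, with the run time and the frame-to-map helper as placeholders:

% Live-video sketch: the same detection steps wrapped in a timed loop
runTime = 30;                                   % seconds to track before stopping (placeholder)
tic
while toc < runTime
    frames = pipe.wait_for_frames();            % new frame set from the camera
    [colorMap, grayMap] = framesToMaps(frames); % placeholder helper, same role as in the single-image code
    BW = bwareaopen(imbinarize(grayMap, 0.4), 500);
    stats = regionprops(BW, 'BoundingBox', 'Centroid');
    % ...draw the boxes, compute the centroid and depth, and send them over serial...
end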

As before, the file uploaded here is a .txt file, so to run it, download it and copy its contents into a MATLAB .m script saved inside the RealSense library folder.

Here is a short video demo of the tracking:

Sending Data From MATLAB to Arduino

Finally comes the Arduino movement and data transmission part of the project.

First, to send the data from MATLAB, the following lines of code need to be added to the program or function, depending on which version of the tracking code you are using.

At the beginning of the code:

arduinocom = serial('COM5', 'BaudRate', 9600);
fopen(arduinocom)

*Replace COM5 with whatever COM port your Arduino has been assigned.

At the end of the code:

fclose(arduinocom)

Finally, add the following lines where the data is ready to send; they build the string containing all the data and write it to the serial port:

str = strcat('<HI,1,2,', num2str(cent), ',', num2str(xcenter), ',', num2str(ycenter), '>');
fprintf(arduinocom, str)
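Putting the three pieces together, the ordering in the live-tracking version ends up like this (cent, xcenter, and ycenter come from the detection step):

arduinocom = serial('COM5', 'BaudRate', 9600);  % open the port once at the start
fopen(arduinocom)
% ...inside the tracking loop, after cent, xcenter, and ycenter are computed...
str = strcat('<HI,1,2,', num2str(cent), ',', num2str(xcenter), ',', num2str(ycenter), '>');
fprintf(arduinocom, str)                        % one framed <...> message per detection
fclose(arduinocom)                              % close the port once at the end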

Once these are all added, it is time to receive the data on the Arduino and use it.

Receiving Data From MATLAB and Moving the Arm

(Figure: how the coordinate thresholds map to arm movements.)

The Arduino receives the data from MATLAB over serial communication. MATLAB sends the data as a single string, and the Arduino breaks it up into usable pieces based on a set delimiter; those values are then used to control the robot.

The arm movement logic works like a truth table: each coordinate is compared against a threshold, and depending on whether it is higher or lower than that value, the Arduino moves the corresponding joint in one direction or the other. For a single axis of movement there are only two states, forward and backward; adding a second axis adds two more states, and each new state needs its own combination with the original motions so the movements map correctly. The same process is repeated again when the third axis of movement is added to the code.

A figure illustrating this mapping is attached to this step.

Finally, here is the code that is used. If you wish, you can change the speeds and positions of the motors, but if you do, remember to edit the rest of the code so everything still works together.

Areas Where the Project Can Be Improved

As with any project, there are areas where this one could be improved. For this project, those areas are:

-Figure out a way to keep the bounding box steady on a single object instead of wandering around within it.

-Determine how to make the camera latch onto and track a single object even when more objects come into the frame.

-Get a more accurate robot so better testing can be done.

-Figure out how to improve depth tracking, in particular how close an object can come to the camera and still be tracked.

-Determine how to remove shadows from the image that is being captured.

These are areas that may be looked at more in the future to make the camera tracking better.

Conclusion

Overall this project went very well. The camera provided a great challenge, both in figuring out how to use it for the intended task and in building a bounding-box algorithm for color maps and 3D tracking. I hope the information here is clear and helpful to everyone.

For more resources, please watch the following video for a live demonstration and a short explanation of how all of this works.

I would like to thank Dr. Becker and his Introduction to Robotics class for allowing me to develop this project, and Dr. Weihang Zhu for lending me the Intel RealSense D435 camera.

YouTube link: