TinyML Image Recognition With Edge Impulse, Nano 33 BLE and OV7670 Camera

by kevinjpower in Circuits > Arduino


Headerimage.png

Use a TinyML neural network to recognize images taken by an OV7670 camera attached to an Arduino Nano 33 BLE. Inferencing and recognition run on the Nano, which predicts which object is placed in front of the camera. The network is trained and deployed from Edge Impulse Studio.

The Nano 33 BLE is a microcontroller in the Arduino family based on the nRF52840 from Nordic Semiconductor, a 32-bit ARM® Cortex®-M4 CPU running at 64 MHz. It is pin-compatible with the Arduino Nano and is billed as being able to handle edge AI tasks using TinyML concepts and infrastructure.

The OV7670 is a low-cost image sensor that can capture pictures when controlled by a microcontroller such as the Nano 33 BLE.

Edge Impulse Studio is a platform that builds AI applications for so-called edge devices. To quote from their website:

“Build datasets, train models, and optimize libraries to run on any edge device, from extremely low-power MCUs to efficient Linux CPU targets and GPUs.”

The Nano 33 BLE is an officially supported device on Edge Impulse, which means it can be interfaced directly to the Studio for data acquisition. Once the model is trained, the result can be deployed to the Nano 33 BLE from the Studio.

This project shows how you can bring together the Nano 33 BLE, the OV7670 camera and Edge Impulse Studio to build a TinyML project that recognizes objects.

Supplies

Here are the major components for this project:

  1. Arduino Nano 33 BLE (or Nano 33 BLE Sense)
  2. OV7670 camera module
  3. Perf board, wire and headers
  4. Random objects for recognition

Hardware Setup

IMG_3826.JPG
IMG_3835.JPG
IMG_3836.JPG
Connectiontable.png

The OV7670 camera needs to be connected to the Arduino Nano 33 BLE. Refer to a previous project for full details:

https://www.hackster.io/umpheki/ov7670-camera-and-image-sensor-with-nano-33-ble-497c5f

The Edge Impulse help documentation also has an article on how to do this:

https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nano-33-ble-sense#connecting-an-off-the-shelf-ov7675-camera-module

Connect the Nano and the OV7670 as per the table.

There are two options for completing the connection: breadboard or perf board.

A picture of the completed breadboard option is included.

As a more robust alternative, you can solder a mounting board from perf board, wires and headers. Some pictures of the finished product are included.

Once everything is connected, test that the camera is taking pictures and transmitting images before starting the rest of this project.
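
As a quick check, here is a minimal test sketch, assuming the Arduino_OV767X library (installable through the Arduino IDE Library Manager) and the wiring from the links above. It grabs QCIF (176x144) frames in RGB565 and prints the first few bytes so you can confirm image data is flowing:

#include <Arduino_OV767X.h>

// QCIF (176x144) at 2 bytes per pixel keeps the frame buffer around 50 KB
byte pixels[176 * 144 * 2];

void setup() {
  Serial.begin(9600);
  while (!Serial);

  // begin() returns 0 on failure; 1 frame per second is enough for a test
  if (!Camera.begin(QCIF, RGB565, 1)) {
    Serial.println("Failed to initialize camera!");
    while (1);
  }
}

void loop() {
  Camera.readFrame(pixels);

  // Print the first 16 bytes as a sanity check that data is arriving
  for (int i = 0; i < 16; i++) {
    Serial.print(pixels[i], HEX);
    Serial.print(' ');
  }
  Serial.println();
  delay(1000);
}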

Software Setup

arduinocli.png
flashlinux.png
impulsedeamon.png

Quite a lot of software is needed here.

Assuming the Arduino IDE is installed, you need to ensure that Nano 33 BLE board support is installed. This can be done using the Boards Manager. See the article on the official Arduino website:

https://www.arduino.cc/en/Guide/NANO33BLE

You need to sign up for a free Edge Impulse account:

https://studio.edgeimpulse.com/signup

To follow the full instructions in this project, you will also need to install the Edge Impulse CLI (Command Line Interface), which is used to control local devices.

An overview of how to install and use the CLI is included in the excellent documentation from Edge Impulse:

https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-overview

https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-installation

If you are unsuccessful with installing the CLI, consult the help sections of the Edge Impulse forum.

In order for the Edge Impulse CLI to work with the Nano 33 BLE, you need to install the Arduino CLI, which contains utilities that communicate with Arduino boards over the serial connection. Essentially it does the same job as the IDE, without the graphical interface.

https://arduino.github.io/arduino-cli/0.33/

https://arduino.github.io/arduino-cli/0.33/installation/

To verify that the Arduino CLI is installed on your computer, type the following command at the command line:

$arduino-cli help core
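
If the Nano 33 BLE board package is not yet installed, the Arduino CLI can handle that too; in current versions of the CLI the board lives in the arduino:mbed_nano core, and board list shows any connected boards:

$arduino-cli core install arduino:mbed_nano
$arduino-cli board list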

Finally, read the Edge Impulse documentation for the Nano 33 BLE at:

https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nano-33-ble-sense


Connect Nano 33 BLE to Edge Impulse

nanoconnect.png

Here are the steps to connect your Nano 33 BLE to Edge Impulse:

  1. On your Edge Impulse dashboard, create a new project. The project dashboard will be displayed.
  2. On the dashboard, make sure that the target device is set to Arduino Nano 33 BLE Sense (Cortex-M4F 64MHz).
  3. In the left hand menu, select Devices.
  4. Connect your Nano to the computer's serial port.
  5. Download the latest Edge Impulse firmware https://cdn.edgeimpulse.com/firmware/arduino-nano-33-ble-sense.zip
  6. Unzip the file to a location of your choosing.
  7. Press RESET twice on the Nano to start the bootloader. The orange LED should be flashing.
  8. Execute the flash script for your operating system. For example, $./flash_linux.sh
  9. If this is successful, press RESET once on the Nano to return to normal mode.
  10. From a command prompt, run the Edge Impulse CLI command $edge-impulse-daemon --clean
  11. If you have multiple projects, you will be asked to choose one.

If all this works, the Nano will show up as a connected device in your Edge Impulse project.


Gathering Image Data

wire.jpg
wheel.jpg
camera.jpg
collectdata.png
TestTrain.png

You are now ready to start collecting image data for your project. Select the objects that the finished firmware should recognize once deployed. Choose objects with different colors and contrasts, and limit the number of objects to three or four.

On the Edge Impulse left menu, select Data Acquisition.

If everything is connected correctly, a Collect Data window will appear in Edge Impulse. Select Camera (160x120) from the sensor list and type in a label describing the current object. A preview of what the camera sees will be shown.

You can now start taking images of the objects you have chosen for recognition by using the Start Sampling button.

Take multiple pictures of each object, varying the angle, distance from the camera, attitude, lighting and other variables so that no two pictures are the same.

Change the label for each type of object, and make sure the images are split 80%/20% between training and test.

Examples of the data used in the project are included.

Once data acquisition is complete, you can move on to Impulse Design.

Impulse Design

impulsedesign.png

Impulse design is completed in the Create Impulse section and consists of three blocks:

  1. Input Block
  2. Processing Block
  3. Learning Block

The best configuration for this specific application is shown in the pictures.

Because the project uses transfer learning (more on this later), the incoming picture has to be resized to fit the neural network input. The optimum size is 96x96 pixels; the input block will resize the picture from 160x120 (the camera setting) to 96x96. The algorithm for doing this is specified by the Resize mode drop-down menu. See the Edge Impulse help screen for an explanation of this process.
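
To make the resize step concrete, here is a minimal sketch of one common approach, nearest-neighbor "squash" scaling, written in plain C++. This is an illustration only, not necessarily the exact algorithm behind the Resize mode options:

#include <stdint.h>

// Map each 96x96 output pixel back to its nearest 160x120 source pixel.
// "Squash" ignores the aspect ratio, so the image is slightly distorted.
void squashResize(const uint8_t src[120][160], uint8_t dst[96][96]) {
  for (int y = 0; y < 96; y++) {
    int srcY = y * 120 / 96;      // nearest source row
    for (int x = 0; x < 96; x++) {
      int srcX = x * 160 / 96;    // nearest source column
      dst[y][x] = src[srcY][srcX];
    }
  }
}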

Once the impulse design is complete, two more steps remain: Image and Transfer Learning.

Image

ondeviceperformance.png

This step consists of Digital Signal Processing (DSP), which takes the raw data and, via signal processing, converts it into the actual input to the neural network.

In this case, we are converting the image to 96x96 greyscale. This reduces the amount of information fed into the neural network and reduces complexity, allowing inferencing to run on an edge device with limited memory such as the Nano 33 BLE.
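
As an illustration of what this DSP step does, here is a sketch of one common way to turn an RGB565 camera pixel into an 8-bit grey value, using the standard BT.601 luma weights. The exact formula used by the Edge Impulse image block may differ:

#include <stdint.h>

// Expand the 5/6/5-bit channels to 8 bits, then take the weighted sum
// grey = 0.299 R + 0.587 G + 0.114 B using integer arithmetic.
uint8_t rgb565ToGrey(uint16_t pixel) {
  uint8_t r = (pixel >> 11) & 0x1F;
  uint8_t g = (pixel >> 5) & 0x3F;
  uint8_t b = pixel & 0x1F;
  uint8_t r8 = (r << 3) | (r >> 2);
  uint8_t g8 = (g << 2) | (g >> 4);
  uint8_t b8 = (b << 3) | (b >> 2);
  return (uint8_t)((299UL * r8 + 587UL * g8 + 114UL * b8) / 1000);
}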

Follow these steps:

  1. Set color depth to greyscale and click on “Save parameters”
  2. This will automatically take you to the Generate Features screen. Generate the features
  3. The feature explorer should show distinct clusters of features for each object in the dataset

Take note of the on-device performance metric. This is the time the target device (in this case the Nano 33 BLE) will take to convert the incoming data (a 160x120 RGB picture) into the neural network input (a 96x96 greyscale picture).

Transfer Learning

RAMusage.png

It is now time to train the network for image recognition. The network used is MobileNetV1 96x96 0.25 (no final dense layer, 0.1 dropout).

MobileNet was created by Google:

https://ai.googleblog.com/2017/06/mobilenets-open-source-models-for.html

It is a network pre-trained on an existing large image dataset. The transfer learning technique does not retrain the initial layers of MobileNet; it only changes a few of the final output layers. By doing this, it is possible to get acceptable image recognition performance from a small dataset – most of the hard work has already been done.

Once all the preconditions are met, training can be started.

Training takes some time, but once complete some important information is displayed:

  1. Model accuracy – this should be as close to 100% as possible
  2. Confusion matrix – how many samples were misclassified
  3. On-device performance – an estimate of how long an inferencing cycle will take on the target device and how much RAM is required to carry out the inferencing

In the case of the Nano 33 BLE, total RAM capacity is 256 KB. The amount of RAM reported in transfer learning is for inferencing only (the arena); it does not include other variables and image buffers, and the camera frame buffer alone takes tens of kilobytes. So any value over about 180 KB will not work once the model is deployed to the Nano.

Model Testing

Once training is complete, the test data gathered during data acquisition can be submitted to the model to see how it performs.

Use the “Classify All” button

Once complete, an accuracy matrix is presented. This shows how many of the test samples were classified correctly and how many were not. The list of test samples highlights in red those that were misclassified. These samples can be examined in the original data acquisition step to see what may have caused the misclassification.

Deployment

screeninferencing.png

You are finally ready to deploy the trained impulse to the Nano 33 BLE. There are two options to test the impulse:

  1. Use the Edge Impulse CLI command “edge-impulse-run-impulse”
  2. Use the Arduino library

Here are the steps required to use the Edge Impulse CLI command:

  1. Go to the deployment page of the project.
  2. In the Search Deployment options box, find the Arduino Nano 33 BLE option and start the build, which compiles the Impulse into a binary file. Make sure the quantized (int8) option is selected; the Nano does not have enough memory to run the other option.
  3. Once the build is complete, a .zip file will be downloaded to your computer.
  4. Unzip the file to a location of your choosing and make this directory your current directory.
  5. Press RESET twice on the Nano to start the bootloader. The orange LED should be flashing.
  6. Execute the flash script for your operating system from the command prompt. For example, on a Linux system: $./flash_linux.sh
  7. If this is successful, press RESET once on the Nano to return to normal mode.
  8. From a command prompt, run the Edge Impulse CLI command $edge-impulse-run-impulse --debug
  9. This will start an inferencing session on the Nano 33 BLE and the results will print out on the command prompt screen.

If you want to see a feed of the camera and live classification in your browser, use the address shown on the command prompt screen (typically 192.168.12.49:4915).

Here are the steps needed to use the Arduino library:

  1. Before using this approach, decide on the area that the camera will take pictures of, and once this is set, don't move the camera. Unlike the previous deployment option, there is no preview of what is being analyzed, so you cannot adjust the camera or object to get the best picture.
  2. Go to the deployment page of the project.
  3. In the Search Deployment options box, find the Arduino library option and start the build, which compiles the Impulse into an Arduino library. Make sure the quantized (int8) option is selected; the Nano does not have enough memory to run the other option.
  4. Once the build is complete, a .zip file will be downloaded to your computer.
  5. In the Arduino IDE, use the command Sketch → Include Library → Add .ZIP Library and select the zip file.
  6. Once the library is installed, it will appear in File → Examples under the name of your project. Find the example nano_ble33_sense_camera.
  7. Compile and upload this sketch to the Nano 33 BLE.
  8. Once the program starts running, the results of the inferencing will print out in the Serial Monitor.

If you need to write additional code for a specific application, it can be integrated with this example sketch.
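
For example, here is a minimal sketch of how the inferencing results could be read out and acted on. The header name is a placeholder for the one generated from your project, and handleResult() is a hypothetical helper you would call from the example sketch's loop after run_classifier() has filled in the result structure; the result fields and the EI_CLASSIFIER_LABEL_COUNT constant come from the Edge Impulse C++ SDK bundled inside the exported library:

#include <your_project_inferencing.h>  // placeholder: named after your project

// Hypothetical helper: print every label with its score, and act on any
// classification that exceeds a confidence threshold.
void handleResult(const ei_impulse_result_t &result) {
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    Serial.print(result.classification[ix].label);
    Serial.print(": ");
    Serial.println(result.classification[ix].value, 4);

    if (result.classification[ix].value > 0.8f) {
      // Application-specific action goes here, e.g. toggle an LED
    }
  }
}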

For the three objects used in this project, accuracies above 0.9 were achieved. Not bad for a limited training set running on a microcontroller with a $5 camera.