Trying Out TensorFlow Lite Hello World Model With ESP32 and DumbDisplay

by Trevor Lee

In this post, I am going to show the steps I learned from trying out the "hello world" example of TensorFlow Lite, which is detailed in resources like https://www.tensorflow.org/lite/microcontrollers/get_started_low_level.

The first step is to generate a "hello world" Deep Learning model -- given an angle (in radians, between 0 and 2π), approximate the sine of the input angle.

Yes, it is just the sine function of any scientific calculator, and a very restricted one at that. Nevertheless, frankly, if I were given a programming task to come up with such a sine function, without using the sine function from a library, I wouldn't know how to go about achieving the goal :-)

Next will be to write a program (a sketch) to run the model on an ESP32 (with the help of the TensorFlow Lite library), and to somehow present the results wirelessly on a mobile phone (with the help of DumbDisplay).

Like my previous post -- Arduino AI Fun With TensorFlow Lite, Via DumbDisplay -- this post is also just for fun. Nothing really serious.

Google's Colaboratory

In this post, the "hello world" Deep Learning model is built with Google's Colaboratory.

Assuming you have a Google account, you should be able to use Colaboratory free of charge.

Head there and create a notebook -- a Jupyter notebook -- for building this "hello world" model.

Import the Needed Python Modules

The first cell imports all the needed Python modules.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import math

Input the above into a cell, then run it to import the needed Python modules, with the desired shorthands.

Create the Source of Training Data

Normally, getting the training data for Deep Learning might not be easy. But for this "hello world" model, all that is needed is the sine function, which of course comes standard with the NumPy module. Hence, the second cell for acquiring the training data can be as simple as

SAMPLES = 1000

x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
np.random.shuffle(x_values)
y_values = np.sin(x_values)

plt.plot(x_values, y_values, 'r.')
plt.show()

  • First, randomly generate 1000 input x values in the range 0 to 2π.
  • Then the x values are shuffled / re-arranged randomly.
  • Next, the corresponding sine of x is produced as the output y (the model answer).

The x and y values constitute our source of training data. For easy visualization, they are plotted with the Matplotlib module.

Add a Bit of "Noise"

Since

machine learning models are good at extracting underlying meaning from messy, real world data

we add a bit of noise to the y values (the sine of the x values) of the training data.

y_values += 0.1 * np.random.randn(*y_values.shape)

plt.plot(x_values, y_values, 'b.')
plt.show()

Split the Source of Training Data

Normally, the source of training data acquired in the previous step needs to be split into 3 sets

  • for training (60% here)
  • for validating (20% here)
  • for testing (20% here)

TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)

x_train, x_validate, x_test = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_validate, y_test = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])

assert (x_train.size + x_validate.size + x_test.size) ==  SAMPLES

plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.legend()
plt.show()

Compile the Training Model

The "training" data set and the "validating" data set will be used to compile a training model

from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(1,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
model.summary()

Since I will not pretend to be familiar with Deep Learning, I will not even try to explain beyond what you can already see from the code :-)

  • The first layer has a single input (as we only have a single input), and 16 outputs
  • The second and third layers both have 16 inputs and 16 outputs
  • The last layer has 16 inputs and 1 output (as we only need a single output)

Then we call the model object's compile to prepare it for training.

Fit to Train the Model

After compiling the model, training is easy: just call fit and wait.

history = model.fit(x_train, y_train, 
                    epochs=600, batch_size=16,
                    validation_data=(x_validate, y_validate))

  • x_train is the training data input; y_train is the training data output (the model answer)
  • the training data set is run through 600 times (the epochs)
  • after each batch of 16 training data input values, the model parameters are adjusted (optimized)
  • after each epoch, the "validating" data set (x_validate and y_validate) is used to validate the adjusted model parameters, producing "loss" stats for you to evaluate how well the training is going.
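
By the way, to see how well the training went, the "loss" stats recorded by fit can be plotted. Here is a minimal sketch, assuming only the history object returned by fit above, plus the modules imported earlier.

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)

# plot the training loss and the validation loss for each epoch
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()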

Try Out the Trained Model

To try out the trained model

loss = model.evaluate(x_test, y_test)
predictions = model.predict(x_test)

plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()

Notice

  • First the "loss" is evaluated, logging purpose
  • Then predict of the model object is called with the "testing" data set
  • Next, the predictions and the actual values are plotted so that can be easily visualized.
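
As an extra check, the predictions can also be compared with the true (noise-free) sine values, rather than the noisy testing targets. A minimal sketch, assuming the predictions array produced above:

# mean absolute error of the predictions against the true sine values
true_values = np.sin(x_test)
mae = np.mean(np.abs(predictions.flatten() - true_values))
print("MAE against the true sine:", mae)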

Export the Trained Model to TensorFlow Lite Format

Certainly, the trained model needs to be saved in some format understandable by TensorFlow Lite.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("sine_model.tflite", "wb").write(tflite_model)

Here we will save the model in TensorFlow Lite format with the name "sine_model.tflite".
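
As a side note, the converter can optionally be asked to quantize the model, making it smaller -- which matters on a microcontroller. A minimal sketch, with "sine_model_quant.tflite" being a file name of my own choosing:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# ask the converter to apply its default optimizations (including quantization)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
open("sine_model_quant.tflite", "wb").write(tflite_quant_model)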

TensorFlow Lite Format to C Code

Nevertheless, "sine_model.tflite" still is not the format to use by ESP32. Instead, we will be presenting the model as C code to ESP32. Hence, we need to convert "sine_model.tflite" to C code that captures the data of "sine_model.tflite".

!apt-get -qq install xxd
!xxd -i sine_model.tflite > sine_model.h

Notice that the tool xxd is used to capture the data of "sine_model.tflite" and save the captured data as C code -- "sine_model.h".
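
In case xxd is not available, the same C code can be produced with plain Python. Here is a minimal sketch; the names sine_model_tflite and sine_model_tflite_len simply match what xxd -i derives from the file name.

with open("sine_model.tflite", "rb") as f:
    data = f.read()
with open("sine_model.h", "w") as f:
    f.write("unsigned char sine_model_tflite[] = {\n")
    # write the model bytes as hex literals, 12 per line, like xxd -i does
    for i in range(0, len(data), 12):
        f.write("  " + ", ".join("0x%02x" % b for b in data[i:i+12]) + ",\n")
    f.write("};\n")
    f.write("unsigned int sine_model_tflite_len = %d;\n" % len(data))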

Download the C Code

After running the cell, you can download "sine_model.h" to your computer to be included with the sketch shown next.
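
Alternatively, the download can be triggered from a cell, assuming you are running in Colab (where the google.colab module is available):

from google.colab import files

# trigger a browser download of the generated C code
files.download("sine_model.h")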

The ESP32 Code

When the model is ready, calling TensorFlow Lite to do inferencing with the model is not very complicated. I guess it is more challenging to present the inference results.

Due to lack of creativity, here I will simply use DumbDisplay to show off the model inferencing in two ways

  • a plotter-like display to show how the x and y values vary
  • a drawing-canvas-like display to show the positions of the x and y values.

The Sketch

You can download the sketch here. And you can download the model (C code) I used here.

Even though using TensorFlow Lite for the "hello world" model is not complicated, it still requires several steps

  • Oh yes, since the sketch will connect to the DumbDisplay app wirelessly, you first need to define the Bluetooth device name with a line like
#define BLUETOOTH "ESP32BT"
  • Create a tflite::ErrorReporter object for capturing errors from TensorFlow Lite when running the model.
class DDTFLErrorReporter : public tflite::ErrorReporter {
public:
  virtual int Report(const char* format, va_list args) {
    int len = strlen(format);
    char buffer[max(32, 2 * len)];  // assume 2 times format len is big enough
    vsnprintf(buffer, sizeof(buffer), format, args);  // note: vsnprintf, not sprintf, is needed to format with a va_list
    dumbdisplay.writeComment(buffer);
    return 0;
  }
};
// Set up logging
DDTFLErrorReporter error_reporter_impl;
tflite::ErrorReporter* error_reporter = &error_reporter_impl;
  • Create a tflite::Model object from the model C code "sine_model.h"
#include "sine_model.h"
...
// Map the model into a usable data structure. This doesn't involve any
// copying or parsing, it's a very lightweight operation.
const tflite::Model* model = ::tflite::GetModel(sine_model_tflite);
  • Create a tflite::AllOpsResolver object
// This pulls in all the operation implementations we need
tflite::AllOpsResolver resolver;
  • Create an area of memory (the tensor arena) for TensorFlow Lite to use. Note that the model's tensors are actually allocated from it in a later step.
// Create an area of memory to use for input, output, and intermediate arrays.
// Finding the minimum value for your model may require some trial and error.
const int tensor_arena_size = 2 * 1024;
uint8_t tensor_arena[tensor_arena_size];
  • Create a tflite::MicroInterpreter interpreter object, which is the object to use for inferencing
// Build an interpreter to run the model with
tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                     tensor_arena_size, error_reporter);
  • Allocate the model's tensors from the tensor_arena declared previously
  // allocate memory from the tensor_arena for the model's tensors
  interpreter.AllocateTensors();
  • From that memory, get a pointer to the location where the inference input is to be set
  // obtain a pointer to the model's input tensor
  input = interpreter.input(0);

Now that we have all the TensorFlow Lite objects ready, the loop doing the inferencing can be as simple as

...
  // provide an input value
  input->data.f[0] = in;
...
  // run the model on this input and check that it succeeds
  TfLiteStatus invoke_status = interpreter.Invoke();
...
  TfLiteTensor* output = interpreter.output(0);
  // obtain the output value from the tensor
  float out = output->data.f[0];
...

Upload the Sketch

To build the sketch, place the sketch file "ddhelloworld.ino", as well as the model C file "sine_model.h", in the same directory "ddhelloworld". Then use the Arduino IDE to build and upload the sketch to your ESP32 board.

Before compiling the sketch, you will need to install two additional libraries -- the "TensorFlow Lite ESP32" library and the "DumbDisplay" library.

You will also need to install the DumbDisplay Android app on your Android phone.

Note that the sketch will make a connection with the DumbDisplay Android app using Bluetooth, with the name set by the BLUETOOTH define shown earlier (e.g. "ESP32BT").

Enjoy!

Hope you can agree that it is not very difficult to run a simple TensorFlow Lite model with a microcontroller board like the ESP32. Enjoy!

Peace be with you. Jesus loves you. May God bless you!