A primer on TinyML featuring Edge Impulse and OpenMV Cam H7
This is a tutorial on building a Coins vs. Notes classifier, chosen because I don’t own any cats or dogs. It is a toy project for evaluating the features offered by Edge Impulse. The resulting model is about 60 KB in size and achieved an accuracy above 95% on this simple task.
Machine learning, and deep learning in particular, has taken the world by storm. With the boom in cheap compute and enormous amounts of data, artificial intelligence has become accessible to everyone. These AI models require lots of data and compute to train, and they are usually deployed on powerful servers where they sit behind APIs serving different applications. A related field has started to gain traction: TinyML, or Tiny Machine Learning. Simply put, when machine learning meets embedded systems and models get deployed on microcontrollers, it is called TinyML.
Edge Impulse is a platform for developing intelligent devices using embedded machine learning (TinyML). It is a web application that lets you take a TinyML project from data collection to deployment. The best part is that you don't need to write a single line of code to work with it.
The OpenMV Cam H7 is a small development board that pairs an ARM Cortex-M7 microcontroller with a small camera, which makes it very useful for machine vision applications. Edge Impulse can generate code for it directly.
The Dataset
To build a model that classifies images into coins and notes, we need a dataset with many images. I built a small dataset of around 180 images using the dataset creation script in the OpenMV IDE. It contains an equal number of images of banknotes, coins, and other random objects, divided into three classes: coin, note, and unknown. You can also create a dataset directly on Edge Impulse by connecting your mobile phone and using its camera; I used the OpenMV camera instead and uploaded the images to Edge Impulse with its upload feature.
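The Edge Impulse uploader can infer the class label from the part of the filename before the first dot (e.g. coin.01.jpg becomes label "coin"), which is also the naming convention the OpenMV IDE dataset script produces. Here is a minimal sketch of checking that a dataset named this way is balanced before uploading; the helper names are my own, not part of any Edge Impulse tool.

```python
# Sketch: check class balance of a dataset whose filenames encode labels
# in the "label.index.jpg" convention (e.g. "coin.01.jpg" -> label "coin").
# The helper functions here are illustrative, not from Edge Impulse.
from collections import Counter

def label_of(filename):
    """Take the part before the first dot as the class label."""
    return filename.split(".", 1)[0]

def class_counts(filenames):
    """Count images per class so we can verify the dataset is balanced."""
    return Counter(label_of(f) for f in filenames)

files = [
    "coin.01.jpg", "coin.02.jpg",
    "note.01.jpg", "note.02.jpg",
    "unknown.01.jpg", "unknown.02.jpg",
]
print(class_counts(files))  # each of the three classes appears twice here
```

A balanced dataset like this avoids the model simply learning to predict the majority class.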
The Impulse
The pipeline, called an impulse in Edge Impulse, includes all the operations from the captured image to the classification result: preprocessing steps such as resizing, the model itself, and the output. Edge Impulse makes it very easy to design any pipeline; you just interact with a graphical interface.
The captured image is first resized to 48x48, which helps keep the final model small. The resized image is then fed to a neural network for classification; this network is designed in the next section. Finally, the output classes are inferred automatically by Edge Impulse from the data. The network can also be a pre-trained model, in which case transfer learning is used.
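To make the resize step concrete, here is a dependency-free sketch of a nearest-neighbour downscale to 48x48, the kind of operation the impulse's preprocessing block performs. The plain nested-list representation is an assumption for illustration; the real block works on actual camera frames.

```python
# Sketch: nearest-neighbour downscale of an image to 48x48, similar in
# spirit to the resize the impulse's preprocessing block performs.
# Images are plain nested lists of pixels to keep the example
# dependency-free.

def resize_nearest(img, out_w=48, out_h=48):
    """Downscale img (a list of rows of pixels) by nearest-neighbour sampling."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A dummy 96x96 grayscale "frame" whose pixel value equals its row index.
frame = [[y for _ in range(96)] for y in range(96)]
small = resize_nearest(frame)
print(len(small), len(small[0]))  # -> 48 48
```

Shrinking the input to 48x48 cuts the size of every downstream layer, which is the main reason the final model stays small.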
After designing the pipeline, we can see the results of the preprocessing on all images. Edge Impulse also generates a 3-dimensional feature for every image, which can be visualized in a plot.
This is a very useful feature, as it shows whether the data is separable in three dimensions. In our case, the data falls into three clusters that a machine learning model should be able to classify.
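One way to quantify the "three clusters" observation from the feature explorer is a nearest-centroid check: if every point is closer to its own class centroid than to any other, the classes are well separated. The 3-D points below are invented for illustration, not my actual features.

```python
# Sketch: a nearest-centroid sanity check on 3-D feature points, mimicking
# what the feature explorer shows visually. The points below are made up.

def centroid(points):
    """Mean of a list of 3-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

clusters = {
    "coin":    [(0.1, 0.0, 0.1), (0.2, 0.1, 0.0)],
    "note":    [(5.0, 5.1, 4.9), (4.8, 5.0, 5.2)],
    "unknown": [(0.0, 9.8, 0.1), (0.2, 10.0, 0.0)],
}
centroids = {label: centroid(pts) for label, pts in clusters.items()}

# Every point should be closest to its own class centroid if the data
# really separates into three clusters.
separable = all(
    min(centroids, key=lambda l: dist2(p, centroids[l])) == label
    for label, pts in clusters.items() for p in pts
)
print(separable)  # -> True
```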
The “Deep” Neural Network
As this is a toy project and I wanted to keep the model as small as possible, I used a very simple neural network with two convolutional layers and a single dense layer. I could have used transfer learning with MobileNet instead, but that would have made the model much bigger.
As you can see in the image above, we get options to set the number of epochs, the learning rate, and the minimum confidence. This GUI is essentially a graphical alternative to Keras, and you also get the option to edit the underlying Python code to add layers.
The Training
Finally, you press the train button and Edge Impulse trains and fine-tunes the model. Once training completes, you are presented with the confusion matrix and other details such as accuracy. One especially useful feature is that Edge Impulse also shows on-device performance metrics: estimated inference time, RAM usage, and ROM usage.
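For readers new to confusion matrices, here is how the accuracy figure falls out of one. The counts below are invented for illustration, not my actual training results.

```python
# Sketch: how accuracy is derived from a confusion matrix like the one
# the studio displays. The counts below are invented, not real results.

def accuracy(matrix):
    """matrix[i][j] = number of images of true class i predicted as class j."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

labels = ["coin", "note", "unknown"]
cm = [
    [58, 1, 1],   # true coin
    [0, 59, 1],   # true note
    [2, 1, 57],   # true unknown
]
print(f"{accuracy(cm):.2%}")  # diagonal / total; well above the 95% mark here
```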
Edge Impulse also shows you the test metrics but I am not sharing them here because this is just a toy project.
Deployment
The best part about Edge Impulse is the easy deployment procedure. In the deployment tab of the Edge Impulse studio, you select the target platform and it generates all the required code and supporting files. In my case, I chose OpenMV, and it generated a zip containing three files.
I copied the files to the OpenMV Cam H7's SD card and ran the supplied script. The model ran smoothly on the board, and I tested it in real time.
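For reference, the device-side script broadly resembles the MicroPython sketch below, which uses the OpenMV `sensor` and `tf` modules to grab frames and run the TFLite model. This is only an outline of the idea, it runs on the board rather than a desktop, and the exact calls and filenames may differ between Edge Impulse exports and OpenMV firmware versions.

```
# MicroPython sketch for the OpenMV Cam (device-only; not runnable on a PC).
# Filenames and parameters are assumptions based on a typical export.
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)   # color frames
sensor.set_framesize(sensor.QVGA)     # 320x240
sensor.skip_frames(time=2000)         # let the sensor settle

net = "trained.tflite"                                    # model from the zip
labels = [line.rstrip("\n") for line in open("labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # Classify the frame and pair each label with its score.
    for obj in tf.classify(net, img):
        print(list(zip(labels, obj.output())))
    print(clock.fps(), "fps")
```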
Here is a video of it in action:
Conclusion
This tutorial walks through the steps to follow while building a TinyML project on the Edge Impulse platform and deploying it on a microcontroller. The example project shown here is very basic and uses very little data. The final .tflite file is around 60 KB in size and achieved an accuracy above 95%. It shouldn’t be taken too seriously; use it only as a reference for building real projects.
Feel free to point out any mistakes you find; constructive criticism does no harm.
Thank you for reading! I’d love to hear from you and collaborate on a project, so feel free to contact me on LinkedIn or write to me at puranjay12@gmail.com.
