Overview

A TensorFlow Lite model-based object detector implementation, adapted from the TensorFlow Lite label_image example [2] to run on MCUs.

A 3-channel color image is set as the input to a quantized MobileNet convolutional neural network model [1], which classifies the input image into one of 1000 output classes.

First, a static stopwatch image is used as the input, regardless of whether a camera is connected. Then, if a camera and display are connected, images captured from the camera are processed at runtime, and the camera feed is shown on the LCD.

HOW TO USE THE APPLICATION: To classify an image, place it in front of the camera so that it fits inside the white rectangle in the middle of the display. Note that the semihosting implementation may cause slow or discontinuous video. To use an external debug console over UART (virtual COM port) instead, select UART in 'Project Options' during project import.

[1] https://www.tensorflow.org/lite/models
[2] https://github.com/tensorflow/tensorflow/tree/r2.3/tensorflow/lite/examples/label_image

Files:

  • main.cpp - example main function
  • image_data.h - image file converted to a C language array of RGB values using Python with the OpenCV and NumPy packages:

    ```python
    import cv2
    import numpy as np

    img = cv2.imread('stopwatch.bmp')
    img = cv2.resize(img, (128, 128))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    with open('image_data.h', 'w') as fout:
        print('#define STATIC_IMAGE_NAME "stopwatch"', file=fout)
        print('static const uint8_t image_data[] = {', file=fout)
        img.tofile(fout, ', ', '0x%02X')
        print('};\n', file=fout)
    ```

  • labels.h - names of object classes
  • mobilenet_v1_0.25_128_quant_int8.tflite - pre-trained TensorFlow Lite model quantized using the TF Lite converter (for more details see the eIQ TensorFlow Lite User's Guide, which can be downloaded with the MCUXpresso SDK package) (source: http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_128.tgz)
  • stopwatch.bmp - image file of the object to recognize (source: https://commons.wikimedia.org/wiki/File:Stopwatch2.jpg)
  • timer.c - timer source code
  • image/* - image capture and pre-processing code
  • model/get_top_n.cpp - top results retrieval
  • model/model_data.h - model data from the .tflite file converted to a C language array using the xxd tool (distributed with the Vim editor at www.vim.org)
  • model/model.cpp - model initialization and inference code
  • model/model_mobilenet_ops.cpp - model operations registration
  • model/output_postproc.cpp - model output processing
  • video/* - camera and display handling
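Where the xxd tool is not available, its `-i` output format can be approximated in plain Python. The sketch below is an illustrative stand-in, not the demo's actual tooling; the sample bytes are placeholders (the real input would be the .tflite file read in binary mode):

```python
def to_c_array(data: bytes, name: str) -> str:
    """Render raw bytes as a C array, similar to `xxd -i` output."""
    rows = [
        ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        for i in range(0, len(data), 12)
    ]
    body = ",\n  ".join(rows)
    return (
        f"const unsigned char {name}[] = {{\n  {body}\n}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# Example with placeholder bytes; for the real model you would pass
# open('mobilenet_v1_0.25_128_quant_int8.tflite', 'rb').read() instead.
print(to_c_array(b"\x00\x01\xfe\xff", "model_data"))
```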

SDK version

  • Version: 2.15.000

Toolchain supported

  • IAR Embedded Workbench 9.40.1
  • Keil MDK 5.38.1
  • GCC ARM Embedded 12.2
  • MCUXpresso 11.8.0

Hardware requirements

  • Mini/micro USB cable
  • EVKB-IMXRT1050 board
  • Personal computer
  • MT9M114 camera (optional)
  • RK043FN02H-CT display (optional)

Board settings

Connect the camera to J35 (optional)
Connect the display to A1-A40 and B1-B6 (optional)
Connect an external 5 V power supply to J2 and set J1 to 1-2

Prepare the Demo

  1. Connect a USB cable between the host PC and the OpenSDA USB port on the target board.
  2. Open a serial terminal with the following settings:
    • 115200 baud rate
    • 8 data bits
    • No parity
    • One stop bit
    • No flow control
  3. Download the program to the target board.
  4. Either press the reset button on your board or launch the debugger in your IDE to begin running the demo.
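The terminal settings from step 2 map onto pyserial-style parameters as shown below. This is only an illustration of the settings, not code from the demo; the port name is a placeholder and depends on the host OS:

```python
# Serial parameters for the OpenSDA virtual COM port, expressed as
# pyserial-style keyword arguments. The port name is a placeholder.
SERIAL_SETTINGS = dict(
    port="/dev/ttyACM0",  # e.g. "COM3" on Windows
    baudrate=115200,      # 115200 baud
    bytesize=8,           # 8 data bits
    parity="N",           # no parity
    stopbits=1,           # one stop bit
    rtscts=False,         # no hardware flow control
    xonxoff=False,        # no software flow control
)
```

With the pyserial package installed, `serial.Serial(**SERIAL_SETTINGS)` would open the port with these settings.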

Running the demo

The log below shows the output of the demo in the terminal window (compiled with ARM GCC):

Label image object recognition example using a TensorFlow Lite Micro model.
Detection threshold: 23%
Expected category: stopwatch
Model: mobilenet_v1_0.25_128_quant_int8

Static data processing:

 Inference time: 88 ms
 Detected:  stopwatch (87%)

Camera data processing:

Data for inference are ready

 Inference time: 88 ms
 Detected: No label detected (0%)

Data for inference are ready

 Inference time: 88 ms
 Detected:     jaguar (92%)

Data for inference are ready

 Inference time: 88 ms
 Detected:  pineapple (97%)
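The percentages in the log are confidences derived from the model's quantized output scores. A minimal sketch of a top-1 lookup with the 23% detection threshold, assuming per-class uint8 scores in the range 0-255 (illustrative only, not the demo's actual get_top_n implementation):

```python
def top1(scores, threshold_pct=23):
    """Return (class_index, confidence_pct); index is None below threshold.

    scores: per-class uint8 outputs (0..255) of the quantized model.
    """
    idx = max(range(len(scores)), key=scores.__getitem__)
    pct = round(scores[idx] * 100 / 255)
    return (idx if pct >= threshold_pct else None, pct)

# A dominant score maps to a high confidence; when all scores are weak,
# the result falls below the threshold ("No label detected" in the log).
print(top1([0, 10, 222]))  # confident detection of class 2
print(top1([0, 10, 20]))   # below threshold, no class reported
```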