Introduction.
We tried running AI inference on ADLINK's I-Pi SMARC 1200, which carries the octa-core (Arm® Cortex-A78 x4 + Cortex-A55 x4) MediaTek® Genio 1200.
This article summarizes the procedure so that you can try it for yourself.
Main product used in this article:
・I-Pi SMARC 1200 (I-Pi SMARC Plus carrier, ADLINK SMARC LEC-MTK-I1200 module, 4GB LPDDR4X, 64GB UFS storage), an I-Pi SMARC development kit based on the MediaTek® Genio 1200 platform.
Other items to be prepared for this project:
1. For the preparation step
・Linux host PC (used to write the Yocto image to the SMARC UFS storage)
・HDMI cable and monitor
・USB mouse and keyboard
・LAN cable (for Internet connection)
・Webcam (for the AI demo)
2. For step 2 and later
・Host PC (Linux or Windows) with serial console access
・Serial console cable
・HDMI cable and monitor
・LAN cable (for Internet connection)
・Webcam (for the AI demo)
The main steps are as follows
- Preparation
- Serial console
- Preparation for AI Inference
- Running AI Inference
1. Preparation
Refer to the following article to set up a bootable SMARC environment using the pre-built Yocto image provided by ADLINK.
Note that the following Yocto image should be used for this project.
adlink-lec-1200-yocto-kirkstone_V3_R1_240201
2. Serial console
Connect to the SMARC from the host PC over the serial console, referring to the following article.
ipiwiki:Reading The Data From Serial Port
The following is an example on a Windows PC. If the console output below appears in TeraTerm on the host PC when the SMARC starts up, the serial console connection has been established successfully.
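On a Linux host PC, a terminal program such as screen or minicom can be used instead of TeraTerm. A minimal sketch, assuming screen is installed, the USB serial adapter appears as /dev/ttyUSB0, and the console runs at 115200 baud (check the actual device name and baud rate against the article above and adjust if they differ):
$ sudo screen /dev/ttyUSB0 115200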
3. Preparation before AI Inference
Start the SMARC.
While "Hit any key to stop autoboot:" is displayed on the host PC console, press any key to drop to the U-Boot prompt.
(If no key is pressed in time, the system boots automatically. In that case, log in as root, reboot, and try again.)
Execute the following commands at the U-Boot prompt:
setenv boot_conf "#conf-mediatek_lec-mtk-i1200-ufs.dtb#conf-gpu-mali.dtbo#conf-video.dtbo#conf-apusys.dtbo"
saveenv
reset
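Optionally, the variable can be double-checked before running saveenv with U-Boot's standard printenv command:
printenv boot_conf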
When the following screen appears, log in as root.
Confirm the video device name of the connected webcam.
# v4l2-ctl --list-devices
In our environment, the device name was /dev/video5.
Get camera information.
# v4l2-ctl --list-formats-ext -d /dev/video5
Check the supported formats, sizes, and frame rates of the connected camera.
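If you want to confirm the camera image itself before running the AI demos, a plain GStreamer preview pipeline can be used. This is a minimal sketch, assuming gst-launch-1.0 is included in the image, the webcam is /dev/video5, and it offers a 640x480 raw format (adjust to match the output of --list-formats-ext):
# gst-launch-1.0 v4l2src device=/dev/video5 ! video/x-raw,width=640,height=480 ! videoconvert ! autovideosink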
4. Run AI inference
NNStreamer sample scripts written in Python are provided under /usr/bin/nnstreamer-demo/. Let's try a few of them.
Python script | Category
run_nnstreamer_example.py | Demo Runner
nnstreamer_example_image_classification.py | Image classification
nnstreamer_example_object_detection.py | Object detection
nnstreamer_example_object_detection_yolov5.py | Object detection
nnstreamer_example_pose_estimation.py | Pose estimation
nnstreamer_example_face_detection.py | Face detection
nnstreamer_example_low_light_image_enhancement.py | Image enhancement
Move to /usr/bin/nnstreamer-demo/ with the cd command.
# cd /usr/bin/nnstreamer-demo/
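To see which sample scripts are actually present in this directory, run ls:
# ls /usr/bin/nnstreamer-demo/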
When running the samples, the following options are used.
Option --engine: selects the inference backend from those supported on the platform: neuronsdk, tflite, or armnn (see the pipeline sketch below).
Option --cam: selects the input camera (the number of the /dev/video device confirmed above).
Option --performance: sets the platform performance mode.
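As a rough illustration of what --engine selects: the demo scripts build GStreamer pipelines around NNStreamer's tensor_filter element, and its framework property chooses the backend (for example, framework=tensorflow-lite corresponds to the tflite backend and framework=armnn to the armnn backend). The following is a minimal hand-written sketch, not taken from the demo scripts; the model path is a hypothetical placeholder and the 224x224 input size must match the model actually used:
# gst-launch-1.0 v4l2src device=/dev/video5 ! videoconvert ! videoscale \
! video/x-raw,width=224,height=224,format=RGB \
! tensor_converter ! tensor_filter framework=tensorflow-lite model=/path/to/model.tflite \
! tensor_sink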
Pose estimation
Run the sample nnstreamer_example_pose_estimation_uvc.py.
Run on the MDLA with neuronsdk: --engine neuronsdk
# python3 nnstreamer_example_pose_estimation_uvc.py \
--engine neuronsdk --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the CPU with tflite: --engine tflite
# python3 nnstreamer_example_pose_estimation_uvc.py \
--engine tflite --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the GPU with ArmNN: --engine armnn
# python3 nnstreamer_example_pose_estimation_uvc.py \
--engine armnn --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Face Detection
Run the sample nnstreamer_example_face_detection_uvc.py.
Run on the MDLA with neuronsdk: --engine neuronsdk
# python3 nnstreamer_example_face_detection_uvc.py \
--engine neuronsdk --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the CPU with tflite: --engine tflite
# python3 nnstreamer_example_face_detection_uvc.py \
--engine tflite --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the GPU with ArmNN: --engine armnn
# python3 nnstreamer_example_face_detection_uvc.py \
--engine armnn --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Object detection (ssd_mobilenet_v2_coco)
Run the sample nnstreamer_example_object_detection_uvc.py.
Run on the MDLA with neuronsdk: --engine neuronsdk
# python3 nnstreamer_example_object_detection_uvc.py \
--engine neuronsdk --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the CPU with tflite: --engine tflite
# python3 nnstreamer_example_object_detection_uvc.py \
--engine tflite --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the GPU with ArmNN: --engine armnn
# python3 nnstreamer_example_object_detection_uvc.py \
--engine armnn --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Object detection (yolov5)
Run the sample nnstreamer_example_object_detection_yolov5_uvc.py.
Run on the CPU with tflite: --engine tflite
# python3 nnstreamer_example_object_detection_yolov5_uvc.py \
--engine tflite --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Run on the GPU with ArmNN: --engine armnn
# python3 nnstreamer_example_object_detection_yolov5_uvc.py \
--engine armnn --cam 5 --width 640 --height 480 --performance G1200 --fullscreen 1
Summary
In this article, we tried running NNStreamer samples for pose estimation, face detection, and object detection.
Feel free to edit the scripts and experiment with them yourself.
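As one simple experiment, the same demos can be run with different capture settings. For example, face detection at 1280x720, assuming the webcam reports that size in v4l2-ctl --list-formats-ext:
# python3 nnstreamer_example_face_detection_uvc.py \
--engine tflite --cam 5 --width 1280 --height 720 --performance G1200 --fullscreen 1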