Run YoloV5s with TensorRT and DeepStream on Nvidia Jetson Nano
This article will help you to run your YoloV5s model with TensorRT and DeepStream.
Note: This article links to GitHub repos that will help you run your model with TensorRT and DeepStream. I will walk you through the steps to run the COCO-pretrained model; for a custom-trained model, the linked repos describe the steps to follow.
Getting Started…
In my previous article (link), I focused on how to set up your Jetson Nano and run inference on the YoloV5s model. For this article, I used the Docker image from the Hello AI course by Nvidia (YouTube link) and ran inference on YoloV5s with TensorRT optimization. After that, I installed DeepStream on the Nano and ran inference on YoloV5s with it.
Assuming you are using the official repo to train/run your YoloV5s model and the folder is in the home directory:
1. Run this command to check your JetPack/L4T version:
$sudo apt-cache show nvidia-jetpack
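If you just want the version line, you can filter the output; the JetPack version tells you which L4T (r32.x.x) container tag to use later on:
$sudo apt-cache show nvidia-jetpack | grep -m1 Version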
2. Clone this repo and pull the Docker image from here, as per your JetPack version. This is the official repo for the Hello AI course by Nvidia. The Docker image has everything pre-installed: PyTorch, TensorRT, etc. (Follow the initial steps in the repo on how to clone it and pull the Docker container; a manual pull example is shown after the clone command below.)
$git clone --recursive https://github.com/dusty-nv/jetson-inference
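The docker/run.sh script (next step) should pull the matching container for you automatically, but if you prefer to pull it up front, something like the following should work; treat it as a sketch and replace r32.x.x with the tag matching your L4T version:
$docker pull dustynv/jetson-inference:r32.x.x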
3. Now move into the jetson-inference folder (created by cloning the repo) and run the Docker container you just downloaded:
$cd jetson-inference
$docker/run.sh
If you get an error like: Error response from daemon: unauthorized: authentication required. See 'docker run --help'.
Then run:
$docker/run.sh -c dustynv/jetson-inference:r32.x.x
r32.x.x -> replace with the version number of the Docker container you pulled
4. Check that you can use the Docker container by typing python (a quick sanity check is shown below). After confirming, exit and come back to the home directory.
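For example, inside the container you could confirm that PyTorch and TensorRT are importable with a one-liner (a minimal sketch; the versions printed will depend on your container tag):
$python3 -c "import torch, tensorrt; print(torch.__version__, tensorrt.__version__)"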
5. After you have trained your model (or if you want to run inference on the COCO-pretrained model), convert the model from .pt to .wts format and build the TensorRT engine. Follow this repo: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5 (a rough outline is sketched below; the repo's README is the source of truth).
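As a hedged outline only (flags and file names have changed across versions of the tensorrtx repo, so follow the README for the version you cloned), the conversion and engine build usually look roughly like this:
$cp tensorrtx/yolov5/gen_wts.py yolov5/   # copy the converter into the YoloV5 repo folder
$cd yolov5
$python gen_wts.py -w yolov5s.pt -o yolov5s.wts   # .pt -> .wts
$cd ~/tensorrtx/yolov5
$mkdir build && cd build
$cp ~/yolov5/yolov5s.wts .
$cmake .. && make   # needs TensorRT and OpenCV, both present on JetPack and in the container
$./yolov5 -s yolov5s.wts yolov5s.engine s   # serialize the engine for the "s" model size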
6. Once you have followed the steps in the above repo (you should now have a folder named tensorrtx in your home directory), mount that folder while starting the Docker container so the TensorRT engine can be run inside it:
$docker/run.sh -c dustynv/jetson-inference:r32.5.0 --volume ~/tensorrtx/:/tensorrtx/
Now your Docker container can access the tensorrtx folder stored in the home directory.
7. Now run these commands to test your TensorRT engine:
$cd /tensorrtx/yolov5
$python yolov5_trt.py
8. Now install the DeepStream SDK on your Nano from here (Nvidia's site). Exit the Docker container first; the container we used doesn't have DeepStream installed. To download the DeepStream SDK, use this link (Nvidia's site). A rough install sketch is shown below.
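For reference, one of the install routes Nvidia documents for Jetson is via apt. This is only a sketch under the assumption that you are on DeepStream 5.1 / JetPack 4.5.x; check the quickstart page for the prerequisites and package name matching your release:
$sudo apt update
# install the GStreamer prerequisites listed on Nvidia's quickstart page first, then:
$sudo apt install deepstream-5.1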
9. After setting up DeepStream, to run your YoloV5s TensorRT engine with DeepStream, follow this repo.
10. Assuming you are in the home directory after setting up DeepStream, run your YoloV5s TensorRT engine with DeepStream as follows (a note on pointing the config at your own video follows the commands):
$cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
$deepstream-app -c deepstream_app_config.txt
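To run on your own video instead of the repo's default source, the [source0] section of deepstream_app_config.txt is where the input is set. The snippet below is only an illustration (the exact keys depend on the config shipped with the repo you followed, and the file path is a hypothetical example):
[source0]
enable=1
type=3   # 3 = MultiURI (file) source
uri=file:///home/user/videos/sample.mp4
num-sources=1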
Thanks for reading my blog. I hope it helped you run a YoloV5s TensorRT engine with DeepStream. If you find any issues or a better resource, do mention it in the comments.
Thanks :).
Do connect with me on LinkedIn :)