Meeting date: 30-03-2023 (10h30)
After the last meeting, I created a launch file to automatically start the nodes. With it I could finally test whether it is possible to run two inferences in parallel. For a first test, I started two nodes, both subscribing to images from the same topic and both running inference with the YOLOPv2 model (I used a different name, yolop, for one of the namespaces to avoid conflicts):
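A minimal sketch of what such a launch file can look like, assuming ROS 2 with Python launch files (the package, executable, and topic names here are placeholders, not the real ones from my setup; only the yolop namespace trick is from the test above):

```python
# Hypothetical ROS 2 launch file: two instances of the same inference node,
# each in its own namespace, both subscribing to one camera topic.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='inference',            # placeholder package name
            executable='yolopv2_node',      # placeholder executable name
            namespace='yolopv2',
            remappings=[('image_raw', '/camera/image_raw')],
        ),
        Node(
            package='inference',
            executable='yolopv2_node',
            namespace='yolop',              # different namespace avoids node-name conflicts
            remappings=[('image_raw', '/camera/image_raw')],
        ),
    ])
```

Because each node lives in its own namespace, its topics and node name get prefixed (/yolopv2/... vs /yolop/...), so the two instances can coexist while sharing the same input topic.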
After that, I started creating a module for YOLOv5. It took me some time to get the inference running on CUDA without errors: I thought the problem was in my code, but after a while I discovered it was in the model, which had not been converted properly and contained layers incompatible with CUDA.
By default, the model requires input images of 640x640, which is far from ideal and (I think) reduces accuracy due to the heavy distortion. In addition, the labels are not right yet.
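The usual way to feed an arbitrary camera frame into a fixed 640x640 input without distorting it is "letterboxing": scale while keeping the aspect ratio, then pad the remainder. A small sketch of the arithmetic (function name is mine, not from any library):

```python
def letterbox_dims(w, h, target=640):
    """Scale (w, h) to fit inside a target x target square while keeping the
    aspect ratio, and return the resized dimensions plus the symmetric padding
    to apply on each side (the usual YOLO-style letterbox)."""
    scale = min(target / w, target / h)
    new_w, new_h = round(w * scale), round(h * scale)
    pad_w, pad_h = target - new_w, target - new_h
    # Split the padding evenly between left/right and top/bottom.
    return (new_w, new_h), (pad_w // 2, pad_h // 2)


# Example: a 1280x720 camera frame is scaled to 640x360 and padded with
# 140 px of border on top and bottom, instead of being squashed to 640x640.
print(letterbox_dims(1280, 720))  # -> ((640, 360), (0, 140))
```

After inference, the predicted boxes have to be mapped back by subtracting the padding and dividing by the scale.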
Since these two problems should be easy to fix (I hope), I set them aside and started writing a guide on using the nodes, so no details are forgotten →ROS Inference
Meanwhile, Pedro reached out to me again to run inference with his latest model. This one was trickier to get working than the previous one, since I had to do an additional step called reparameterization, where I had problems with tensor dimensions, but after a while I managed to figure out where I was going wrong. Moreover, I found out the hard way that I had installed torchvision incorrectly, and some files that were now needed were missing. I don't know why, but there isn't any documentation on how to install torchvision on Jetson. There is only this post, which was edited and had that information removed, but previous versions of the post are still viewable (I found this through a YouTube video). To remember how to install PyTorch and torchvision if needed, I wrote a guide: PyTorch and TorchVision
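For quick reference, the gist of the workaround is to build torchvision from source against the PyTorch wheel already installed on the Jetson. A sketch of the steps, assuming the common recipe (the version numbers below are illustrative only; the branch has to match the installed torch version):

```shell
# Build torchvision from source on Jetson.
# NOTE: v0.13.0 is a placeholder -- pick the torchvision branch that is
# compatible with the PyTorch version already installed.
sudo apt-get install -y libjpeg-dev zlib1g-dev
git clone --branch v0.13.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.13.0   # read by setup.py to tag the build
python3 setup.py install --user
```

The full, versioned steps are in the guide linked above.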
Note: Start recording some videos