Meeting date: 09-03-2023 (10h30)

Summary

Unfortunately, my grandmother passed away last week and I couldn't work at my usual pace. I had to travel home on public transportation and didn't think it was safe to bring the Jetson with me. I ended up working directly on my computer using a Docker container, keeping in mind that everything I did had to be deployable on the Jetson. I worked mostly on implementing ROS according to the diagram discussed last week.

IMG_20230303_164033.jpg

I performed some tests running inference with the model in TensorRT format while varying the data type of the weights between float16 and float32, which also varies the model's accuracy. Since I got much faster inferences by decreasing the precision of the weights, and since it seemed to be the simplest solution, I decided to try this method in the "Inference Solution" module.
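The accuracy side of this trade-off can be illustrated outside TensorRT with a small NumPy sketch (a hypothetical example, not the actual engine code — a real FP16 engine would be built with TensorRT itself): casting weights to float16 introduces a small rounding error relative to the float32 reference.

```python
import numpy as np

# Simulate one layer's weights at full precision.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=(256, 256)).astype(np.float32)
x = rng.normal(size=(256,)).astype(np.float32)

# Reference output with float32 weights.
y_fp32 = weights_fp32 @ x

# Cast the weights to float16, roughly what an FP16 engine does,
# then compute the same output.
weights_fp16 = weights_fp32.astype(np.float16)
y_fp16 = weights_fp16.astype(np.float32) @ x

# The error is small but nonzero: precision is traded for speed and size.
rel_error = np.abs(y_fp16 - y_fp32).max() / np.abs(y_fp32).max()
print(f"max relative error from float16 weights: {rel_error:.2e}")
```

In TensorRT the same choice is made when building the engine (e.g. enabling the FP16 builder flag), so the comparison above only mimics the numerical effect, not the runtime speed-up.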

My first idea for the “Inference Manager” was to make it capable of handling multiple kinds of models automatically by loading some config files. To do this, there are at least two challenges:

A more manual solution is to create or adapt an inference script for each model and use it as the "Inference Solution". There are multiple inference examples for some models, provided by the authors or by the community. This is the easiest solution for a single implementation, and I'm trying it with a YOLOPv2 example since it is the model I know best.
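The config-file idea for the "Inference Manager" could be sketched like this (all names here — the config keys, `infer_yolopv2`, the file names — are hypothetical placeholders, not a decided format): each model ships a small JSON config, and the manager dispatches to the matching per-model inference script.

```python
import json

# Hypothetical per-model config, as it might be loaded from a file on disk.
CONFIG_JSON = """
{
  "model": "yolopv2",
  "engine": "yolopv2_fp16.trt",
  "precision": "float16",
  "script": "infer_yolopv2"
}
"""

def infer_yolopv2(engine_path, precision):
    # Placeholder for the real per-model TensorRT inference code.
    return f"running {engine_path} at {precision}"

# Registry mapping config "script" names to inference callables.
SCRIPTS = {"infer_yolopv2": infer_yolopv2}

def run_from_config(raw_json):
    """Load a model config and dispatch to its inference script."""
    cfg = json.loads(raw_json)
    script = SCRIPTS[cfg["script"]]
    return script(cfg["engine"], cfg["precision"])

print(run_from_config(CONFIG_JSON))
```

This keeps the automatic, config-driven dispatch of the first idea while each "Inference Solution" remains a plain per-model script, which matches the manual approach being tried with YOLOPv2.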


PSA

I have some versions of the drone's CAD, but I'm not sure if I have the final one. I asked João and he told me that he probably has it, but he currently doesn't have his computer since he asked someone to format it and install a new OS. He told me he will send me the files once he has the computer back.

I can send my files; however, it is necessary to check the dimensions before printing.


Objectives


Daily report