Meeting date: 16-03-2023 (10h30)
The main objectives for this week were to find a way to generalize the inference results and to implement a solution based on the diagram, testing it with YOLOPv2.
To do this, I started by searching for existing solutions and found this one: https://github.com/dusty-nv/ros_deep_learning/. This solution has multiple nodes (one for each kind of network), but it only supports segnet, detectnet and imagenet networks, and it is written in C and C++ (harder for me to adapt).
After that, I started to create a solution myself, based on the diagram. I initially had in mind to use config files to make the node compatible with all kinds of models, but I concluded that this would make everything too complicated (similar to deepstream) and not user-friendly at all for other users - https://github.com/pytorch/vision/issues/428. I ended up taking a different approach: instead of config files, I decided to use modules (each model will have its own module) with two functions - “output_organizer” and “transforms”.
To create a module for a model, it is necessary to know how the model’s inference works, which can be learned by consulting the model’s paper/demo (similar to filling a config file) - YOLOPv2 example
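As a rough illustration of what such a module could look like, here is a minimal sketch with the two functions. The function names (“transforms” and “output_organizer”) come from the design above, but the tensor shapes, the normalization, and the YOLOPv2-style output tuple are assumptions for illustration only, not the actual implementation.

```python
# Hypothetical per-model module. The node would call "transforms" to
# preprocess the input image and "output_organizer" to split the raw
# model outputs into detection and segmentation results.
# Shapes and output layout below are assumptions, not the real code.

import numpy as np


def transforms(image):
    """Preprocess a BGR image of shape (H, W, 3) for the model.

    The real preprocessing should follow the model's paper/demo;
    here we only normalize to [0, 1] and reorder to NCHW.
    """
    tensor = image.astype(np.float32) / 255.0      # normalize to [0, 1]
    tensor = tensor.transpose(2, 0, 1)[None, ...]  # HWC -> NCHW with batch dim
    return tensor


def output_organizer(raw_outputs):
    """Split raw model outputs into (detections, segmentations).

    For a YOLOPv2-style model, raw_outputs might be a tuple of
    (boxes, drivable_area_mask, lane_line_mask); other models
    would unpack their outputs differently.
    """
    boxes, da_mask, ll_mask = raw_outputs
    detections = boxes                  # e.g. rows of [x1, y1, x2, y2, conf, cls]
    segmentations = [da_mask, ll_mask]  # one mask per segmentation task
    return detections, segmentations
```

With this layout, the generic node never needs model-specific logic: it imports the model’s module and calls the same two functions regardless of the network.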
For now, I have in mind using two ROS topics to send the inference results: one for detection messages and the other for segmentation messages:
Detection2D message
Segmentation message
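The two-topic flow can be sketched as below. To keep the sketch runnable without a ROS installation, the publishers are stand-in stubs; in the real node they would be rospy/rclpy publishers for a Detection2D-style message and a segmentation mask message. The topic names and dictionary fields are assumptions.

```python
# Minimal sketch of routing inference results to two topics.
# StubPublisher stands in for a ROS publisher so the flow is testable
# without ROS; topic names and message fields are illustrative only.

class StubPublisher:
    """Stand-in for a ROS publisher; records what would be sent."""
    def __init__(self, topic):
        self.topic = topic
        self.sent = []

    def publish(self, msg):
        self.sent.append(msg)


det_pub = StubPublisher("/inference/detections")    # Detection2D-like messages
seg_pub = StubPublisher("/inference/segmentation")  # mask-image messages


def publish_results(detections, segmentations):
    """Route the outputs of "output_organizer" to the two topics."""
    for det in detections:
        det_pub.publish({"bbox": det[:4], "score": det[4], "class_id": det[5]})
    for mask in segmentations:
        seg_pub.publish({"mask": mask})
```

Keeping detections and segmentations on separate topics lets subscribers that only care about one kind of result (e.g. a planner using bounding boxes) ignore the other entirely.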
What was done