The main idea behind this approach is to keep everything modular and therefore simple and organized. This way, it is relatively straightforward to make any changes needed to improve compatibility.

Here is a basic diagram to better illustrate how things are organized:

Week05_Dissertation-4-2.svg

As you can see there are four main blocks. Let’s break them down:

The 4 main blocks

Image sender


This node only reads the image from the source and publishes it on a ROS topic.

It’s a temporary node, used for testing and demonstration.

Inference node


This block consists of two sub-blocks: the Inference Manager and the Inference Solution.
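A minimal sketch of how the two sub-blocks could relate, assuming the Manager handles ROS-side plumbing and delegates the actual prediction to a pluggable Solution. The class and method names here are illustrative, not the actual implementation:

```python
class InferenceSolution:
    """Placeholder base class for one concrete model wrapper
    (e.g. a TorchScript model)."""

    def infer(self, image):
        raise NotImplementedError

class InferenceManager:
    """Receives images (in the real node, via a ROS topic callback),
    forwards them to the configured solution, and returns the result
    for publishing on the results topic."""

    def __init__(self, solution):
        self.solution = solution

    def handle_image(self, image):
        return self.solution.infer(image)
```

Because the Manager only talks to the `infer` interface, swapping one model backend for another should not require touching the ROS plumbing.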

Possible modification 1: Add a parameter for the model type (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, CoreML, TensorFlow SavedModel, TensorFlow GraphDef, TensorFlow Lite, TensorFlow Edge TPU, TensorFlow.js, PaddlePaddle, or a custom one) and load the model in a dedicated module (to increase compatibility).

Model


Temporary: the model must be in TorchScript format.

Possible modification 1: Add a parameter for the model type (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, CoreML, TensorFlow SavedModel, TensorFlow GraphDef, TensorFlow Lite, TensorFlow Edge TPU, TensorFlow.js, PaddlePaddle, or a custom one) and load the model in a dedicated module (to increase compatibility).

Receiver


This node only subscribes to the inference results from each ROS topic and merges them with the original image.

It’s a temporary node, used for testing and demonstration.
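The merging step above can be sketched as a pure function that collects the detections arriving from each inference topic and attaches them to the original image, tagging each detection with its source. The dict layout and topic names are placeholders for the real ROS message types:

```python
def merge_inferences(image, results_by_topic):
    """Combine per-topic inference results with the original image.

    `image` stands in for the original frame and each entry of
    `results_by_topic` for the detections received on one inference
    topic; every detection is tagged with the topic it came from.
    """
    merged = {"image": image, "detections": []}
    for topic, detections in results_by_topic.items():
        for det in detections:
            merged["detections"].append({**det, "source": topic})
    return merged
```

Tagging each detection with its source topic keeps the merged output traceable when several models run in parallel.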