The aim of the live demo is to classify different Lego figures (craftsman, astronaut, cook, etc.) or different traffic signs. This task is performed by neural networks, more precisely by Convolutional Neural Networks (CNNs).
Basler initially trained two different CNNs: one for the classification of Lego figures, the other for the classification of traffic signs. At only a few megabytes each, the trained CNNs are small enough to be transferred from the cloud to the edge device in an acceptable amount of time, even over a low-bandwidth connection. After the Lego figure CNN had been transferred, the edge device was able to reliably classify the figures and report the results to the cloud with low bandwidth requirements and low latency. To "retool" the edge device to classify traffic signs, only the corresponding traffic sign CNN had to be transmitted from the cloud; the smart sensor was then able to reliably detect the different traffic signs.
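The retooling workflow described above could be sketched as follows. This is a minimal illustration of the idea only: the model registry, task names, labels, and the stubbed classifier are hypothetical placeholders, not Basler's actual software or models.

```python
# Illustrative sketch: an edge device holds one active CNN at a time and is
# "retooled" by downloading a different small model from the cloud.
# All names, sizes, and labels below are assumed for the example.

MODEL_REGISTRY = {
    "lego_figures": {"size_mb": 4, "labels": ["craftsman", "astronaut", "cook"]},
    "traffic_signs": {"size_mb": 6, "labels": ["stop", "yield", "speed_limit"]},
}

class EdgeDevice:
    def __init__(self):
        self.active_model = None

    def retool(self, task: str) -> None:
        """Swap the active CNN by 'transferring' the matching model entry.

        In the real demo this would download a few-megabyte model file
        from the cloud over a low-bandwidth connection.
        """
        self.active_model = MODEL_REGISTRY[task]

    def classify(self, detection_index: int) -> str:
        """Return the label the (stubbed) active CNN would report to the cloud."""
        labels = self.active_model["labels"]
        return labels[detection_index % len(labels)]

device = EdgeDevice()
device.retool("lego_figures")
print(device.classify(1))   # astronaut
device.retool("traffic_signs")  # swap in the traffic-sign CNN
print(device.classify(0))   # stop
```

The key point the sketch mirrors is that only the small model changes hands during retooling; the device and its inference pipeline stay in place.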