Distributed and Context-Aware Application of Deep Neural Networks in Mobile 3D-Multi-Sensor Systems Based on Cloud-, Edge- and FPGA-Computing
The use of deep neural networks (DNNs) for 3D image processing significantly enhances the visual cognition of mobile systems by incorporating spatial information. However, training and execution require high computing power. This is critical in applications with real-time constraints, since mobile systems have limited resources. Current approaches do not consider the use of 3D sensing. Furthermore, the suggested system architectures focus solely on cloud and edge computing, combined with load balancing and parallelization for the distributed execution of DNNs. In contrast, we propose a novel system architecture for the distributed and context-aware use of DNNs for image processing tasks in mobile 3D-multi-sensor systems. Here, the scalable cloud and edge infrastructure is complemented by real-time-capable and energy-efficient FPGA computing. The publish-subscribe pattern facilitates the distributed execution of DNNs as well as their dynamic deployment. Moreover, context information is taken into account: a rule-based context model dynamically loads specialized DNNs and selects appropriate devices for their execution. Finally, a case study on a mobile 3D-multi-sensor system for wheeled walkers demonstrates the applicability and benefits of the proposed approach.
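The idea of a rule-based context model that loads a specialized DNN and picks an execution device can be sketched as follows. This is a minimal illustration, not the paper's implementation; all names (`ContextRule`, `select`, the model identifiers, and the threshold values) are hypothetical assumptions.

```python
# Hypothetical sketch of a rule-based context model: given context
# attributes, pick a specialized DNN and an execution target
# (cloud, edge, or FPGA). Rules are evaluated in priority order.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ContextRule:
    condition: Callable[[dict], bool]  # predicate over the context
    model: str                         # specialized DNN to load
    device: str                        # "fpga", "edge", or "cloud"

# Illustrative rules; thresholds and model names are assumptions.
RULES = [
    # Hard real-time tasks run on the energy-efficient FPGA.
    ContextRule(lambda c: c["latency_ms"] <= 10, "obstacle_net_small", "fpga"),
    # Moderate latency with connectivity: offload to the edge.
    ContextRule(lambda c: c["connected"] and c["latency_ms"] <= 100,
                "obstacle_net_medium", "edge"),
    # Fallback: the scalable cloud back end.
    ContextRule(lambda c: True, "obstacle_net_large", "cloud"),
]

def select(context: dict) -> tuple[str, str]:
    """Return (model, device) for the first matching rule."""
    for rule in RULES:
        if rule.condition(context):
            return rule.model, rule.device
    raise ValueError("no rule matched")
```

In a publish-subscribe setting, the selected device would subscribe to the sensor topic and publish inference results, so deployment decisions stay decoupled from the producers and consumers of the data.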
deep neural networks, context awareness, 3D-sensing, edge-computing, cloud-computing, FPGA-computing