Poster #300
Unified Local-Cloud Decision-Making via Reinforcement Learning
Kathakoli Sengupta · Zhongkai Shangguan · Sandesh Bharadwaj · Sanjay Arora · Eshed Ohn-Bar · Renato Mancuso
Strong Double Blind
Embodied vision-based real-world systems, such as mobile robots, require careful balancing of energy consumption, compute latency, and safety constraints to optimize operation across dynamic tasks and contexts. As local computation tends to be restricted, offloading computation, e.g., to a remote server, can save local resources while providing access to high-quality predictions from powerful, large models. Yet, the resulting communication and latency overhead has limited the usability of cloud models in dynamic, safety-critical, real-time settings. To effectively address this trade-off, we introduce UniLCD, a novel hybrid inference framework that enables flexible local-cloud collaboration. By efficiently optimizing a flexible routing module via reinforcement learning with a suitable multi-task objective, UniLCD is specifically designed to support the multiple constraints of safety-critical end-to-end mobile systems. We validate the proposed approach on a challenging crowded navigation task requiring frequent and timely switching between local and cloud operations. UniLCD improves both overall performance and efficiency by over 17% compared to state-of-the-art baselines based on various split computing strategies.
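To make the trade-off concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the kind of per-step reward an RL-trained router might optimize: the agent picks local or cloud inference, and the reward combines task progress with penalties for energy, latency, and safety violations. All names, cost values, and weights below are illustrative assumptions.

```python
# Illustrative sketch of a local/cloud routing reward, assuming:
# - cloud inference is more accurate but pays communication latency,
# - local inference is faster to reach but pays compute energy,
# - collisions incur a large safety penalty.
LOCAL, CLOUD = 0, 1

def step_reward(action, task_gain, collided,
                w_energy=0.1, w_latency=0.1, w_safety=10.0):
    # Hypothetical per-step costs for each routing choice.
    energy = 1.0 if action == LOCAL else 0.2    # local compute draw
    latency = 0.05 if action == LOCAL else 0.5  # round-trip delay (s)
    reward = task_gain - w_energy * energy - w_latency * latency
    if collided:                                # safety constraint term
        reward -= w_safety
    return reward

# Example: a cloud call with higher task gain vs. a cheaper local call.
r_cloud = step_reward(CLOUD, task_gain=1.0, collided=False)  # 0.93
r_local = step_reward(LOCAL, task_gain=0.6, collided=False)  # 0.495
```

A routing policy trained against such a reward learns when the accuracy gain of the cloud model outweighs its latency cost, which is the decision UniLCD's routing module is optimized to make.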