
Friday Jun 13, 2025
Deep Reinforcement Learning-Based Vehicular Computation Offloading with Edge-to-Edge Collaboration
The expansion of the Internet of Vehicles (IoV) has spurred a significant increase in the demand for vehicular computation tasks, posing challenges for in-vehicle task processing. Multi-access edge computing (MEC), which is intended for low-latency task execution, suffers from sub-band competition and workload imbalance due to the uneven distribution of vehicle densities. This paper presents a novel IoV architecture that leverages multiple roadside units (RSUs) to achieve efficient load balancing among RSUs through edge-to-edge collaboration. The computation offloading problem is formulated as minimizing the overall task delay and is decoupled into two sub-problems: communication resource allocation and load balancing. We devise a two-stage deep reinforcement learning-based communication resource allocation and load balancing (DRLCL) algorithm to tackle these sub-problems sequentially. Experimental evaluations based on real-world vehicle trajectories show that the proposed algorithm outperforms the baselines in reducing overall delay.
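The two-stage decoupling described in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration: the delay model, the RSU/vehicle names, and the simple greedy heuristics that stand in for the paper's two DRL stages (the actual DRLCL algorithm trains learned policies, not these heuristics).

```python
# Toy sketch of the two-stage decomposition (illustrative assumptions only;
# greedy heuristics stand in for the paper's DRL agents).

def task_delay(load: int, bandwidth: float) -> float:
    """Assumed model: transmission time plus load-dependent queuing delay."""
    return 1.0 / bandwidth + 0.1 * load

def stage1_subbands(vehicles, subbands):
    """Stage 1 (communication resource allocation): cycle vehicles over the
    available sub-bands -- a trivial stand-in for the first DRL stage."""
    return {v: subbands[i % len(subbands)] for i, v in enumerate(vehicles)}

def stage2_balance(assignment, rsus):
    """Stage 2 (load balancing): migrate tasks from the busiest RSU to the
    least busy one (edge-to-edge collaboration) until loads are even."""
    loads = {r: sum(1 for a in assignment.values() if a == r) for r in rsus}
    while max(loads.values()) - min(loads.values()) > 1:
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        mover = next(v for v, r in assignment.items() if r == hi)
        assignment[mover] = lo
        loads[hi] -= 1
        loads[lo] += 1
    return assignment

# Demo: six vehicles on a dense road segment all offload to one RSU at first.
vehicles = [f"v{i}" for i in range(6)]
bw = stage1_subbands(vehicles, subbands=[10.0, 5.0])
assign = {v: "rsu_A" for v in vehicles}  # unbalanced starting point

def rsu_load(a, r):
    return sum(1 for x in a.values() if x == r)

total_before = sum(task_delay(rsu_load(assign, assign[v]), bw[v]) for v in vehicles)
assign = stage2_balance(assign, rsus=["rsu_A", "rsu_B"])
total_after = sum(task_delay(rsu_load(assign, assign[v]), bw[v]) for v in vehicles)
```

Under these assumed numbers, spreading the load evenly across the two RSUs lowers the summed task delay, which mirrors the abstract's claim that edge-to-edge load balancing reduces overall delay when vehicle density is uneven.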
Quan Chen, Shumo Wang, Southeast University; Xiaoqin Song, Nanjing University of Aeronautics and Astronautics; Tiecheng Song, Southeast University