Episodes

Friday Jun 13, 2025
Radar imaging provides non-contact, privacy-preserving, and environmentally robust monitoring for continuous human motion recognition (HMR) by leveraging diverse information embedded in various radar signal domains. However, current research has not effectively integrated multi-radar and multi-domain imaging to fully exploit the benefits of distributed radar systems. To bridge this gap, we propose a multi-radar, multi-domain parallel cross-attention model with four key components: intra-domain cross-radar weight sharing encoders specific to each domain for consistent feature extraction and parameter reduction, domain-level parallel cross-attention (DLPCAN) modules to fuse domain-specific features and enhance feature representation robustness in each radar, a source-level attention fusion (SLAF) module to highlight significant features from multiple radar inputs, and two bi-directional gated recurrent unit (BiGRU) modules to capture temporal information. The model is trained using connectionist temporal classification (CTC) loss for effective sequence prediction. By integrating data from multiple radar nodes and domains, our approach significantly improves continuous HMR performance compared to single radar systems and single domain data. Comparative evaluations demonstrate that our model outperforms state-of-the-art radar imaging-based HMR solutions.
Distributed Radar Imaging with Parallel Cross-Attention for Continuous Human Motion Recognition
Yijie Gao, Jianqiao Zhang, La Trobe University; Hao Xiong, Macquarie University; Jiquan Ma, Heilongjiang University; Qiangguo Jin, Northwestern Polytechnical University; Changyang Li, Sydney Polytechnic Institute; Peng Cheng, Hui Cui, La Trobe University
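A minimal sketch of the cross-attention primitive underlying fusion modules like DLPCAN, in plain Python. This is illustrative only: the authors' actual layer sizes, projections, and training details are not reproduced here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query vector from one
    signal domain attends over key/value vectors from another domain,
    producing a fused feature per query."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, values))
                    for i in range(len(values[0]))])
    return out
```

In a multi-domain setting, each domain's features would serve in turn as queries against the other domains' keys/values, and the fused outputs would then feed the temporal (BiGRU) stage.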

Friday Jun 13, 2025
We propose a machine learning (ML) based end-to-end framework for pilotless communications that consists of two key components. The first component is an asymmetric modulation constellation that enables pilotless communications under channel impairments. The second component is a neural network (NN) receiver featuring an architecture that has a core of several serially-connected ResNet-like blocks. The transmitter only sends data symbols (without any pilots), and the NN receiver enables pilotless communications by using the received data symbols from the asymmetric constellation to perform implicit channel estimation/compensation and generate log-likelihood ratios (LLRs) for the bits comprising the data symbols. The combination of the asymmetric modulation constellation and the NN receiver achieves similar or superior performance to a traditional zero-forcing (ZF) receiver that relies on pilot symbols for channel estimation for 64-ary and 256-ary modulations for channels with limited time and frequency selectivity.
AI/ML-Based Asymmetric Modulation Constellations and Pilotless Communications
Caleb Lo, Fabrizio Carpi, Joonyoung Cho, Samsung Research America; Charlie Zhang, Samsung
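The NN receiver produces LLRs implicitly; for reference, the classical per-bit LLR it stands in for can be computed exactly from a labeled constellation. A sketch with a Gray-labeled QPSK as a toy stand-in for the learned asymmetric constellation (the labeling and noise model here are assumptions, not the paper's design):

```python
import math

def bit_llrs(y, constellation, bits_per_symbol, noise_var):
    """Exact per-bit LLRs for a received complex symbol y over an AWGN
    channel. constellation maps each complex symbol to its bit label.
    Convention: LLR > 0 means bit 0 is more likely."""
    llrs = []
    for b in range(bits_per_symbol):
        num = den = 0.0
        for s, label in constellation.items():
            metric = math.exp(-abs(y - s) ** 2 / noise_var)
            if label[b] == 0:
                num += metric
            else:
                den += metric
        llrs.append(math.log(num / den))
    return llrs

# Toy Gray-labeled QPSK (hypothetical stand-in for the learned constellation)
qpsk = {
    complex(1, 1): (0, 0), complex(-1, 1): (1, 0),
    complex(-1, -1): (1, 1), complex(1, -1): (0, 1),
}
```

A received symbol near the (0, 0) point yields positive LLRs for both bits, which a channel decoder would then consume.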

Friday Jun 13, 2025
Next-generation mobile networks such as Beyond 5G and 6G have the potential to realize mission-critical connected services such as remote vehicle control. We previously proposed a real-time simulator in which OMNeT++, CARLA, and SUMO were connected and exchanged information to evaluate mission-critical services. A drawback of our current simulator is that the wireless propagation loss caused by surrounding buildings cannot be considered, mainly because of the calculation time required; thus, the performance of the services cannot be properly evaluated in the scenes where they are provided. In this paper, we describe a wireless propagation simulation that interacts with the remote vehicle control simulator in real-time, and evaluate its effectiveness through wireless image transmission for a remote vehicle control service.
Wireless Propagation Simulation Interacting with Remote Vehicle Control Simulator in Real-Time
Masaki TAKANASHI, Toyota Central R&D Labs.; Kengo Sasaki, Yuma Taguchi, Takashi Machida, Katsushi Sanda, Toyota Central R&D Labs., Inc.

Friday Jun 13, 2025
For future applications such as sensor sharing, computation task offloading for autonomous driving, and remote driving, robust real-time video transmission with low latency via cellular vehicle-to-everything (C-V2X) communication is essential to ensure operational reliability. Advanced communication services require sensor sharing, yet there is a lack of comprehensive latency analysis for high-volume data between vehicles and remote users or servers. Existing literature predominantly focuses on vehicle-to-vehicle communication or low-volume basic device status messages, which are insufficient for supporting advanced services such as autonomous driving. Consequently, it is crucial to analyze the latency involved in sensor sharing between a vehicle and a remote entity over different channel states. In this paper, the latency of sensor data transmission using the 5G C-V2X Uu interface is investigated in consideration of various channel states and modulation and coding schemes. Additionally, we analyze the latency depending on the resolution and encoding of the camera video image. Simulation results show the feasible frame rates and video resolutions for both raw and compressed video transmissions within these communication systems, and highlight the effects of channel state and multi-user scenarios on the feasibility of real-time camera video sharing.
Latency Analysis of 5G C-V2X Real-Time Video Transmission Over Different Channel States
Hanyoung Park, Yongjae Jang, DGIST; Ji-Woong Choi, Daegu Gyeongbuk Institute of Science and Technology
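The dependence of latency on resolution, compression, and link throughput can be sketched with back-of-envelope arithmetic (illustrative only; the paper's simulation models the full 5G protocol stack, which this ignores):

```python
def frame_bits(width, height, bits_per_pixel=24, compression_ratio=1.0):
    """Size of one video frame in bits; compression_ratio=1.0 means raw."""
    return width * height * bits_per_pixel / compression_ratio

def transmission_latency_ms(frame_size_bits, throughput_mbps):
    """Serialization latency of one frame over a link of given throughput."""
    return frame_size_bits / (throughput_mbps * 1e6) * 1e3

def max_frame_rate(throughput_mbps, frame_size_bits):
    """Highest sustainable frame rate for back-to-back frame transmission."""
    return throughput_mbps * 1e6 / frame_size_bits
```

For example, a raw 1280x720 frame at 24 bits/pixel is about 22.1 Mbit, so even a 100 Mbps link sustains under 5 fps raw, while 50:1 compression brings the same link to well over 200 fps.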

Friday Jun 13, 2025
A notable challenge in Electric Vehicle (EV) charging is the time required to fully charge the battery, which can range from 15 minutes to 2-3 hours. However, this idle period for the EV presents an opportunity to offer time-consuming or data-intensive services such as vehicular software updates. ISO 15118 introduced the concept of Value-Added Services (VASs) in the charging scenario, but it remains underexplored in the literature. Our paper addresses this gap by proposing EVOLVE, the first EV charger compute architecture that supports secure on-charger universal applications with upstream and downstream communication. The architecture covers the end-to-end hardware/software stack, including a standard API for vehicles and IT infrastructure. We demonstrate the feasibility and advantages of EVOLVE by employing and evaluating three suggested value-added services: vehicular software updates, security information and event management (SIEM), and secure payments. The results demonstrate significant reductions in bandwidth utilization and latency, as well as high throughput, which supports this novel concept and suggests a promising business model for Electric Vehicle charging station operation.
EVOLVE: a Value-Added Services Platform for Electric Vehicle Charging Stations
Erick Silva, King Abdullah University of Science and Technology (KAUST); Tadeu Freitas, Faculty of Science of University of Porto; Rehana Yasmin, Ali Shoker, King Abdullah University of Science and Technology; Paulo Esteves-Verissimo, RC3, KAUST

Friday Jun 13, 2025
Overcoming the limitations of individual-vehicle line-of-sight (LOS) sensing holds significant importance for guaranteeing driving safety. With a wider perception field, vehicle-infrastructure (VI) collaborative perception can provide vehicles with more comprehensive perception assistance, and has received widespread attention in recent years. However, the perception data fusion between infrastructure and vehicles is still impeded by issues such as large data volume and complex processing procedures, constituting a threat to driving safety. To deal with these issues, this paper proposes a VI collaborative environment perception approach based on sparse bird’s eye view (BEV) features. By leveraging the sparse BEV representation, features can be fused within a unified perspective in a lightweight manner, thereby enhancing the efficiency of feature fusion and reducing redundancy. Additionally, we present a solution for processing the overlapping features between the EGO-vehicle and roadside unit (RSU) by taking the union of the coordinate points. Finally, the applicable vehicle and RSU datasets are collected through CARLA. The experimental results demonstrate that the proposed approach can effectively mitigate the limitations of individual-vehicle perception by compensating for occluded information and provide a more comprehensive perception field.
A Vehicle-Infrastructure Collaborative Environment Perception Approach based on Sparse BEV Features
Zhixuan Liu, Yuchuan Fu, Zhenyu Li, Changle Li, Nan Cheng, Ruijin Sun, Xidian University; Jun Zheng, Southeast University
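Taking the union of sparse BEV coordinate points can be pictured as merging two dictionaries keyed by grid cell. A minimal sketch: the union over cells follows the abstract, but the merge rule for overlapping cells (element-wise averaging here) is an assumption for illustration, not the authors' method.

```python
def fuse_sparse_bev(ego_feats, rsu_feats):
    """Fuse two sparse BEV feature maps, each a dict {(row, col): feature
    vector}. Non-overlapping cells are kept as-is; overlapping cells are
    merged by element-wise averaging (one simple, hypothetical choice)."""
    fused = dict(ego_feats)
    for cell, feat in rsu_feats.items():
        if cell in fused:
            fused[cell] = [(a + b) / 2 for a, b in zip(fused[cell], feat)]
        else:
            fused[cell] = feat
    return fused
```

Because only occupied cells are stored, the fused map stays sparse: RSU cells that the ego vehicle cannot see (e.g. behind occlusions) are simply added to its map.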

Friday Jun 13, 2025
In recent decades, minimizing data transmission delay has been a key metric for Ultra Reliable Low-Latency Communication (URLLC). However, low-delay transmission does not always ensure high-quality network service, particularly for real-time applications where the timeliness of data is crucial. This has led to the emergence of the Age of Information (AoI) as a key metric for tracking the freshness of transmitted updates. AoI is especially relevant in large-scale networks like the Internet of Things (IoT), where massive updates are frequently sent. IoT nodes in regions with poor terrestrial network coverage can benefit from Low Earth Orbit (LEO) satellites, which provide reliable connectivity. This work introduces a stochastic geometry-based model to study uplink AoI in fully loaded large-scale IoT networks, where both LEO satellites and ground nodes follow independent Poisson Point Processes (PPPs). In this model, each source node uses a non-preemptive transmission scheme, meaning it completes the current update transmission before handling new ones. Numerical results highlight the influence of varying satellite numbers and altitudes on AoI performance.
Statistical Analysis for Average Peak Age of Information in LEO Satellite-Enabled IoT
Badiaa Gabr, National University of Ireland, Maynooth, Ireland; Mustafa Kishk, Maynooth University
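The peak-AoI quantity and the non-preemptive service discipline can be illustrated with a small Monte-Carlo sketch. The FCFS queueing and exponential inter-arrival/service assumptions below are illustrative choices, not the paper's stochastic-geometry model:

```python
import random

def average_peak_aoi(inter_arrival_mean, service_mean, n_updates=10000, seed=0):
    """Monte-Carlo estimate of the average peak AoI for a single source
    with FCFS, non-preemptive service: the update in transmission always
    finishes before a newer one is served."""
    rng = random.Random(seed)
    t = 0.0            # generation time of the current update
    server_free = 0.0  # instant the channel becomes free again
    last_gen = 0.0     # generation time of the previously delivered update
    peaks = []
    for _ in range(n_updates):
        t += rng.expovariate(1.0 / inter_arrival_mean)
        start = max(t, server_free)               # wait for the channel
        finish = start + rng.expovariate(1.0 / service_mean)
        server_free = finish
        # peak AoI: age of the previous update just before this delivery
        peaks.append(finish - last_gen)
        last_gen = t
    return sum(peaks) / len(peaks)
```

For light load (mean inter-arrival 2.0, mean service 0.5) the estimate lands near the M/M/1 FCFS value of one inter-arrival time plus one mean system time, about 2.67 time units.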

Friday Jun 13, 2025
DECT-2020 NR is a recently standardized radio access technology (RAT) for massive machine-type 5G communications operating in the license-exempt and licensed bands. By relying on multi-hop communications and listen-before-talk (LBT) medium access, DECT-2020 NR allows for cost-controlled flexible deployments. However, the medium access procedures for some of the bands currently used by classic DECT systems are strictly regulated by long-standing policies in the USA. The goal of this study was to evaluate and compare the performance of DECT-2020 NR and the medium access specified by the Federal Communications Commission (FCC) for the Unlicensed Personal Communications Service (UPCS) band. Our results demonstrate that in a mesh system, the use of FCC access rules leads to drastic performance degradation, with fewer than 85% of packets delivered compared to 98-99% for DECT-2020 NR. This comes at only a slight 1-2% degradation for coexisting classic DECT devices. For efficient use of the UPCS band, we recommend revisiting the current FCC rules to allow LBT-based operation.
Performance of DECT-2020 NR and FCC-Based Mesh IoT Systems in DECT Bands
Andrey Samuylov, Roman Glazkov, Dmitri Moltchanov, Tampere University; Juho Pirskanen, Jussi Numminen, Wirepas Oy; Mikko Valkama, Tampere University

Friday Jun 13, 2025
Dynamic resource allocation is crucial for sustaining optimal network performance in Internet of Things (IoT) environments. The frequent arrival and departure of devices result in dynamic topology changes, which pose significant challenges to effective resource allocation. To address these limitations, this study introduces a method that leverages hypergraph modeling to explicitly characterize multi-node resource collision relationships and proposes a graph reinforcement learning method with hypergraph convolutions for dynamic resource allocation. Experimental evaluations indicate that the proposed method outperforms the compared approaches in channel allocation efficiency and resource utilization.
Dynamic IoT Resource Allocation Using Graph Reinforcement Learning with Hypergraph Convolutions
Shilong Zhang, Hosei University; Tong Liu, Jinhua Chen, Hosei University, Japan; Franck Junior Aboya Messou, Keping Yu, Hosei University; Mohsen Guizani, Qatar University
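A hyperedge can group all nodes that collide on the same channel, and a hypergraph convolution propagates features through those groups. A minimal unweighted layer (no learned parameters or nonlinearity, which the paper's model would add) as a sketch of the operation:

```python
def hypergraph_conv(X, H):
    """One simplified hypergraph convolution step:
    node -> incident-hyperedge average -> node average, i.e.
    X' = D_v^-1 H D_e^-1 H^T X.
    X: node features (n x f); H: incidence matrix (n x m),
    H[v][e] = 1 iff node v belongs to hyperedge e."""
    n, m = len(H), len(H[0])
    f = len(X[0])
    d_e = [sum(H[v][e] for v in range(n)) for e in range(m)]  # edge degrees
    d_v = [sum(H[v][e] for e in range(m)) for v in range(n)]  # node degrees
    # hyperedge features: average over member nodes
    E = [[sum(H[v][e] * X[v][i] for v in range(n)) / d_e[e] for i in range(f)]
         for e in range(m)]
    # node update: average over incident hyperedges
    return [[sum(H[v][e] * E[e][i] for e in range(m)) / d_v[v] for i in range(f)]
            for v in range(n)]
```

Unlike a pairwise graph convolution, one hyperedge lets a whole collision group (three or more devices sharing a channel) exchange information in a single step.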

Friday Jun 13, 2025
Future cellular networks will sustainably integrate computing, intelligence and services within a “network of networks” ecosystem that includes IoT devices and subnetworks for local communications and distributed processing. This integration creates an IoT-edge-cloud continuum that enables opportunistic task offloading across the continuum, enhancing network performance, reducing response times and allowing flexible resource allocation that enables the system to scale with demand. Future networks should also natively support deterministic service levels for critical and time-sensitive vertical applications. In this paper, we propose a deterministic task offloading and resource allocation scheme for the joint management of communication and computing resources in the IoT-edge-cloud continuum. The proposed scheme prioritizes task completion before deadlines over minimizing the latency in the execution of individual tasks. The scheme leverages flexible latencies across tasks to support a higher number of tasks through more efficient management of computing and communication resources that better adapts to scenarios with constrained resources.
Deterministic Task Offloading and Resource Allocation in the IoT-Edge-Cloud Continuum
Keyvan Aghababaiyan, Baldomero Coll-Perales, Universidad Miguel Hernandez de Elche; Javier Gozálvez, Universidad Miguel Hernandez de Elche (UMH)
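The idea of prioritizing deadline completion over minimum per-task latency can be illustrated with an earliest-deadline-first feasibility check on a single shared resource. This is a textbook sketch in the spirit of the scheme, not the authors' allocation algorithm:

```python
def edf_feasible(tasks, capacity):
    """Check whether every task meets its deadline when tasks are served
    earliest-deadline-first on one resource of the given capacity
    (work units per second). tasks: list of (work, deadline) pairs,
    all assumed to be released at time 0."""
    t = 0.0
    for work, deadline in sorted(tasks, key=lambda task: task[1]):
        t += work / capacity        # finish time of this task
        if t > deadline:
            return False
    return True
```

Note that the second task below finishes later than it would if served first, yet the set remains feasible: stretching latency within each task's deadline is what lets more tasks share constrained resources.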