Episodes

Friday Jun 13, 2025
With the advancement of Internet of Things (IoT) applications, it is essential to utilize an IoT platform to facilitate data exchange and application deployment. Existing platforms are typically either data-cloud-based or data-centralized, relying on servers as repeaters to exchange data. However, these architectures often face limitations related to triangle routing, network bottlenecks, and data scalability, particularly in AIoT (Artificial Intelligence of Things) applications that require the fusion of numerous high-volume data streams. These challenges can be significantly mitigated through data-decentralized direct sender-to-receiver exchanges coordinated by a remote ‘Agent’ responsible for connectivity management. This work presents a prototype data-decentralized AIoT platform (DAIoTtalk) featuring peer-to-peer communications empowered by customized gRPC remote procedure calls based on the publish-subscribe (pub-sub) paradigm. An extension of IoTtalk, DAIoTtalk provides device management with more adaptable node networking and offers a test bed for low-code development with decentralized communications. Extensive experiments demonstrate that our design achieves at least three times the efficiency of a data-centralized approach. We also develop a case study to showcase the flexibility of our platform.

DAIoTtalk: A Data-Decentralized Pub-Sub AIoT Platform
Kit-Lun Tong, University of East Anglia; Hung-Cheng Lin, Kun-Ru Wu, National Yang Ming Chiao Tung University; Yi Ren, Gerard Parr, University of East Anglia; Yu-Chee Tseng, National Yang Ming Chiao Tung University
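The core idea above can be illustrated with a minimal sketch: an Agent handles only connectivity management (topic-to-subscriber lookup), while data travels directly from sender to receiver rather than through a server acting as repeater. All class and method names here are illustrative, not the DAIoTtalk API, and the plain-Python delivery below stands in for the customized gRPC calls the paper describes.

```python
class Agent:
    """Connectivity manager: tracks which subscribers want which topic."""
    def __init__(self):
        self.subscriptions = {}  # topic -> list of subscriber objects

    def subscribe(self, topic, subscriber):
        self.subscriptions.setdefault(topic, []).append(subscriber)

    def subscribers_of(self, topic):
        return self.subscriptions.get(topic, [])


class Subscriber:
    def __init__(self):
        self.inbox = []

    def deliver(self, topic, payload):
        self.inbox.append((topic, payload))


class Publisher:
    def __init__(self, agent):
        self.agent = agent

    def publish(self, topic, payload):
        # Data goes straight to each subscriber: the Agent is consulted
        # only for addressing, never used as a data repeater.
        for sub in self.agent.subscribers_of(topic):
            sub.deliver(topic, payload)


agent = Agent()
camera_feed = Subscriber()
agent.subscribe("camera/frames", camera_feed)
Publisher(agent).publish("camera/frames", b"frame-001")
```

This avoids the triangle routing the abstract mentions: the payload never transits the Agent, so the Agent cannot become a data bottleneck.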

Friday Jun 13, 2025
LoRa/LoRaWAN is one of the most popular technologies for IoT use cases requiring wide coverage and energy efficiency. In an IoT network, devices are often powered by unreliable energy sources, and replacing them can be difficult and rapidly increase network operating costs. For these reasons, it is essential to optimize the energy consumed by an IoT sensor, and this optimization must be based on a refined energy consumption model. In this work, an energy consumption model is proposed that takes into account the complexity of the sensor environment, the LoRa/LoRaWAN specifications, and gateway input saturation. Simulation results put the energy consumption and the transmission success probability into perspective to show the limiting number of sensors per gateway.

Transmission Success Probability and Energy Consumed by a LoRaWAN Node
Sebastien Maudet, Nantes Université; Guillaume Andrieux, Jean-François Diouris, Université de Nantes

Friday Jun 13, 2025
In this paper, we propose a framework for semantic information sharing between vehicles to reduce the energy consumed by computation related to the processing of sensor data. The energy consumption is reduced by avoiding redundant semantics extraction by multiple vehicles. We develop an algorithm that allows sharing semantic information derived by one vehicle with neighboring vehicles, thereby reducing the need for individual semantics extraction. Of course, semantic sharing introduces communication energy overhead. Thus, in our work, we consider not only computation but also communication energy. We propose a graph representation of the problem that allows mapping part of the energy consumption minimization to the maximum independent set problem, which we further combine with a greedy and recursive approach. Simulations demonstrate that the proposed algorithm can save up to 61% of energy compared to state-of-the-art approaches.

Sharing Semantic Information among Vehicles to Reduce Computation and Communication Energy Consumption
Mostafa Kishani, Zdenek Becvar, Czech Technical University in Prague
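The abstract does not give the paper's exact graph construction, but the greedy maximum-independent-set subroutine it mentions can be sketched as follows: given a conflict graph over vehicles (as an adjacency dictionary, here a hypothetical encoding), repeatedly pick a lowest-degree vertex and exclude its neighbors, a common greedy heuristic for MIS.

```python
def greedy_mis(adj):
    # adj: dict mapping each vertex to the set of its neighbors.
    # Greedy heuristic: take vertices in order of increasing degree,
    # adding a vertex whenever none of its neighbors was already taken.
    chosen, excluded = set(), set()
    for v in sorted(adj, key=lambda v: len(adj[v])):
        if v not in excluded:
            chosen.add(v)
            excluded.update(adj[v])
    return chosen
```

On a three-vehicle path a–b–c, the heuristic selects {a, c}: those two vehicles would run semantics extraction locally, while b receives the extracted semantics from a neighbor instead of computing them itself.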

Friday Jun 13, 2025
Estimation problems in wireless sensor networks typically involve gathering and processing data from distributed sensors to infer the state of an environment at the fusion center. However, not all measurements contribute significantly to improving estimation accuracy. The ordered transmission protocol, a promising approach for enhancing energy efficiency in wireless networks, allows for the selection of measurements from different sensors to ensure the desired estimation quality. In this work, we use the idea of ordered transmission to reduce the number of transmissions required for sequential estimation within a network, thereby achieving energy-efficient estimation. We derive a new stopping rule that minimizes the number of transmissions while maintaining estimation accuracy similar to general sequential estimation with unordered transmissions. Moreover, we derive the expected number of transmissions required for both general sequential estimation with unordered transmissions and the proposed sequential estimation with ordered transmissions, and compare the two systems. Simulation results indicate that our proposed scheme can efficiently reduce transmissions while still ensuring the quality of estimation.

An Energy-efficient Ordered Transmission-based Sequential Estimation
Chen Quan, Geethu Joseph, Nitin Jonathan Myers, Delft University of Technology
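To give a feel for the ordered-transmission idea (this is a toy stopping rule of my own construction, not the rule the paper derives): sensors transmit in decreasing order of measurement magnitude, and the fusion center stops requesting transmissions once the remaining, smaller-magnitude terms cannot change the running estimate by more than a tolerance.

```python
def ordered_partial_sum(readings, tol):
    # Toy illustration of ordered transmissions for estimating a sum.
    # Terms arrive in decreasing magnitude; every untransmitted term is
    # bounded by the next magnitude, so we can stop early with a guarantee.
    ordered = sorted(readings, key=abs, reverse=True)
    total = 0.0
    for i, x in enumerate(ordered):
        total += x
        remaining = len(ordered) - i - 1
        bound = remaining * abs(ordered[i + 1]) if remaining else 0.0
        if bound < tol:
            return total, i + 1  # estimate, transmissions actually used
    return total, len(ordered)
```

With readings [5.0, 3.0, 0.01, 0.02, 0.005] and tolerance 0.1, only two of the five sensors transmit, which is exactly the kind of saving the paper quantifies via the expected number of transmissions.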

Friday Jun 13, 2025
In a Flying Ad-hoc Network (FANET), resilience is defined as the ability to rebound from a threat event. Bi-connectivity of the topology, which keeps the network connected after a single unmanned aerial vehicle (UAV) failure, is an important feature of resilience. Due to the high dynamics and limited node energy of a FANET, it is difficult to generate and maintain bi-connectivity. In this paper, in order to enhance network resilience, we propose a distributed energy-efficient bi-connectivity generation and restoration mechanism based on double reference nodes (DEBGR-DRN) that adjusts each UAV's transmission power. Our algorithm saves transmission power and communication cost by selecting the shortest edge to add and limiting the path length. NS3 network simulations demonstrate the validity of the proposed algorithm.

A Double Reference Nodes Based Resilience Topology Management for Energy-efficient FANET
ZeZhong Cao, Gaofeng Nie, Tian Hui, Beijing University of Posts and Telecommunications
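The bi-connectivity property the abstract targets can be checked directly: a topology survives any single UAV failure exactly when removing any one node leaves the rest connected. The brute-force check below (remove each node in turn, then traverse) is for illustration only; it is not the paper's distributed DEBGR-DRN mechanism, which avoids global knowledge.

```python
def connected(adj, skip=None):
    # Depth-first traversal of the graph with node `skip` removed.
    nodes = [v for v in adj if v != skip]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w != skip and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def survives_single_failure(adj):
    # Bi-connectivity check: the network must stay connected after the
    # removal of any single node (no articulation points).
    return all(connected(adj, skip=v) for v in adj)
```

A triangle of three UAVs passes the check, while a line of three fails, since losing the middle UAV splits the network; DEBGR-DRN's job is to add the cheapest edges that turn the second case into the first.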

Friday Jun 13, 2025
Connected and Autonomous Vehicles (CAVs) can share their future trajectories, as intended navigation paths, with nodes around them so that nearby nodes can avoid crashing into them. However, trust must be established in the shared trajectories: nearby nodes must be able to verify their truthfulness in an efficient and timely manner. This paper proposes FLOWER, a federated learning based approach serving as a distributed zero-trust security protocol in which nearby nodes verify the trajectories shared among CAVs by employing a machine learning algorithm to predict the corresponding future trajectories and confirming the truthfulness of the shared data via a blockchain based consensus. We evaluate several machine learning algorithms, including transformer models, on realistic trajectories from New York City; results show that simple time series algorithms (RNNs, LSTMs, GRUs) achieve similar performance without the additional complexity, enabling real-time verification of CAV trajectories.

FLOWER: Federated Learning based Zero-Trust Consensus Protocol for Real-time Trajectory Endorsement in CAVs
Bo Sullivan, Synnove Svendsen, Junaid Ahmed Khan, Western Washington University
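The endorsement step can be pictured as a per-waypoint comparison: each verifier predicts the neighbor's trajectory with its own model and endorses the shared one only if every shared waypoint falls within a tolerance of the prediction. The function and tolerance below are hypothetical simplifications; FLOWER's actual decision feeds into a blockchain-based consensus rather than a single local check.

```python
import math

def endorse_trajectory(shared, predicted, tol_m=2.0):
    # shared, predicted: lists of (x, y) waypoints at matching time steps.
    # Endorse only if every shared waypoint lies within tol_m metres of the
    # waypoint the verifier's own model predicted for that time step.
    if len(shared) != len(predicted):
        return False
    return all(math.dist(s, p) <= tol_m for s, p in zip(shared, predicted))
```

A truthful trajectory with small prediction error is endorsed, while a spoofed one (say, claiming a lane the vehicle cannot reach) deviates far from the prediction and is rejected, which is the signal each node would contribute to the consensus round.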

Friday Jun 13, 2025
The expansion of the Internet of Vehicles (IoV) has spurred a significant increase in the demand for vehicular computation tasks, posing challenges for in-vehicle task processing. Multi-access edge computing (MEC), which is intended for low-latency task execution, experiences sub-band competition and workload imbalance due to the uneven distribution of vehicle densities. This paper presents a novel IoV architecture leveraging multi-roadside-unit (RSU) capabilities to facilitate efficient load balancing among RSUs through edge-to-edge collaboration. The computation offloading problem is formulated as minimizing overall task delay and decoupled into two sub-problems: communication resource allocation and load balancing. We devise a two-stage deep reinforcement learning-based communication resource allocation and load balancing (DRLCL) algorithm to tackle these sub-problems sequentially. Experimental evaluations based on real-world vehicle trajectories reveal that the proposed algorithm outperforms the baselines in reducing overall delay.

Deep Reinforcement Learning-Based Vehicular Computation Offloading with Edge-to-Edge Collaboration
Quan Chen, Shumo Wang, Southeast University; Xiaoqin Song, Nanjing University of Aeronautics and Astronautics; Tiecheng Song, Southeast University

Friday Jun 13, 2025
We focus on computation offloading from moving devices, such as mobile robots or autonomous vehicles, to Multi-Access Edge Computing (MEC) servers via a mobile network. To this end, we develop and implement a prototype of a small autonomous vehicle capable of offloading the processing of sensor data to a MEC server via a mobile network. Then, we investigate the impact of the communication channel on the delay and energy consumed by the autonomous vehicle for two practical applications, namely road sign recognition and path planning, in a real-world environment with real physical equipment. Via experiments, we demonstrate the benefits of computation offloading for both energy and delay. The experiments highlight the potential of MEC for autonomous systems, allowing reduced cost and increased scalability of such systems. Furthermore, based on the real-world experiments, we derive detailed models of energy consumption and delay for both practical applications.

Computational Offloading for Autonomous Systems: Real-World Experiments and Modeling
Jan Daněk, Zdenek Becvar, Adam Janes, Czech Technical University in Prague
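The delay trade-off such experiments measure is often summarized with a textbook MEC comparison: offloading wins when the upload time plus edge execution time beats local execution. The function below is that generic model, not the detailed fitted model the paper derives from its measurements.

```python
def should_offload(cpu_cycles, f_local_hz, f_edge_hz, data_bits, uplink_bps):
    # Classic delay comparison for computation offloading:
    #   local:   execute cpu_cycles on the on-board CPU at f_local_hz
    #   offload: upload data_bits over the mobile link, then execute at
    #            the (faster) edge server clocked at f_edge_hz
    t_local = cpu_cycles / f_local_hz
    t_offload = data_bits / uplink_bps + cpu_cycles / f_edge_hz
    return t_offload < t_local
```

For a 10^9-cycle task on a 1 GHz on-board CPU with a 10 GHz edge server, offloading 1 Mbit over a 50 Mbit/s uplink pays off; the same task with a 1 Gbit sensor payload over a 10 Mbit/s link does not, which mirrors the channel-dependence the experiments above quantify.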

Friday Jun 13, 2025
With the rapid development of the Internet of Things (IoT), the number of connected devices has increased exponentially, bringing significant convenience to various aspects of daily life and business operations. However, communication between IoT devices requires a significant amount of bandwidth, putting a strain on the communication system. To address this challenge, we introduce a classification-oriented semantic communication approach that transmits only essential information. We present a novel end-to-end task-oriented semantic communication model, which efficiently serves the classification task at the receiver. In particular, the proposed model first utilizes a neural network-based semantic encoder to extract classification-related semantic features. A transformer-based semantic decoder is used at the receiver to retrieve semantic features and generate classification results. We further introduce a channel encoder and decoder module to improve the ability of a single model to deal with various channel conditions. Simulation results show that, compared with the traditional method, the proposed scheme achieves higher classification accuracy on the ESC-50 and UrbanSound8K datasets and performs better across various channel conditions.

Classification-oriented Semantic Communication for Internet of Things
Xiaojiao Chen, Beijing Institute of Technology; Jing Wang, Beijing Institute of Technology, Beijing; Jingxuan Huang, Ming Zeng, Zhong Zheng, Beijing Institute of Technology; Ming Xiao, KTH

Friday Jun 13, 2025
Ambient Internet of Things (IoT), with its low-cost and battery-free characteristics, holds substantial promise for future green IoT. However, these characteristics usually impose strict constraints on the adopted radio frequency (RF) devices, resulting in undesirable RF links. One of the most important issues is carrier frequency offset (CFO), which adversely affects communication reliability. Exploiting the signal sparsity in the frequency domain, compressed sensing (CS) provides potential solutions to CFO estimation. In this paper, we decompose the CFO into an on-grid frequency and an off-grid deviation, based on which two estimation algorithms are derived. In particular, the fast maximum likelihood-approximate message passing (FML-AMP) algorithm estimates the off-grid deviation with a maximum likelihood estimator (MLE) and the on-grid frequency with the AMP algorithm. The key is the derivation of the closed-form marginal distribution of the off-grid deviation, which enables a fast implementation of the MLE and thus facilitates the design of FML-AMP. To reduce complexity, a Newtonized orthogonal matching pursuit (NOMP) algorithm is further designed, which alternately applies Newton's method to estimate the off-grid deviation and orthogonal matching pursuit (OMP) to estimate the on-grid frequency. Numerical results demonstrate that both algorithms outperform existing methods and approach the Cramér-Rao bound in the medium-to-high signal-to-noise ratio (SNR) regime.

Carrier Frequency Offset Estimation in Ambient Internet of Things
Qiuyang Hu, Fudan University; Shengsong Luo, Meng Liu, Fudan University; Jiang Zhu, Zhejiang University, China; Chongbin Xu, Fudan University; Hao Min, Fudan University; Xin Wang, Fudan University, China
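The "on-grid" half of the decomposition above amounts to a matched-filter search over a frequency grid, which is what a single OMP iteration does for a one-tone signal. The sketch below shows only that step; the off-grid Newton (or MLE) refinement that distinguishes the paper's NOMP and FML-AMP algorithms is omitted.

```python
import cmath

def on_grid_cfo_estimate(samples, grid):
    # One matched-filter step over a grid of normalized frequencies:
    # correlate the received samples with a candidate complex tone and
    # return the grid frequency with the largest correlation magnitude.
    best_f, best_mag = None, -1.0
    for f in grid:
        corr = sum(s * cmath.exp(-2j * cmath.pi * f * k)
                   for k, s in enumerate(samples))
        if abs(corr) > best_mag:
            best_f, best_mag = f, abs(corr)
    return best_f
```

On a clean tone the grid search lands on the nearest grid point; the residual off-grid deviation, up to half a grid spacing, is exactly what Newton's method is then asked to recover.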