Optimizing Energy Efficiency of LoRaWAN-based Wireless Underground Sensor Networks: A Multi-Agent Reinforcement Learning Approach

Guozheng Zhao, Kaiqiang Lin, David Chapman, Nicole Metje, Tong Hao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Extended battery lifetime is always desirable for wireless underground sensor networks (WUSNs). The recent integration of LoRaWAN-grade massive machine-type communications (MTC) technology with WUSNs, namely LoRaWAN-based WUSNs, provides a promising energy-efficient solution for underground monitoring. However, due to limited battery energy, massive data collisions and the dynamic underground environment, the energy efficiency of LoRaWAN-based WUSNs still poses a significant challenge. To provide a viable solution, we advocate reinforcement learning (RL) for managing the transmission configuration of underground sensors. In this paper, we first develop a multi-agent RL (MARL) algorithm to improve the network energy efficiency, which considers link quality, energy consumption and packet collisions. Secondly, a reward mechanism is proposed that defines independent states and actions for every node, improving the adaptability of the proposed algorithm to dynamic underground environments. Furthermore, through simulations in different underground environments and at different network scales, our results highlight that the proposed MARL algorithm can quickly optimize the network energy efficiency and far exceed the traditional adaptive data rate (ADR) mechanism. Finally, our proposed algorithm is demonstrated to adapt efficiently to a dynamically changing underground environment. This work provides insights into energy efficiency optimization and lays the foundation for future realistic deployments of LoRaWAN-based WUSNs.
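The abstract does not give implementation details, so the following is only an illustrative sketch of the general idea it describes: each underground node acts as an independent learning agent that selects its own LoRaWAN transmission configuration (spreading factor, transmit power) and is rewarded for delivered packets while being penalised for energy use and collisions. All parameter values, the reward weights and the toy channel model below are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only -- not the authors' implementation. Each node keeps
# its own Q-table over candidate transmission configurations and learns from a
# reward that trades off link quality, energy consumption and collisions.
import random

SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]      # candidate SF values
TX_POWERS_DBM = [2, 5, 8, 11, 14]              # candidate transmit powers (dBm)
ACTIONS = [(sf, tp) for sf in SPREADING_FACTORS for tp in TX_POWERS_DBM]


class NodeAgent:
    """Independent learner: one Q-table per underground sensor node."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_action(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)       # explore
        return max(self.q, key=self.q.get)      # exploit

    def update(self, action, reward):
        # Single-state (bandit-style) Q-update; a full MARL setup would also
        # track per-node states such as recent link quality.
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (reward + self.gamma * best_next - self.q[action])


def illustrative_reward(delivered, energy_mj, collided):
    """Hypothetical reward: favour delivery, penalise energy and collisions."""
    return (1.0 if delivered else -1.0) - 0.01 * energy_mj - (0.5 if collided else 0.0)


# Toy training loop over a fleet of nodes; the random channel model stands in
# for the soil-dependent link quality of a real WUSN deployment.
nodes = [NodeAgent() for _ in range(10)]
for episode in range(1000):
    for node in nodes:
        sf, tp = node.choose_action()
        energy = 0.5 * sf * tp                  # crude stand-in for airtime x power
        delivered = random.random() < min(1.0, 0.4 + 0.05 * sf + 0.01 * tp)
        collided = random.random() < 0.1
        node.update((sf, tp), illustrative_reward(delivered, energy, collided))
```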
Original language: English
Article number: 100776
Number of pages: 18
Journal: Internet of Things
Volume: 22
Early online date: 10 Apr 2023
DOIs
Publication status: Published - Jul 2023

Keywords

  • Energy efficiency
  • Multi-agent reinforcement learning
  • Dynamic underground environment
  • LoRaWAN-based WUSNs
  • Adaptive data rate

