Abstract
Extended battery lifetime is always desirable for wireless underground sensor networks (WUSNs). The recent integration of LoRaWAN-grade massive machine-type communications (MTC) technology with WUSNs, namely LoRaWAN-based WUSNs, provides a promising, energy-efficient solution for underground monitoring. However, due to limited battery energy, massive packet collisions and the dynamic underground environment, the energy efficiency of LoRaWAN-based WUSNs still poses a significant challenge. To provide a viable solution, we advocate reinforcement learning (RL) for managing the transmission configuration of underground sensors. In this paper, we first develop a multi-agent RL (MARL) algorithm to improve network energy efficiency, which accounts for link quality, energy consumption and packet collisions. Second, a reward mechanism is proposed that defines an independent state and action for every node, improving the adaptability of the proposed algorithm to dynamic underground environments. Furthermore, through simulations in different underground environments and at different network scales, our results highlight that the proposed MARL algorithm quickly optimizes network energy efficiency and far exceeds the traditional adaptive data rate (ADR) mechanism. Finally, our proposed algorithm is demonstrated to adapt efficiently to dynamically changing underground environments. This work provides insights into energy-efficiency optimization and lays the foundation for future realistic deployments of LoRaWAN-based WUSNs.
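To make the multi-agent idea concrete, the sketch below shows one plausible shape of a per-node agent with its own state, action space and Q-table, where actions are LoRa transmission settings (spreading factor, TX power) and the reward trades off delivery success against energy cost. This is a minimal illustrative sketch, not the paper's actual algorithm: the toy channel model, the reward weights and all names (`NodeAgent`, `wet_soil`, the success-probability formula) are assumptions made for demonstration.

```python
import random
from collections import defaultdict

# Candidate transmission configurations: (spreading factor, TX power in dBm).
# The specific values here are illustrative, not taken from the paper.
ACTIONS = [(sf, p) for sf in (7, 9, 12) for p in (2, 8, 14)]

class NodeAgent:
    """One underground node acting as an independent Q-learning agent."""
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)   # Q[(state, action)] -> value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        if random.random() < self.epsilon:                      # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])   # exploit

    def learn(self, s, a, r, s2):
        best_next = max(self.q[(s2, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next
                                        - self.q[(s, a)])

def reward(success, sf, power):
    # Crude energy model: higher SF roughly doubles airtime per step,
    # so cost scales with power * 2^(SF-7). Reward cheap successful sends.
    energy = power * (2 ** (sf - 7))
    return (100.0 / energy) if success else -1.0

random.seed(0)
agent = NodeAgent()
state = "wet_soil"   # a discretized link-quality state for this node
for _ in range(2000):
    sf, p = agent.act(state)
    # Toy channel: robust settings (high SF / power) succeed more often.
    success = random.random() < min(1.0, 0.1 * (sf - 6) + 0.03 * p)
    agent.learn(state, (sf, p), reward(success, sf, p), state)

best = max(ACTIONS, key=lambda a: agent.q[(state, a)])
print("learned setting (SF, dBm):", best)
```

Because each node keeps its own Q-table over its own observed state, no central coordination is needed, which is what lets the scheme track a changing underground channel: when soil moisture shifts the link quality, the agent's state changes and a different row of the Q-table governs its choices.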
Original language | English |
---|---|
Article number | 100776 |
Number of pages | 18 |
Journal | Internet of Things |
Volume | 22 |
Early online date | 10 Apr 2023 |
DOIs | |
Publication status | Published - Jul 2023 |
Keywords
- Energy efficiency
- Multi-agent reinforcement learning
- Dynamic underground environment
- LoRaWAN-based WUSNs
- Adaptive data rate