Decentralised reinforcement learning for energy-efficient scheduling in wireless sensor networks

Mihail Mihaylov, Yann-Aël Le Borgne, Karl Tuyls, Ann Nowé

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

We present a self-organising reinforcement learning (RL) approach for scheduling the wake-up cycles of nodes in a wireless sensor network. The approach is fully decentralised, and allows sensor nodes to schedule their active periods based only on their interactions with neighbouring nodes. Compared to standard scheduling mechanisms such as S-MAC, the benefits of the proposed approach are twofold. First, the nodes do not need to synchronise explicitly, since synchronisation is achieved by the successful exchange of data messages in the data collection process. Second, the learning process allows nodes competing for the radio channel to desynchronise in such a way that radio interference, and therefore packet collisions, are significantly reduced. This results in shorter communication schedules, which not only reduce energy consumption by shortening the wake-up cycles of sensor nodes, but also decrease data retrieval latency. We implement this RL approach in the OMNeT++ sensor network simulator, and illustrate how sensor nodes arranged in line, mesh and grid topologies autonomously uncover schedules that favour the successful delivery of messages along a routing tree while avoiding interference.
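The abstract's core mechanism — nodes independently learning wake-up slots, rewarded for successful exchanges and implicitly penalised by collisions — can be illustrated with a minimal sketch. The paper's exact reward signal, update rule and parameters are not reproduced here; the slot count, learning rate, exploration rate and ±1 rewards below are illustrative assumptions, and interference is crudely modelled as any two nodes choosing the same slot.

```python
import random

random.seed(0)

N_SLOTS = 4    # candidate wake-up slots per frame (illustrative assumption)
ALPHA = 0.1    # learning rate (assumption)
EPSILON = 0.1  # exploration rate (assumption)

class Node:
    """A sensor node learning which slot to wake up in (hypothetical sketch)."""
    def __init__(self):
        self.q = [0.0] * N_SLOTS  # one learned value per candidate slot

    def choose_slot(self):
        # Epsilon-greedy: mostly exploit the best-valued slot, sometimes explore.
        if random.random() < EPSILON:
            return random.randrange(N_SLOTS)
        return max(range(N_SLOTS), key=lambda s: self.q[s])

    def update(self, slot, reward):
        # Incremental value update towards the observed reward.
        self.q[slot] += ALPHA * (reward - self.q[slot])

def simulate(n_nodes=3, frames=2000):
    """Interfering neighbours desynchronise: a node is rewarded (+1) when it
    is alone in its slot, and penalised (-1) when its slot collides."""
    nodes = [Node() for _ in range(n_nodes)]
    for _ in range(frames):
        slots = [n.choose_slot() for n in nodes]
        for i, n in enumerate(nodes):
            collision = slots.count(slots[i]) > 1
            n.update(slots[i], -1.0 if collision else 1.0)
    # Report each node's greedy (learned) slot after training.
    return [max(range(N_SLOTS), key=lambda s: n.q[s]) for n in nodes]

final = simulate()
```

After enough frames, the nodes' greedy slots become pairwise distinct — the decentralised desynchronisation effect the abstract describes — without any node observing more than its own rewards.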

Original language: English
Pages (from-to): 207-224
Number of pages: 18
Journal: International Journal of Communication Networks and Distributed Systems
Volume: 9
Issue number: 3
DOIs
Publication status: Published - 2012
