Wireless Sensor Networks (WSNs) have emerged as a rapidly growing research area due to their wide range of potential applications, from environmental monitoring to industrial, military and health systems. A WSN typically consists of a potentially large number of inexpensive sensor nodes capable of sensing, computation and communication, each of which is battery-powered, small in size and communicates over short distances. A distinctive feature of many WSNs is that sensor nodes are randomly deployed in inaccessible areas, which often makes recharging or replacing batteries impossible. A typical WSN therefore needs to be able to self-organize and remain robust to environmental changes such as node failures.
The purpose of this research is to design more efficient and intelligent capacity assignment through medium access control (MAC) in wireless sensor networks, using machine learning techniques, in particular Reinforcement Learning (RL), by both practical and analytical methods. RL is a means of learning the behaviour of a system by interacting with a dynamic environment through trial and error. It has the potential to provide a more effective transmission strategy on the basis of past experience of the channel.
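To illustrate the general idea of RL-driven channel access (not the specific scheme developed in this project), the sketch below shows a single-state Q-learning agent that learns, slot by slot, whether transmitting or waiting on a shared channel pays off. The action set, learning parameters and toy collision model are all illustrative assumptions.

```python
# Minimal sketch of RL-based transmission decisions (assumed parameters and
# toy channel model; not the project's actual MAC algorithm).
import random

ACTIONS = ("wait", "transmit")
ALPHA = 0.1      # learning rate (assumed)
GAMMA = 0.9      # discount factor (assumed)
EPSILON = 0.1    # exploration probability (assumed)

# Single-state Q-table: the agent learns only the long-run value of each action.
q = {a: 0.0 for a in ACTIONS}

def choose_action():
    """Epsilon-greedy selection over the two MAC actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

def channel_outcome(action, p_other=0.3):
    """Toy channel: another node transmits with probability p_other.
    Reward +1 for a successful transmission, -1 for a collision, 0 for waiting."""
    if action == "wait":
        return 0.0
    collision = random.random() < p_other
    return -1.0 if collision else 1.0

for slot in range(10_000):
    a = choose_action()
    r = channel_outcome(a)
    # Standard Q-learning update (single state, so bootstrap is max over actions)
    q[a] += ALPHA * (r + GAMMA * max(q.values()) - q[a])

print("Learned action values:", q)
```

In a practical MAC scheme the state would be richer (for example, recent channel feedback or slot history), but the same trial-and-error update lets each node bias its transmission attempts towards slots and strategies that have succeeded in the past.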
The throughput performance of the schemes designed in this research is evaluated through OPNET simulations and through experiments conducted on a real-world testbed consisting of a number of Iris/MicaZ nodes. TinyOS is used to program the sensor nodes and to observe the data exchange in the practical environment.
Members
- Selahattin Kosunalp
- Paul Mitchell
- David Grace
- Tim Clarke
Funding
- Republic of Turkey Ministry of National Education
Dates
- Start: October 2011