Contrastive Self-supervised Learning for Sensor-based Human Activity Recognition

Bulat Khaertdinov*, Esam Ghaleb, Stelios Asteriadis

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Deep Learning models applied to sensor-based Human Activity Recognition usually require vast amounts of annotated time-series data to extract robust features. However, annotating signals coming from wearable sensors can be a tedious and often unintuitive process that requires specialized tools and predefined scenarios, making it expensive and time-consuming. This paper combines one of the most recent advances in Self-Supervised Learning (SSL), namely the SimCLR framework, with a powerful transformer-based encoder to introduce a Contrastive Self-supervised learning approach to Sensor-based Human Activity Recognition (CSSHAR) that learns feature representations from unlabeled sensory data. Extensive experiments conducted on three widely used public datasets show that the proposed method outperforms recent SSL models. Moreover, CSSHAR extracts more robust features than an identical supervised transformer, both when transferring knowledge from one dataset to another and when only very limited amounts of annotated data are available.
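To illustrate the contrastive objective underlying SimCLR-style pre-training, the sketch below implements the standard NT-Xent (normalized temperature-scaled cross-entropy) loss in NumPy. This is a minimal illustration of the general SimCLR formulation, not the paper's actual implementation: the temperature value, the projection head, and the sensor-specific augmentations used in CSSHAR are not specified here and are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss as used in SimCLR-style contrastive learning.

    z1, z2: (N, d) arrays of embeddings for two augmented views of the
    same N windows of sensor data. Each row of z1 forms a positive pair
    with the corresponding row of z2; all other rows act as negatives.
    The temperature of 0.5 is an illustrative default, not the paper's.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = (z @ z.T) / temperature                       # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    # The positive for index i is index (i + N) mod 2N.
    targets = (np.arange(2 * n) + n) % (2 * n)
    # Cross-entropy over each row: -log softmax at the positive's column.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

Minimizing this loss pulls the two views of the same window together in embedding space while pushing apart views of different windows, which is what lets the encoder learn activity-discriminative features without labels.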
Original language: English
Title of host publication: 2021 IEEE International Joint Conference on Biometrics (IJCB)
Number of pages: 8
ISBN (Electronic): 978-1-6654-3780-6
Publication status: Published - 20 Jul 2021
Event: 2021 IEEE International Joint Conference on Biometrics (IJCB) - Online, Shenzhen, China
Duration: 4 Aug 2021 - 7 Aug 2021


Conference: 2021 IEEE International Joint Conference on Biometrics (IJCB)
Abbreviated title: IJCB 2021


