Show simple item record

dc.contributor.author: Alani, A.A.
dc.contributor.author: Cosma, Georgina
dc.contributor.author: Taherkhani, Aboozar
dc.date.accessioned: 2020-07-02T11:15:41Z
dc.date.available: 2020-07-02T11:15:41Z
dc.date.issued: 2020-07-24
dc.identifier.citation: Alani, A.A., Cosma, G., Taherkhani, A. (2020) Classifying Imbalanced Multi-modal Sensor Data for Human Activity Recognition in a Smart Home using Deep Learning, IEEE World Congress on Computational Intelligence (WCCI), Glasgow, UK, July 2020.
dc.identifier.uri: https://dora.dmu.ac.uk/handle/2086/19912
dc.description.abstract: In smart homes, data generated from real-time sensors for human activity recognition is complex, noisy and imbalanced. It is a significant challenge to create machine learning models that can classify activities which occur less frequently than others. Machine learning models trained on imbalanced data are biased towards the more commonly occurring classes; such learning bias arises naturally, since the models learn the classes that contain more records better. This paper examines whether fusing real-world imbalanced multi-modal sensor data improves classification results compared with using unimodal data, and compares deep learning approaches to handling imbalanced multi-modal sensor data across various resampling methods and deep learning models. Experiments were carried out using a large multi-modal sensor dataset generated by the Sensor Platform for HEalthcare in a Residential Environment (SPHERE). The data comprises 16,104 samples, where each sample has 5,608 features and belongs to one of 20 activities (classes). Experimental results using SPHERE demonstrate the challenges of dealing with imbalanced multi-modal data and highlight the importance of having a sufficient number of samples within each class for adequately training and testing deep learning models. Furthermore, the results revealed that when fusing the data and using the Synthetic Minority Oversampling Technique (SMOTE) to correct class imbalance, CNN-LSTM achieved the highest classification accuracy of 93.67%, followed by CNN (93.55%) and LSTM (92.98%).
dc.language.iso: en
dc.publisher: IEEE
dc.subject: Human Activity Recognition
dc.subject: Deep Learning
dc.subject: Imbalanced data
dc.subject: Multi-sensor data
dc.subject: Multi-modal data
dc.title: Classifying Imbalanced Multi-modal Sensor Data for Human Activity Recognition in a Smart Home using Deep Learning
dc.type: Conference
dc.peerreviewed: Yes
dc.funder: No external funder
dc.cclicence: CC-BY-NC
dc.date.acceptance: 2020-03-20
dc.researchinstitute: Institute of Artificial Intelligence (IAI)
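The abstract reports that class imbalance was corrected with SMOTE before training. For readers unfamiliar with the technique, the core idea (interpolating between a minority-class sample and one of its nearest minority-class neighbours) can be sketched in a few lines of NumPy. This is an illustrative sketch of classic SMOTE, not the paper's own code; the function name and parameters are assumptions.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic samples for a minority class, SMOTE-style.

    Each synthetic point lies on the line segment between a randomly
    chosen minority sample and one of its k nearest minority neighbours.
    """
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each sample from its own neighbours
    # Indices of the k nearest neighbours of every minority sample.
    nn = np.argsort(d, axis=1)[:, :min(k, n - 1)]
    # Pick a base sample and one of its neighbours for each synthetic point.
    base = rng.integers(0, n, size=n_new)
    neigh = nn[base, rng.integers(0, nn.shape[1], size=n_new)]
    # Interpolate a random fraction of the way towards the neighbour.
    gap = rng.random((n_new, 1))
    return X_min[base] + gap * (X_min[neigh] - X_min[base])
```

In practice a library implementation such as `imblearn.over_sampling.SMOTE` would typically be used; the sketch above only shows why the synthetic samples stay inside the convex neighbourhood of the existing minority points.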

