
The Importance of Multiple Temporal Scales in Motion Recognition: from Shallow to Deep Multi Scale Models

Zarandi Z.; Fadiga L.; D'Ausilio A.
2022

Abstract

Studying human motion requires modelling its multiple-temporal-scale nature to fully describe its complexity, since different muscles are activated and coordinated by the brain at different temporal scales in a complex cognitive process. Nevertheless, current approaches do not address this requirement properly and rely on oversimplified models with obvious limitations. Data-driven methods represent a viable tool to address these limitations. However, shallow data-driven models, while achieving reasonably good recognition performance, require handcrafting features based on domain-specific knowledge which, in this case, is limited and does not allow motion- and subject-specific temporal scales to be modelled properly. In this work, we propose a new deep multiple-temporal-scale data-driven model, based on Temporal Convolutional Networks, able to automatically learn features from the data at different temporal scales. Our proposal focuses first on outperforming state-of-the-art shallow and deep models in terms of recognition performance. Then, thanks to the use of feature ranking for shallow models and an attention map for deep models, we give insights into what the different architectures actually learned from the data. We designed, collected data for, and tested our proposal in a custom motion-recognition experiment: detecting the person who drew a particular shape (i.e., an ellipse) on a graphics tablet, collecting data about his/her movement (e.g., pressure and speed) in different extrapolation scenarios (e.g., training with data collected from one hand and testing the model on the other). The data collected in our experiment and the code of the methods are made freely available to the research community.
Results, both in terms of accuracy and of insight into the cognitive problem, support the proposal and its use as a tool for better understanding human movement and its multiple-temporal-scale nature.
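The multiple-temporal-scale behaviour the abstract attributes to Temporal Convolutional Networks typically comes from stacking causal convolutions with exponentially growing dilation, so that deeper layers see the signal at coarser time scales. The sketch below is a minimal, self-contained illustration of that building block only, not the authors' implementation; all function names are ours:

```python
def causal_dilated_conv1d(x, w, dilation):
    """Causal 1-D convolution of signal x with kernel w at the given dilation.
    Output has the same length as x; taps that would look before the start
    of the signal are zero-padded, so output[t] depends only on x[:t+1]."""
    K = len(w)
    pad = (K - 1) * dilation
    xp = [0.0] * pad + list(x)
    return [sum(w[k] * xp[t + pad - k * dilation] for k in range(K))
            for t in range(len(x))]

def receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of a stack of causal dilated conv layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Four layers with kernel size 3 and dilations 1, 2, 4, 8 already cover
# 31 time steps, i.e. the network mixes fine and coarse temporal scales.
print(receptive_field(3, [1, 2, 4, 8]))  # -> 31
```

Doubling the dilation per layer grows the receptive field exponentially with depth while the parameter count grows only linearly, which is what makes this architecture practical for signals with several coexisting temporal scales.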
ISBN: 978-1-7281-8671-9
Keywords: Attention Maps; Deep Learning; Feature Engineering; Feature Learning; Motion Recognition; Multiple Temporal Scales; Open Data; Open Implementation; Shallow Learning
Files in this item:

2022-IJCNN.pdf
  Description: Pre-print
  Type: Pre-print
  License: NOT PUBLIC - Private/restricted access (archive administrators only)
  Size: 537.89 kB
  Format: Adobe PDF

The_Importance_of_Multiple_Temporal_Scales.pdf
  Type: Full text (publisher's version)
  License: NOT PUBLIC - Private/restricted access (archive administrators only)
  Size: 8.89 MB
  Format: Adobe PDF

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2497298
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0