Pinardi Stefano

Cycle: XXIII

E-mail: pinardi@disco.unimib.it

Phone: +39-0264487889

Room: T037 building U14, v.le Sarca 336 20126 Milano

Title: A Human Movement Language for Activity Recognition with Wearable Sensors

Who Is S. Pinardi

Stefano Pinardi is a Ph.D. student who graduated with honors in December 2006 with a Master's thesis on the use of Latent Semantic Analysis to solve polysemy and synonymy problems in text mining.
His research interests now include social computing (social networks) and activity recognition in movement science; more generally, human behaviour from a semantic/machine-learning point of view.
He works with the Nomadis Research Group, of which he is an active member. Within Nomadis his work concerns activity recognition for elderly people with body-worn sensors (mainly inertial sensors and magnetometers).

In the recent past he has also been active as a professional writer. He published a book about the Windows operating system with a major Italian publisher (Utet, 2002), several technical papers for Microsoft, and many specialized articles for leading technical journals (e.g. dot Net magazine).

He is also a professional trainer and a contract professor of .NET technologies at the Computer Science Department of the University of Milano-Bicocca.

My Ph.D. Thesis in Short

In the movement science area, the tracking and analysis of human motion and the comprehension of behavior are very popular topics, especially in computer vision. The motivation is driven by the wide application domain: activity/gesture recognition, medical rehabilitation, identity/gender identification, human animation, perceptual interfaces, athletic performance analysis, and human surveillance.
Many related research areas are involved: vision and signal analysis [9], computational intelligence [5, 6], sensor fusion [30], ubiquitous computing [40], medical rehabilitation [3, 4], social science [7, 8], and natural language sciences [35].
Most applications are based on visual motion analysis [9, 10, 11]: they try to understand human motion and behavior with the minimum number of video cameras, and are usually marker-based [42, 13, 14] or marker-free [19, 41]. Marker-based analysis can be intrusive, requires a good knowledge of the instrumentation, and runs only in supervised environments. Marker-free analysis is non-trivial because of ambiguities, occlusions, and kinematic issues that require a non-negligible computational effort [15]: state-of-the-art machine learning techniques and geometric assumptions must be used.
Model-based methods are very popular: they are feature-based and normally use Bayesian techniques such as Kalman filters [18], Bayesian filters [17], and Condensation [16]; the computational effort can be significant, especially for real-time analysis.
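For illustration only, the predict/correct cycle at the heart of these Bayesian techniques can be sketched as a minimal one-dimensional Kalman filter; the constant-velocity model, the noise values and the measurements below are hypothetical and are not taken from the cited systems.

import numpy as np

def kalman_1d(measurements, dt=0.01, q=1e-3, r=0.05):
    # Track position and velocity from noisy 1-D position measurements.
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[r]])                     # measurement noise covariance (assumed)
    x = np.zeros((2, 1))                    # initial state [position, velocity]
    P = np.eye(2)                           # initial state covariance
    estimates = []
    for z in measurements:
        # Prediction step
        x = F @ x
        P = F @ P @ F.T + Q
        # Correction step
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates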

A different approach is based on the use of "blind" sensors: inertial sensors [20] are very popular, as they are considered more respectful of privacy than video cameras, require less computational effort for analysis, and can easily be worn on different parts of the body, connected to a network via wireless communication [23]. The analysis of inertial sensor signals is simpler because the output signal is one-dimensional in time. The analysis is usually feature-based: it tries to recognize or predict patterns and to associate the recognized activities with predefined labels [21, 22, 25].
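As an illustrative sketch of this feature-based pipeline (not the pipeline of any cited work), one can slide a window over the one-dimensional accelerometer signal, extract simple statistics, and associate each window with the nearest predefined label; the window length, the features, the labels and the centroid values are assumptions made only for the example.

import numpy as np

def window_features(signal, win=128, step=64):
    # Mean, standard deviation and mean energy for each sliding window.
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), float(np.sum(w ** 2) / win)])
    return np.array(feats)

def nearest_label(feature_vec, centroids):
    # Associate a feature vector with the closest labelled centroid.
    return min(centroids, key=lambda lbl: np.linalg.norm(feature_vec - centroids[lbl]))

# Usage with synthetic data; in practice the centroids come from a training phase.
signal = np.random.randn(1024)
centroids = {"walking": np.array([0.0, 1.0, 1.0]),
             "sitting": np.array([0.0, 0.1, 0.01])}
labels = [nearest_label(f, centroids) for f in window_features(signal)]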
At the state of the art, since inertial sensors return less information than video cameras, they are normally used to recognize single events/states of the human body or individual movements [21, 24]. Moreover, they cannot return the position of a person with respect to a global reference system, which must be estimated with other technologies and methods [26].
Many efforts have been made to improve the quality of recognition, using multiple inertial sensors worn on the body [24] or inertial sensors combined with other blind sensors [22].
Some researchers try to integrate inertial sensors with video cameras [17]; this approach can be useful in specific areas like rehabilitation, where highly precise measurements are required [27], and a combination of the two techniques (blind sensors and video cameras) with a prediction/correction algorithm (Bayesian filters) has proved to be helpful [4].
Another approach tries to place body activities in a context of "a priori" knowledge [29], which can be useful to reduce the hypothesis space and provides a basis for reasoning and "ontology-based" approaches. Reasoning and ontology-based methods keep track of all consistent explanations of the observed actions [31].

My proposal is to use inertial/blind sensors to define a new behavioural "grammar" of body movements, starting from a body framework [1, 2] simple enough to represent the skeleton of the body to which the sensors are attached. Movements will be seen as reciprocal interactions of the body parts, evolving in time.
In order to improve the quality of recognition and to reduce the dimension of the hypothesis space, enlarged by the lack of precision and the ambiguities inherent in inertial/blind sensors, it is possible to envision body movements as an evolutionary and dynamic process similar to natural language, with tokens similar to "nouns", "verbs", "adverbs", "subjects", and "objects", and with temporal dependencies.

This idea is inspired by the works of authors [33, 34] who consider body movements, and their representation, as a ground for natural languages, and by the works of researchers who analyze motion in the framework of context-free grammars [36].
The idea is to associate simple body actions with "movement-tokens" (movement primitives) and human activities with "movement-phrases" (actions), and to use a motion database to validate the "linguistic framework" (training and test sets), as sketched below.
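A minimal sketch of this association, with invented token and activity names that are not the actual vocabulary of the thesis: movement-phrases are admissible sequences of movement-tokens, and an activity is recognized when its phrase appears contiguously in the token stream produced by the sensors.

# Hypothetical movement-phrases, each an admissible sequence of movement-tokens.
MOVEMENT_PHRASES = {
    "sit_down": ["bend_knees", "lower_trunk", "trunk_still"],
    "stand_up": ["raise_trunk", "extend_knees", "trunk_still"],
    "drink":    ["raise_forearm", "tilt_head", "lower_forearm"],
}

def recognize(token_stream):
    # Emit every activity whose movement-phrase appears contiguously in the stream.
    found = []
    for activity, phrase in MOVEMENT_PHRASES.items():
        n = len(phrase)
        for i in range(len(token_stream) - n + 1):
            if token_stream[i:i + n] == phrase:
                found.append((activity, i))
                break
    return found

tokens = ["walk", "bend_knees", "lower_trunk", "trunk_still",
          "raise_forearm", "tilt_head", "lower_forearm"]
print(recognize(tokens))   # [('sit_down', 1), ('drink', 4)]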

This can be useful not only to improve movement recognition with blind sensors, by reducing the hypothesis space and fitting the data to a model, but also to create an ontologically well-founded activity database for the reasoning approach to movement recognition and activity comprehension.
The association of simple body movements with "linguistic tokens" will require many sensors mounted on the body, but after a training phase it will be possible to use the approach with a minimal number of blind sensors, worn in a natural way. The analysis and creation of an "emergent behavioral/movement grammar" is also interesting for the cognitive understanding of movements, for medical and rehabilitation purposes and in the sports area, to give a ground and a context to movement comprehension in recognition and motion capture, and to create a common semantic ground for human-computer interaction, e.g. in robotics.
I will evaluate the quality of recognition on a test bed from the sport and rehabilitation areas, tracing the results in a confusion matrix and comparing the precision with state-of-the-art results.
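As a small worked example of this evaluation (the numbers are invented), per-class precision and recall can be read directly from a confusion matrix whose rows are the true activities and whose columns are the predicted ones.

import numpy as np

labels = ["walk", "sit", "stand"]
conf = np.array([[50,  3,  2],    # true "walk"
                 [ 4, 45,  1],    # true "sit"
                 [ 1,  2, 47]])   # true "stand"

for j, lbl in enumerate(labels):
    tp = conf[j, j]
    precision = tp / conf[:, j].sum()   # column j: everything predicted as lbl
    recall    = tp / conf[j, :].sum()   # row j: everything that truly is lbl
    print(f"{lbl}: precision={precision:.2f} recall={recall:.2f}")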

Current State

At the current state of my studies, movements appear as an emerging reality of reciprocal, dynamically evolving "emergent features" that try to survive a "selection" created by the context. Features appear with different weights for different populations and application domains (the set of movements depends on the context). My current work is to make the solution and the analysis applicable to many different situations, where varying semantic contexts must be considered and taken into account with great flexibility.

References.

[1] Z. Gan, M. Jiang, Articulated Body Tracking by Immune Particle Filter, SEAL 2006, pp. 853-857.
[2] S. Knoop, S. Vacek, and R. Dillmann, Sensor fusion for 3d human body tracking with an articulated 3d body model, in Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA), Orlando, Florida, 2006.
[3] Y. Tao, H. Hu, and H. Zhou, Integration of Vision and Inertial Sensors for Home-based Rehabilitation, in IEEE International Conference on Robotics and Automation, 2005.
[4] H. Zhou, and H. Hu, A Survey - Human Movement Tracking and Stroke Rehabilitation, 2004.
[5] M. Hahn, L. Krüger, C. Wöhler:  3D Action Recognition and Long-Term Prediction of Human Motion. ICVS 2008, 23-32.
[6] Sangho Park, J.K. Aggarwal,   Semantic-level Understanding of Human Actions and Interactions using Event Hierarchy, in: Computer Vision and Pattern Recognition Workshop, June 2004
[7] D. Wilson, S. Consolvo, K. Fishkin, and M. Philipose, In-Home Assessment of the Activities of Daily Living of the Elderly, in CHI2005: Workshops - HCI Challenges in Health Assessment, April 2005.
[8] S. Katz, “Assessing Self-Maintenance: Activities of Daily Living, Mobility, and Instrumental Activities of Daily Living,” Journal of the American Geriatrics Society, vol. 31, no. 12, pp. 721–726, 1983.
[9] D. Gavrila, The Visual Analysis of Human Movement: A Survey, Computer Vision and Image Understanding, 73(1):82-98, Jan. 1999.
[10] T.B. Moeslund, E. Granum, A Survey of Computer Vision-Based Human Motion Capture,  Computer Vision and Image Understanding, 2001
[11] L. Wang, W.M. Hu and T.N. Tan, Recent Developments in Human Motion Analysis, Pattern Recognition, vol. 36, no. 3, pp. 585-601, 2003.
[12] R. Okada, B. Stenger, A Single Camera Motion Capture System for Human-Computer Interaction, IEICE Transactions (E91-D), No. 7, pp. 1855-1862, July 2008.
[13] www.charndyn.com
[14] www.qualisys.com
[15] C. Sminchisescu, Estimation Algorithms for Ambiguous Visual Models: Three-Dimensional Human Modeling and Motion Reconstruction in Monocular Video Sequences, Ph.D. thesis, Institut National Polytechnique de Grenoble (INRIA), July 2002.
[16] J. Deutscher, A. Blake, and I Reid. Articulated body motion capture by annealed particle filtering. In: Proc. Conf. Computer Vision and Pattern Recognition, volume 2, pages 1144-1149, 2000.
[17] G. Welch, Hybrid Self-Tracker: An Inertial/Optical Hybrid Three-Dimensional Tracking System, tech. report TR95-048, pp. 52-61. Univ. of North Carolina at Chapel Hill, 1995
[18] Y. Bar-Shalom ,T. B. Fortman, Tracking and Data Association. New York: Academic, 1988.
[19] C. Wan, B. Yuan, Z. Miao, Markerless human body motion capture using Markov random field and dynamic graph cuts, Visual Computer (24), No. 5, May 2008.
[20] www.xsens.com
[21] L. Bao, Physical Activity Recognition from Acceleration Data under Semi-Naturalistic Conditions, MIT master thesis,  2003.
[22] J. Lester, T. Choudhury, and G. Borriello A Practical Approach to Recognizing Physical Activities,  Pervasive 2006, LNCS 3968, pp. 1 – 16, 2006.
[23] S. Jain, R. Shah, W. Brunette, G. Borriello, S. Roy, Exploiting Mobility for Energy Efficient Data Collection in Wireless Sensor Networks, Mobile Networks and Applications, Vol. 11, No. 3 (June 2006), pp. 327-339.
[24] J. Mantyjarvi, J. Himberg, T. Seppanen, Recognizing human motion with multiple acceleration sensors, Systems, Man, and Cybernetics, 2001 IEEE International Conference on, Vol. 2 (2001), pp. 747-752 vol.2.
[25] Nishkam Ravi and Nikhil Dandekar and Preetham Mysore and Michael L. Littman, Activity Recognition from Accelerometer Data, American Association for Artificial Intelligence (2005).
[26] H. Wang, H. Lenz, A. Szabo, J. Bamberger, Uwe D. Hanebeck WLAN-Based Pedestrian Tracking Using Particle Filters and Low-Cost MEMS Sensors, In: proceedings of 4th Workshop on Positioning, Navigation and Communication 2007 (WPNC 2007), Hannover, Germany (2007).
[27] A. M. Sabatini, Quaternion-based strap-down integration method for applications of inertial sensing to gait analysis, Med. Biol. Eng. Comput. 43(1):94-101, Jan 2005.
[28] L. Snidaro, R. Niu, G.L. Foresti, P.K. Varshney, Sensor Fusion for Video Surveillance, 7th Int. Conf. on Information Fusion, 2004.
[29] T. Choudhury, M. Philipose, D. Wyatt, J. Lester, Towards Activity Database: Using Sensor and Statistical Models to Summarize People's Lives.
[30] Randell, C., and Muller, H. 2000. Context awareness by analyzing accelerometer data. In MacIntyre, B., and Iannucci, B., eds., The Fourth International Symposium on Wearable Computers, 175– 176. IEEE Computer Society.
[31] H. Kautz. A formal theory of plan recognition., PhD thesis, University of Rochester, 1987.
[32] V. Osmani, S. Balasubramaniam, D. Botvich, Human activity recognition in pervasive health-care: Supporting efficient remote collaboration, J. Netw. Comput. Appl. 31, 4 (Nov. 2008), 628-655.
[33] G. Guerra-Filho, Y.Aloimonos, A language for Human Action,  IEEE Computer Magazine, 40:60–69, 2007.
[34] G. Guerra-Filho, Y. Aloimonos, Discovering a language for human activity, AAAI Workshop on Anticipation in Cognitive Systems, October, 2005.
[35] Glenberg, A. M., & Kaschak, M. P. Grounding language in action. Psychonomic Bulletin & Review, 9, 558-565. 2002.
[36] M. S. Ryoo and J. K. Aggarwal, Recognition of Composite Human Activities through Context-Free Grammar Based Representation, Computer & Vision Research Center / Department of ECE, University of Texas at Austin.
[37] M. Arbib, Schema Theory,  In: S. Shapiro (Ed), The Encyclopedia of Artificial Intelligence (2nd Ed), New York: Wiley Interscience, pp. 1427-1443, 1992.
[38] G. Guerra Filho, Y. Aloimonos. A sensory-motor linguistic framework for human activity understanding, January 2007, Doctoral Thesis, University of Maryland at College Park.
[39] Osmani V., Balasubramaniam S., and Botvich D. 2008. Human activity recognition in pervasive health-care: Supporting efficient remote collaboration. J. Netw. Comput. Appl. 31, 4 (Nov. 2008), 628-655
[40] M. Marin-Perianu, C. Lombriser, O. Amft, P. J. M. Havinga, G. Tröster, Distributed Activity Recognition with Fuzzy-Enabled Wireless Sensor Networks, DCOSS 2008, volume 5067, June 2008, pp. 296-313.
[41] E. F. Desserée, Calais, L. R. Legrand, First Results of a Complete Marker-Free Methodology for Human Gait Analysis, Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005.
[42] J.Cameron, J. Lasenby, Estimating Human Skeleton Parameters and Configuration in Real-Time from Markered Optical Motion Capture, AMD08

Recent Publications

-          A Logical Approach to Home Healthcare with Intelligent Sensor-Network Support, A. Mileo, D. Merico, S. Pinardi, R. Bisiani, The Computer Journal, 2009.
-          Coherent Functions to Recognize Quality of Movements, S. Pinardi (submitted).

Collaborations:

-        University of Potsdam, Home Healthcare (Intelligence)
-        Technical School of Turin, Virtual Reality Group (Body Representation)
-         Fondazione Maugeri - Pavia (Movement Disease Rehabilitation Center)
-         Microsoft Italy, University Relations (Sensor Matters and Programming)

Theses

-         Activity Recognition with a Single Inertial Sensor in Ambient Intelligence for Elderly People. D. Rigamonti (Master Thesis 2008)
-         Feature Extraction Matters in Multisensory Analysis in Non-Free Contexts. L. Airoldi (Bachelor degree 2009)
-         Intelligent Analysis with a Single Sensor for Free Movement Recognition. M. Baiugini (Bachelor degree 2009)
-         A Rapid Application to Pair Inertial and Video Sources for Movement Analysis. E. Sada (Bachelor degree 2009)
-         Video Feature and Inertial Sensor Feature Pairing for Body Movement Reconstruction (F. Renzi - in progress)
-         Adaptive Learning Paths in a Non-Adaptive E-learning System (Moodle). F. Colleoni (Master Thesis 2009)
-         E-learning Communities as Social Communities of Practice - Dynamic Adaptation and Personalization Matters in Moodle (G. Riva, Master Thesis 2009)
-         Tracing Social Relations Using Windows Live with Moodle (S. Muggeo, Bachelor degree 2009)

University Teaching Activities

-         2000-2003 “Windows in Action” 3rd year course of the computer science bachelor degree – pilot course 
-         2005  “Windows Operating Systems” 3rd year course of the computer science bachelor degree 
-         2007-2008 “dot Net programming and Windows Operating Systems”  3rd year course of the computer science bachelor degree
-         2009 “dot NET programming for Inertial and Mobile Sensors” 3rd year course of the computer science bachelor degree   

Workshops

-         2008 June. New AI Techniques and Methodologies for Dynamic, Knowledge-intensive Domains 
-         2009 Feb. "Professional vs University Education in ICT Areas with Reference to the EUCIP Standards" (organizer and speaker), with the participation of the main Italian ICT companies and industry associations.

