A Semi-formal Presentation of an Actual Artificial Intelligence Technique for Movement Classification with Inertial Sensors. From Sensors to Meaning

15 December 2011, 5:30 PM

Seminar Room - Dipartimento di Informatica, Sistemistica e Comunicazione
Speaker: Stefano Pinardi, Ph.D.

In modern computer science, the boundary between pure computer applications - i.e. applications that are confined to computers and interact with users solely through a keyboard or a mouse - and applications that interact naturally with people going about their usual daily activities is gradually fading away.

If we do not use a keyboard or a mouse to interact with a computerized system, we need to obtain information about people and their status using sensors placed in the environment or in mobile devices, such as mobile phones. Sensors are becoming smaller, lighter, and cheaper, which means the total number of sensors in the environment is continuously increasing, making possible what was once only conceivable. Mobile phones have GPS (Global Positioning System) receivers that track where people or machines are located. The accelerometers in the Nintendo Wii remote report the tilt of the device, the intensity of the applied forces, and the direction of a gesture. The Microsoft Kinect can detect the position of arms and legs in space and use this information to interact with game applications. Piezoelectric devices placed in shoes can count the number and frequency of steps to support sport activities such as running or jogging. In particular settings - for example hospitals, shopping centers, retirement homes, etc. - it is also possible to place sensors in the environment to obtain ambient information about temperature, light, pressure, and humidity, or to track where a person is. Sensors can also be placed on the body or on clothes to understand what a person is doing, which movements he or she is executing, or even what emotion he or she is expressing, by analyzing his or her "body language".
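The step-counting idea mentioned above can be sketched in a few lines: detect peaks in the acceleration magnitude that cross a threshold, with a refractory gap so one stride is not counted twice. This is a minimal illustration with synthetic data, not the algorithm of any specific device; the trace, threshold, and gap values are assumptions.

```python
# Minimal sketch (hypothetical data and parameters) of step counting from
# accelerometer magnitudes: count threshold-crossing local peaks, enforcing
# a minimum gap between peaks so one stride is not counted twice.
def count_steps(magnitudes, threshold=1.2, min_gap=3):
    """Count local peaks above `threshold` at least `min_gap` samples apart."""
    steps = 0
    last_peak = -min_gap  # allow a peak at the very start of the trace
    for i in range(1, len(magnitudes) - 1):
        is_peak = (magnitudes[i] > threshold
                   and magnitudes[i] >= magnitudes[i - 1]
                   and magnitudes[i] >= magnitudes[i + 1])
        if is_peak and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# Synthetic trace: baseline near 1 g with one spike per step (four steps).
trace = [1.0, 1.1, 1.6, 1.1, 1.0, 1.5, 1.0, 1.0, 1.7, 1.1, 1.0, 1.4, 1.0]
print(count_steps(trace))  # → 4
```

Real pedometers additionally low-pass filter the signal and adapt the threshold to the wearer, but the peak-plus-refractory-gap structure is the same.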

In this speech we will stress the accent about wearable sensors  - i.e. sensors that can be placed  on body segments - to classify movements and to understand what a person is doing in an environment. We will see not only that it is possible to use Information Retrieval techniques normally used for text classification, to understand what a person is doing  (i.e. that is possible to extract a “semantic of movements” from sensors data),  but also that the “semantic of  movements”  presents a close and somehow unexpected similarity with natural language semantics.  Movements appears to be organized by similarities and contraries like in a dictionary, resembling the first vestigial of a “language”:  a language of facts (“I am closing a door, and I am sitting on a chair”) and not metaphors (“your face is pale like the moon, your eyes are like stars”), but still a language.
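The core idea of applying text-classification machinery to sensor data can be sketched as follows: quantize inertial readings into symbolic "motion words", weight them with TF-IDF, and classify a new recording by cosine similarity to labeled recordings, exactly as a vector-space retrieval system matches documents. This is a toy illustration under assumed data, bin sizes, and weighting, not the speaker's actual method.

```python
# Toy sketch (hypothetical data and parameters): treat inertial readings as
# text. Quantize acceleration magnitudes into symbolic "motion words", build
# TF-IDF vectors, and classify a query recording by cosine similarity -- the
# vector-space model from Information Retrieval applied to movement data.
import math
from collections import Counter

def to_words(samples, bins=4, lo=0.0, hi=4.0):
    """Quantize each acceleration magnitude (in g) into a symbolic word."""
    return [f"a{min(bins - 1, int((s - lo) / (hi - lo) * bins))}"
            for s in samples]

def tfidf(doc_words, all_docs):
    """TF-IDF vector (dict) for one document, given the whole corpus."""
    tf = Counter(doc_words)
    n = len(all_docs)
    vec = {}
    for w, f in tf.items():
        df = sum(1 for d in all_docs if w in d)          # document frequency
        vec[w] = (f / len(doc_words)) * (math.log((1 + n) / (1 + df)) + 1)
    return vec

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical labeled recordings: acceleration magnitudes per movement.
recordings = {
    "walking": [1.0, 2.1, 1.1, 2.3, 1.2, 2.0],       # rhythmic spikes
    "sitting_down": [1.0, 0.6, 0.3, 0.2, 0.2, 0.1],  # decaying magnitude
}
docs = {label: to_words(s) for label, s in recordings.items()}

query = to_words([1.1, 2.2, 1.0, 2.1, 1.2, 2.0])  # unknown movement
corpus = list(docs.values()) + [query]
qv = tfidf(query, corpus)
best = max(docs, key=lambda lbl: cosine(qv, tfidf(docs[lbl], corpus)))
print(best)  # → walking
```

The interesting point the talk makes is that once movements live in such a vector space, nearby vectors behave like near-synonyms and distant ones like antonyms, which is what suggests the dictionary-like "semantics of movements".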

Hence, is the mechanism of word representation in languages surprisingly natural, with a unique way to represent concrete concepts such as movements, or is it simply a coincidence?



Stefano Pinardi, after 8 years of experience in IT, earned his master's degree with honors in 2006 with a dissertation on Latent Semantic Indexing. He worked for 4 years in the area of ambient and inertial sensors, with an original Ph.D. thesis on movement classification that used a new classification method, sharply improving on previous results in the literature and far surpassing the results obtained at U.C. Berkeley. He now works on movement classification and recognition in the EasyReach project (Nomadis Lab), which aims to improve the quality of life of elderly people with the help of artificial intelligence methodologies, and also works on human behavior comprehension in the social networks research area (Spinnet Lab), proposing a model of the relationship between users and user-generated data based on the "prey and predator" metaphor.



For information:

Seminar open to students.

Archived since: 15/07/2012




(C) Copyright 2016 - Dipartimento Informatica Sistemistica e Comunicazione - Viale Sarca, 336
20126 Milano - Building U14 - last update of this page: 18/07/2017