
Description



Objective:

Mobility disabilities are prevalent in our ageing society; they impede activities essential to the independent living of elderly people and diminish their quality of life. The MOBOT project aims to support human mobility, and thus foster fitness and vitality, by developing intelligent robotic platforms designed to provide user-centred, natural support for ambulating in indoor environments. We envision the design of cognitive mobile robotic systems that can monitor and understand specific forms of human activity in order to deduce the user's mobility needs. The goal is to provide user- and context-adaptive active support and ambulation assistance to elderly users, and more generally to individuals with specific forms of moderate to mild walking impairment.

To achieve such a goal for the development of an efficient and intelligent robotic assistant, a variety of multimodal interaction and cognitive control functionalities must be embedded, so that the robot can autonomously reason about how to provide optimal support to the user whenever and wherever needed. In particular, MOBOT aims to design and deliver a cognitive mobility-aid robot that can act both: (a) proactively, by realizing autonomous and context-specific monitoring of human activities and by subsequently reasoning on meaningful user behavioural patterns; and (b) adaptively and interactively, by analysing multi-sensory and physiological signals related to gait and postural stability, and by performing adaptive compliance control for optimal physical support and active fall prevention.

MOBOT will create advanced perception, reasoning and control modules, focusing particularly on: (a) Developing a multimodal human-action recognition system, fusing computer vision techniques with other sensory modalities, including range sensor images, haptic information, as well as command-level speech and gesture recognition; (b) Conducting data-driven multimodal human behaviour analysis to extract behavioural patterns of users, in order to design a multimodal human-robot communication system and to systemically synthesise mobility assistance models that take into consideration safety-critical requirements; and (c) Designing and implementing a behaviour-based, cognitive robot control architecture, which will incorporate contextual reasoning and planning, guiding the robot mechanisms to provide situation-adapted optimal assistance to users.
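As a concrete illustration of the sensor-fusion step in (a), the sketch below combines per-modality action scores with a weighted log-linear (late-fusion) rule. The modality names, scores and weights are illustrative assumptions for this example, not MOBOT components:

```python
import math

# Sketch of score-level (late) fusion for multimodal action recognition.
# Each modality contributes per-action probabilities; weights are assumptions.

def fuse_scores(modality_scores, weights):
    """Combine per-modality action probabilities by a weighted log-linear sum.

    modality_scores: {modality: {action: probability}}
    weights:         {modality: weight}
    Returns the action with the highest fused score.
    """
    actions = next(iter(modality_scores.values())).keys()
    fused = {}
    for action in actions:
        fused[action] = sum(
            weights[m] * math.log(scores[action] + 1e-9)
            for m, scores in modality_scores.items()
        )
    return max(fused, key=fused.get)


scores = {
    "vision": {"sit_to_stand": 0.6, "walking": 0.4},
    "haptic": {"sit_to_stand": 0.7, "walking": 0.3},
    "speech": {"sit_to_stand": 0.2, "walking": 0.8},
}
weights = {"vision": 1.0, "haptic": 1.0, "speech": 0.5}
print(fuse_scores(scores, weights))  # -> sit_to_stand
```

Here vision and haptics agree on a sit-to-stand transfer and outweigh the down-weighted speech channel; in a real system the weights would be learned or reliability-driven rather than fixed.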

Direct involvement of end-user groups will ensure that actual user needs are addressed by the prototype platforms. Extensive user trials will be conducted to evaluate and benchmark the overall system and to demonstrate the vital role of MOBOT technologies for Europe’s service robotics.


MOBOT impact in terms of Advanced Robotics Functionalities:

MOBOT will create and deliver two robotic platforms able to perform a set of advanced assistive actions not achieved by current state-of-the-art approaches. These advanced robotic functionalities and assistance tasks are centred on two main axes:

  • Proactive-autonomy ambulation assistance tasks, that is, invoking self-motivated robot behaviours based on context-specific human activity monitoring, including autonomously approaching a user, providing proper support for sit-to-stand transfers and assistance with sit-down actions, following a person at a close (safe) distance to provide assistance whenever and wherever needed (e.g. when fatigue is observed), or waiting in stand-by mode near a standing user to offer support when needed.

  • User-adaptive walking assistance tasks, including optimal postural support and active fall prevention, through an adaptation of active compliance and on-line optimisation for robot configuration control based on posture and gait analysis and recognition.
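To make the second axis concrete, the sketch below shows one common form of active-compliance adaptation: a one-dimensional admittance law that maps handle force to platform velocity, with damping raised when a gait-stability score indicates fall risk. The class, parameters and numeric values are assumptions for illustration, not the MOBOT controller:

```python
# Illustrative admittance-control sketch for user-adaptive walking support.
# All names and gains are assumptions for this example, not MOBOT APIs.

class AdaptiveAdmittance:
    """Maps handle force to platform velocity: v = f / d (1-D admittance).

    The damping d is adapted online: a low gait-stability score (higher fall
    risk) raises damping, making the rollator stiffer and more supportive.
    """

    def __init__(self, d_min=10.0, d_max=60.0):
        self.d_min = d_min  # damping when gait is fully stable [N*s/m]
        self.d_max = d_max  # damping when gait is unstable

    def damping(self, stability):
        # stability in [0, 1]: 1 = stable gait, 0 = high fall risk
        s = min(max(stability, 0.0), 1.0)
        return self.d_max - s * (self.d_max - self.d_min)

    def velocity_command(self, handle_force, stability):
        """Forward velocity command [m/s] for a measured handle force [N]."""
        return handle_force / self.damping(stability)


ctrl = AdaptiveAdmittance()
v_stable = ctrl.velocity_command(handle_force=30.0, stability=1.0)
v_risky = ctrl.velocity_command(handle_force=30.0, stability=0.0)
print(v_stable, v_risky)  # the same push yields a slower, firmer response when unstable
```

A full controller would of course act in 2-D with virtual inertia and posture-dependent constraints; the point here is only the mapping from a stability estimate to compliance.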

The two platforms will be equipped with a series of sensors, actuators, and human-system interfaces, including visual, laser scanner, force and tactile sensors. In the first half of MOBOT we will mainly focus on the design and control of the rollator-type mobility assistant. In the second half of the project our attention will shift progressively to the nurse-type mobility assistant, while the rollator-type assistant will undergo detailed user evaluation studies.

 

MOBOT Platforms:

To achieve MOBOT's targets, and to reflect the growing demand for walking assistance with increasing age or progressive disease, MOBOT will target two different mobility-assistance platforms, as shown in the drawings below.

  • An intelligent rollator-type mobility assistant that provides assistance to people who experience problems in stabilizing their body while walking and who require support in standing up and sitting down, but still have enough strength in their hands and arms to grasp the handles of the mobility assistant and to use it as a tool.
  • An intelligent nurse-type mobility assistant that helps people who no longer have enough strength in their arms, who are weak in standing up, walking and sitting down, and who consequently can no longer use a classical rollator-type assistant.

 

Expected Scientific Contribution:

To develop the MOBOT platforms, significant scientific and technological advances must be achieved in a number of research areas, particularly integrating a set of advanced perception, reasoning and robot control capabilities. Expected major scientific contributions of MOBOT in these areas are:

  • The development of a multimodal action recognition system that will capture and process multi-sensory data regarding human activity and motions, in order to detect and recognize human actions, with particular emphasis on limb localisation, body pose estimation and physical human-robot interaction. The main thrust of our approach will be the fusion of computer vision techniques with modalities that are broadly used in robotics, such as range images and haptic data, as well as the integration of specific verbal and non-verbal (gestural) commands in the considered human-robot interaction context.

  • The development of a system that will further analyse human behaviour, in order to perform identification of higher-level human intent, with particular emphasis on human walking-pattern classification and on the synthesis of contextual mobility assistance models. For this purpose, human behaviour will be analysed and modelled particularly in settings involving the use of typical mobility aids by elderly people (and generally by people with specific forms of walking impairment), as well as during relevant interaction with carers and health-care professionals in typical supporting actions. Data-driven multimodal human behaviour analysis will be conducted and behavioural patterns of users (with mobility weaknesses) will be extracted, based on a multimodal corpus that will be especially created and annotated for the purposes of MOBOT. By analysing and modelling the semantics of human behaviour and human-robot interaction in similar settings, by classifying human walking patterns and by analysing the safety-critical requirements in the envisaged mobility assistance situations, dynamic statistical models will be synthesised that associate structures of human actions and user behavioural states with specific forms of mobility-assistance tasks.

  • The integration of the above models within a context-aware cognitive robot control architecture, which will incorporate contextual reasoning and planning regarding assistive robot actions and behaviours, guiding the robot mechanism to assume optimal configurations and to provide proactive assistance and situation-adapted mobility support. All the individual intelligent control modules will be integrated into a behaviour-based robot control framework, enabling the system to reason and adapt its operational behaviour based on contextual (environmental state, human behaviour and user-robot interaction) information, including the adaptation of the desired robot trajectory as well as the adaptation of the provided compliance during physical user interaction.
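The dynamic statistical models that associate observations with user behavioural states can be illustrated, in miniature, by Viterbi decoding of a two-state hidden Markov model over gait observations. All states, observations and probabilities below are invented for illustration, not MOBOT data:

```python
# Minimal Viterbi-decoding sketch: inferring a user behavioural-state sequence
# from multimodal gait observations. All quantities here are illustrative.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]


states = ("stable", "fatigued")
start_p = {"stable": 0.8, "fatigued": 0.2}
trans_p = {"stable": {"stable": 0.7, "fatigued": 0.3},
           "fatigued": {"stable": 0.2, "fatigued": 0.8}}
emit_p = {"stable": {"steady_gait": 0.8, "short_steps": 0.2},
          "fatigued": {"steady_gait": 0.3, "short_steps": 0.7}}

obs = ["steady_gait", "short_steps", "short_steps"]
print(viterbi(obs, states, start_p, trans_p, emit_p))
# -> ['stable', 'fatigued', 'fatigued']
```

Once a state such as "fatigued" is decoded, it can index into an assistance model (e.g. offer support, reduce speed); the project's models would be learned from the annotated multimodal corpus rather than hand-specified.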
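One minimal way to sketch behaviour-based arbitration of assistive actions is a priority scheme over triggered behaviours, in the spirit of classical behaviour-based architectures. The behaviour names, priorities and context fields below are assumptions for this example:

```python
# Sketch of priority-based behaviour arbitration for a context-aware control
# architecture. Behaviour names and the context dictionary are assumptions.

class Behaviour:
    def __init__(self, name, priority, trigger):
        self.name = name
        self.priority = priority    # higher number wins
        self.trigger = trigger      # predicate on the current context

    def active(self, context):
        return self.trigger(context)


def arbitrate(behaviours, context):
    """Select the highest-priority behaviour whose trigger fires."""
    active = [b for b in behaviours if b.active(context)]
    return max(active, key=lambda b: b.priority).name if active else "idle"


behaviours = [
    Behaviour("fall_prevention", 3, lambda c: c["instability"] > 0.8),
    Behaviour("sit_to_stand_support", 2, lambda c: c["user_state"] == "rising"),
    Behaviour("follow_user", 1, lambda c: c["user_state"] == "walking"),
]

ctx = {"instability": 0.9, "user_state": "walking"}
print(arbitrate(behaviours, ctx))  # -> fall_prevention (pre-empts following)
```

A safety-critical behaviour such as fall prevention thus pre-empts routine following whenever its trigger fires, which is the essential property the contextual reasoning layer must guarantee.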


