Personalised Assistive Dressing by Humanoid Robots using Multi-modal Information

Yixing Gao, Hyung Jin Chang, Yiannis Demiris

Research output: Contribution to conference (unpublished) › Paper › peer-review

Abstract

In this paper, we present an approach that enables a humanoid robot to provide personalised dressing assistance to human users using multi-modal information. A depth sensor mounted on top of the robot provides visual information, and the robot's end effectors are equipped with force sensors to provide haptic information. We use the visual information to model the movement range of the user's upper-body parts. The robot plans its dressing motions using these movement range models together with the real-time human pose. During assistive dressing, the force sensors detect external resistive forces, and we show how the robot locally adjusts its motions based on the detected forces. In the experiments, we show that the robot can assist a user in putting on a sleeveless jacket while reacting to force resistance.
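As a rough illustration of the force-reactive behaviour described in the abstract, the sketch below shows one plausible way a robot could locally back off a planned dressing waypoint when the measured end-effector force exceeds a threshold. The function name, FORCE_THRESHOLD, BACKOFF_SCALE, and the example values are hypothetical placeholders; this is a minimal sketch of the general idea, not the authors' implementation.

```python
import numpy as np

# Hypothetical parameters; not taken from the paper.
FORCE_THRESHOLD = 5.0   # [N] resistance level that triggers a local adjustment
BACKOFF_SCALE = 0.02    # [m/N] retreat distance per Newton of excess force

def adjust_waypoint(waypoint, force):
    """Locally shift a Cartesian waypoint away from a sensed resistive force.

    waypoint: (3,) planned end-effector position in metres
    force:    (3,) measured external force vector in Newtons
    """
    magnitude = np.linalg.norm(force)
    if magnitude <= FORCE_THRESHOLD:
        return waypoint  # no significant resistance, keep the planned motion
    # Retreat along the direction the force pushes the end effector,
    # proportionally to how far the threshold is exceeded.
    direction = force / magnitude
    return waypoint + BACKOFF_SCALE * (magnitude - FORCE_THRESHOLD) * direction

if __name__ == "__main__":
    planned = np.array([0.40, 0.10, 0.90])   # hypothetical planned waypoint
    sensed = np.array([0.0, -8.0, 2.0])      # hypothetical force reading
    print("adjusted waypoint:", adjust_waypoint(planned, sensed))
```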
Original language: English
Number of pages: 4
Publication status: E-pub ahead of print - 16 May 2016
Event: IEEE ICRA Workshop on Human-Robot Interfaces for Enhanced Physical Interactions - Stockholm, Sweden
Duration: 16 May 2016 – 16 May 2016

Conference

Conference: IEEE ICRA Workshop on Human-Robot Interfaces for Enhanced Physical Interactions
Country/Territory: Sweden
City: Stockholm
Period: 16/05/16 – 16/05/16
