In this paper, we present an approach that enables a humanoid robot to provide personalised dressing assistance to human users using multi-modal information. A depth sensor mounted on top of the robot provides visual information, and force sensors on the robot end effectors provide haptic information. The visual information is used to model the movement range of the user's upper-body parts, and the robot plans its dressing motions from these movement range models together with the real-time human pose. During assistive dressing, the force sensors detect external force resistances, and we describe how the robot locally adjusts its motions based on the detected forces. In the experiments we show that the robot can assist a human in putting on a sleeveless jacket while reacting to force resistances.
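The force-based local adjustment described above can be sketched as a simple compliance rule: when the measured external force exceeds a resistance threshold, the next dressing waypoint is shifted along the force direction to relieve the resistance. This is a minimal illustrative sketch, not the paper's actual controller; the threshold, gain, and function names are all assumptions.

```python
import numpy as np

FORCE_THRESHOLD = 5.0  # N; hypothetical resistance threshold
ADJUST_GAIN = 0.01     # m per N of excess force; hypothetical compliance gain

def adjust_waypoint(waypoint, force):
    """Hypothetical local adjustment: if the external force measured at the
    end effector exceeds the threshold, shift the planned waypoint along
    the force direction; otherwise keep the planned motion unchanged."""
    magnitude = np.linalg.norm(force)
    if magnitude <= FORCE_THRESHOLD:
        return waypoint
    direction = force / magnitude
    return waypoint + ADJUST_GAIN * (magnitude - FORCE_THRESHOLD) * direction

# Example: a resistance force of 8 N pushing back along -x moves the
# waypoint 3 cm in that direction; forces below the threshold do nothing.
wp = np.array([0.40, 0.10, 0.90])
f = np.array([-8.0, 0.0, 0.0])
print(adjust_waypoint(wp, f))  # → [0.37 0.1  0.9 ]
```

In practice such a rule would run inside the robot's motion loop, with the threshold tuned to distinguish garment friction from genuine resistance by the user.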
Number of pages: 4
Publication status: E-pub ahead of print, 16 May 2016
Event: IEEE ICRA Workshop on Human-Robot Interfaces for Enhanced Physical Interactions, Stockholm, Sweden, 16 May 2016