User modelling using multimodal information for personalised dressing assistance

Yixing Gao, Hyung Jin Chang, Yiannis Demiris

Research output: Contribution to journal › Article › peer-review


Abstract

Assistive robots in home environments are steadily increasing in popularity. Due to significant variability in human behaviour, physical characteristics, and individual preferences, personalising assistance is a challenging problem. In this paper, we focus on an assistive dressing task that involves physical contact with a human’s upper body, where the goal is to improve the comfort level of the individual. Two aspects are considered significant in improving a user’s comfort level: adopting more natural postures and exerting less effort. However, a dressing path that satisfies both criteria may not be found in a single attempt. We therefore propose a user modelling method that combines vision and force data, enabling the robot to search for an optimised dressing path for each user and to improve as the human-robot interaction progresses. We compare the proposed method against two single-modality state-of-the-art user modelling methods designed for personalised assistive dressing in user studies with 31 participants. Experimental results show that the proposed method provides personalised assistance that results in more natural postures and less effort for human users.
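The abstract describes fusing a vision-based posture measure and a force-based effort measure into a single per-user criterion that is refined over repeated interactions. The sketch below is purely illustrative and does not reproduce the paper's actual user model: the candidate paths, the `observed_cost` stand-in, the modality weights `W_POSTURE`/`W_EFFORT`, and the epsilon-greedy path selection are all hypothetical assumptions, intended only to show the general idea of combining two per-user signals and preferring the lowest-cost dressing path as interactions accumulate.

```python
# Illustrative sketch only: not the authors' algorithm.
# Assumption: each candidate dressing path yields two per-user signals per
# interaction, a posture-naturalness penalty (from vision) and an effort
# penalty (from force sensing), which are fused into one cost.

import random
from collections import defaultdict

# Hypothetical candidate dressing paths (e.g. different garment trajectories).
CANDIDATE_PATHS = ["path_A", "path_B", "path_C"]

# Relative weighting of the two modalities (an assumption, not from the paper).
W_POSTURE, W_EFFORT = 0.5, 0.5

def observed_cost(path):
    """Stand-in for one human-robot interaction: returns a combined cost from
    a vision-based posture penalty and a force-based effort penalty."""
    posture_penalty = random.uniform(0.0, 1.0)   # would come from pose estimation
    effort_penalty = random.uniform(0.0, 1.0)    # would come from force/torque data
    return W_POSTURE * posture_penalty + W_EFFORT * effort_penalty

def personalise(n_interactions=30, explore=0.2):
    """Maintain a per-path running-average cost for one user; mostly pick the
    currently best path, exploring occasionally (epsilon-greedy)."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(n_interactions):
        if random.random() < explore or not counts:
            path = random.choice(CANDIDATE_PATHS)
        else:
            path = min(counts, key=lambda p: totals[p] / counts[p])
        cost = observed_cost(path)
        totals[path] += cost
        counts[path] += 1
    return min(counts, key=lambda p: totals[p] / counts[p])

if __name__ == "__main__":
    print("Preferred dressing path for this user:", personalise())
```

In this toy formulation the per-user model is simply a running-average cost per path; the paper's method is richer, but the sketch shows why fusing both modalities matters: a path that looks natural in the vision channel can still be rejected if the force channel reports high effort.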
Original language: English
Article number: 9024050
Pages (from-to): 45700-45714
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 20 Mar 2020

Keywords

  • Multimodal user modelling
  • assistive dressing
  • human-robot interaction
  • vision and force fusion
