Learning by doing: A dual-loop implementation architecture of deep active learning and human-machine collaboration for smart robot vision

Wupeng Deng, Quan Liu, Feifan Zhao*, Duc Truong Pham, Jiwei Hu, Yongjing Wang, Zude Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To develop vision systems for autonomous robotic disassembly, this paper presents a dual-loop implementation architecture that enables a robot vision system to learn from human vision in disassembly tasks. The architecture leverages human visual knowledge through a collaborative scheme named ‘learning by doing’. In the dual-loop implementation architecture, a human-robot collaborative disassembly loop comprising autonomous perception, human-robot interaction and autonomous execution processes is established to address perceptual challenges in disassembly tasks by introducing human operators wearing augmented reality (AR) glasses. In parallel, a deep active learning loop is designed to use human visual knowledge to develop robot vision through autonomous perception, human-robot interaction and model learning processes. Considering uncertainties in the conditions of products at the end of their service life, an objective ‘informativeness’ matrix integrating label information and regional information is designed for autonomous perception, and AR technology is utilised to improve the accuracy and efficiency of the human-robot interaction process. By sharing the autonomous perception and human-robot interaction processes, the two loops are executed simultaneously. To validate the proposed architecture, a screw removal task was studied. The experiments demonstrated that the architecture can accomplish challenging perceptual tasks and develop the robot's perceptual ability accurately, stably and efficiently in disassembly processes. The results highlight the potential of learning by doing for developing robot vision towards autonomous robotic disassembly through collaborative human-machine vision systems.
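The abstract does not give the exact form of the paper's ‘informativeness’ matrix, but the general idea of active-learning query selection that combines label information with regional information can be illustrated with a minimal sketch. The sketch below is an assumption for illustration only (all function names are hypothetical): it uses prediction entropy as the label term and proximity of region confidences to the decision boundary as the regional term, then ranks unlabelled images so the most informative ones are passed to the human operator for annotation.

```python
import numpy as np

def label_score(class_probs):
    # Label information: mean Shannon entropy of the predicted class
    # distribution over detected regions (higher = less certain labels).
    p = np.clip(np.asarray(class_probs, dtype=float), 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def region_score(region_confidences):
    # Regional information: closeness of region/objectness confidences
    # to the decision boundary (peaks when confidence is 0.5).
    c = np.asarray(region_confidences, dtype=float)
    return float(np.mean(1.0 - np.abs(2.0 * c - 1.0)))

def informativeness(class_probs, region_confidences, alpha=0.5):
    # Weighted combination of the label and regional terms.
    return alpha * label_score(class_probs) \
        + (1.0 - alpha) * region_score(region_confidences)

def select_queries(samples, k=1, alpha=0.5):
    # Rank unlabelled images by informativeness; the top-k would be
    # routed to the human operator (via AR in the paper's scheme).
    scores = [informativeness(cp, rc, alpha) for cp, rc in samples]
    return sorted(range(len(samples)), key=lambda i: -scores[i])[:k]

# Example: an image with uncertain detections outranks a confident one.
uncertain = ([[0.5, 0.5]], [0.5])
confident = ([[0.99, 0.01]], [0.99])
print(select_queries([uncertain, confident], k=1))  # → [0]
```

In an entropy-based scheme like this, samples whose detections the model is least sure about are exactly the ones where human labelling adds the most training value, which is the premise of the deep active learning loop.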

Original language: English
Article number: 102673
Number of pages: 20
Journal: Robotics and Computer-Integrated Manufacturing
Volume: 86
Early online date: 24 Oct 2023
DOIs
Publication status: Published - Apr 2024

Bibliographical note

Funding Information:
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/N018524/1 and Grant EP/W00206X/1, the National Natural Science Foundation of China (NSFC) under Grant 52075404, and the China Scholarship Council under Grant 202006950054.

Publisher Copyright:
© 2023 Elsevier Ltd

Keywords

  • Augmented reality
  • Deep active learning
  • Human-machine collaboration
  • Learning by doing
  • Robot vision
  • Robotic disassembly

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • General Mathematics
  • Computer Science Applications
  • Industrial and Manufacturing Engineering
