Abstract
To develop vision systems for autonomous robotic disassembly, this paper presents a dual-loop implementation architecture that enables a robot vision system to learn from human vision in disassembly tasks. The architecture leverages human visual knowledge through a collaborative scheme named ‘learning by doing’. In the dual-loop implementation architecture, a human-robot collaborative disassembly loop containing autonomous perception, human-robot interaction and autonomous execution processes is established to address perceptual challenges in disassembly tasks by introducing human operators wearing augmented reality (AR) glasses, while a deep active learning loop is designed to use human visual knowledge to develop robot vision through autonomous perception, human-robot interaction and model learning processes. Considering uncertainties in the conditions of products at the end of their service life, an objective ‘informativeness’ matrix integrating label information and regional information is designed for autonomous perception, and AR technology is utilised to improve the operational accuracy and efficiency of the human-robot interaction process. By sharing the autonomous perception and human-robot interaction processes, the two loops are executed simultaneously. To validate the proposed architecture, a screw removal task was studied. The experiments demonstrated that the architecture can accomplish challenging perceptual tasks and develop the perceptual ability of robots accurately, stably, and efficiently during disassembly. The results highlight the potential of learning by doing for developing robot vision towards autonomous robotic disassembly through collaborative human-machine vision systems.
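The abstract describes a deep active learning loop that scores unlabelled images by an ‘informativeness’ measure combining label information and regional information, and sends the most informative images to the human operator. The paper's exact matrix formulation is not reproduced here; the following is a minimal illustrative sketch, assuming entropy over per-region class predictions as the label-information term and proposal-score uncertainty as the regional term (both are assumptions for illustration, not the authors' method).

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def informativeness(class_logits, box_scores):
    """Score one candidate image for active-learning query selection.

    class_logits: (n_regions, n_classes) raw classifier outputs per
                  detected region (stands in for 'label information').
    box_scores:   (n_regions,) objectness confidence of each region
                  proposal (stands in for 'regional information').

    Returns a scalar: higher means the image is more worth sending
    to the human operator for labelling.
    """
    probs = softmax(class_logits, axis=1)
    # Label information: predictive entropy of each region's class distribution.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Regional information: proposals with scores near 0.5 are the least
    # decided, so weight each region by 1 - |2*score - 1|.
    region_uncertainty = 1.0 - np.abs(2.0 * box_scores - 1.0)
    # Combine the two terms and average over the image's regions.
    return float((entropy * region_uncertainty).mean())

def select_queries(pool, k):
    """Pick the k most informative images from a pool of
    (name, (class_logits, box_scores)) candidates."""
    scored = sorted(pool, key=lambda item: -informativeness(*item[1]))
    return [name for name, _ in scored[:k]]
```

Under this sketch, an image whose regions have near-uniform class predictions and undecided proposal scores ranks above one with confident, peaked predictions, so human labelling effort is concentrated where the model is most uncertain.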
| | |
|---|---|
| Original language | English |
| Article number | 102673 |
| Number of pages | 20 |
| Journal | Robotics and Computer-Integrated Manufacturing |
| Volume | 86 |
| Early online date | 24 Oct 2023 |
| DOIs | |
| Publication status | Published - Apr 2024 |
Bibliographical note
Funding Information: This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/N018524/1 and Grant EP/W00206X/1, the National Natural Science Foundation of China (NSFC) under Grant 52075404, and the China Scholarship Council under Grant 202006950054.
Publisher Copyright:
© 2023 Elsevier Ltd
Keywords
- Augmented reality
- Deep active learning
- Human-machine collaboration
- Learning by doing
- Robot vision
- Robotic disassembly
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- General Mathematics
- Computer Science Applications
- Industrial and Manufacturing Engineering
Fingerprint
Dive into the research topics of 'Learning by doing: A dual-loop implementation architecture of deep active learning and human-machine collaboration for smart robot vision'. Together they form a unique fingerprint.
Projects
- Self-learning robotics for industrial contact-rich tasks (ATARI): enabling smart learning in automated disassembly
  Engineering & Physical Science Research Council
  1/05/22 → 31/10/24
  Project: Research Councils
- Robotic disassembly technology as a key enabler of autonomous remanufacturing
  Castellani, M., Essa, K., Saadat, M. & Pham, D.
  Engineering & Physical Science Research Council
  1/05/16 → 31/10/21
  Project: Research