Human-robot collaboration is playing an increasingly important role in manufacturing. To collaborate efficiently and safely with human workers, however, the industrial robots used in production processes must perceive and acquire condition information about the workers and the surrounding environment, and vision-based perception is one of the most effective ways to realize such functions, since it is contactless and economical. In this paper, a multi-source heterogeneous vision perception framework is proposed to acquire various condition information about human workers and the working environment during human-robot collaboration in manufacturing. The multi-source data are captured by multiple heterogeneous vision sensors deployed at different physical positions, e.g., RGB-D cameras (cameras with color and depth output) around the working area, which produce 3D point cloud data, and binocular cameras on the workbench, which track workers' hands. Moreover, a novel data fusion method is developed to process the multi-source vision data and thereby support the perception function of the industrial robots. The experimental results demonstrate that the proposed approach achieves an expansive, occlusion-free perception of the environment and the workers, including the status of workers' hands, at a high average frame rate.
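The abstract does not detail the fusion method, but a common first step when combining point clouds from multiple RGB-D cameras is to transform each camera's cloud into a shared world frame using calibrated extrinsics and then merge them. The sketch below illustrates that idea only; the function name, the toy extrinsics, and the simple concatenation strategy are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def fuse_point_clouds(clouds, extrinsics):
    """Illustrative fusion step (not the paper's method): transform each
    camera's (N, 3) point cloud into a shared world frame using its 4x4
    extrinsic matrix, then concatenate the transformed clouds."""
    fused = []
    for pts, T in zip(clouds, extrinsics):
        # Append a homogeneous coordinate so the 4x4 rigid transform applies.
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4)
        fused.append((homo @ T.T)[:, :3])  # rotate + translate, drop w
    return np.vstack(fused)

# Two toy "cameras": one at the world origin, one offset 1 m along x.
cloud_a = np.array([[0.0, 0.0, 1.0]])  # a point 1 m in front of camera A
cloud_b = np.array([[0.0, 0.0, 1.0]])  # same local point seen by camera B
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 1.0  # camera B's pose: translated 1 m along x

merged = fuse_point_clouds([cloud_a, cloud_b], [T_a, T_b])
print(merged.shape)  # (2, 3)
```

In a real multi-camera rig the extrinsics would come from calibration (e.g., checkerboard or marker-based registration), and overlapping regions would typically be downsampled or refined (e.g., with ICP) rather than naively concatenated.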
Recommended citation: Sida Yang, Wenjun Xu, Zhihao Liu, Zude Zhou, Duc Truong Pham. (2018). "Multi-Source Vision Perception for Human-Robot Collaboration in Manufacturing." In Proceedings of the 2018 IEEE 15th International Conference on Networking, Sensing and Control (ICNSC). IEEE, pp. 1-6.