Tianyu Wang, Zongyang Hu, Yijie Wang, Mian Li, Zhihao Liu, Xi Vincent Wang
Advanced Engineering Informatics 2024
Product quality has become increasingly important in modern manufacturing processes. Due to measurement delays, data-driven soft sensor models are usually built to predict quality in advance. While most prior works develop customized models for specific scenarios, some recent works explore adaptive mechanisms that let the model tolerate online changes. However, they either tackle operational variations due to changing product specifications driven by market demands, or deal with latent variations due to process uncertainties such as sensor degradation. To improve generalization across diverse processes with both kinds of variation, a novel slow-fast dual-branch method inspired by the complementary learning systems in neuroscience is proposed for the first time. The slow branch is composed of an enhanced multi-layer perceptron with attention-based embedding fusion and memory-aware synapses to grasp and consolidate long-term global knowledge under non-independent and identically distributed data samples. The fast branch contains a modified broad learning system with a maximum correntropy criterion and adaptive sample weights to rapidly track short-term time-varying patterns. The two branches are integrated via feature sharing and refined gradient boosting to mimic the interactions between the neocortex and hippocampus in the brain. Extensive experiments on three real-world manufacturing processes from distinct industries show the superior performance of the proposed method over 15 state-of-the-art methods.
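A minimal sketch of the boosting-style slow-fast combination described above: the fast branch fits the residual left by the slow branch. The paper's attention fusion, memory-aware synapses and broad learning system are omitted; scikit-learn's MLPRegressor and Ridge stand in for the two branches, and the toy data is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # toy process variables
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)    # toy quality indicator

slow = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
slow.fit(X, y)                                   # long-term global knowledge

residual = y - slow.predict(X)                   # what the slow branch misses
fast = Ridge(alpha=1.0)
fast.fit(X, residual)                            # short-term time-varying part

y_hat = slow.predict(X) + fast.predict(X)        # boosted dual-branch output
```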
Sichao Liu, Zhihao Liu, Lihui Wang, Xi Vincent Wang
Conference on Robot Learning, Workshop on Language and Robot Learning: Language as an Interface 2024
Humans often use natural language instructions to control and interact with robots for task execution. This poses a significant challenge for robots, which must not only comprehend human commands and link them with robot actions but also build a semantic understanding of the operating scenes and environments. To address this challenge, we present a vision-language-conditioned learning policy for robotic manipulation. Given a scene image, we utilise a vision-language model (GPT-4o) to realise a semantic understanding of the scene together with the detection and grounding of its constituent objects. Our approach takes a limited set of images to reveal the spatial-temporal relationships among the objects. Then, we develop a GPT-o1-driven approach to perform logical reasoning behind the language tasks and to generate high-level control code. With 6D pose estimates established, a language-perception-action method is proposed to link language instructions with robot behaviours for robotic manipulation. The performance of the developed approach is experimentally validated through industrial object manipulation.
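A high-level sketch of the pipeline described above. All helper names (query_vlm, query_llm, estimate_6d_pose) are hypothetical stand-ins, not APIs from the paper; real calls to GPT-4o / GPT-o1 and a pose estimator would replace the toy bodies below.

```python
def query_vlm(image_path, prompt):
    return "bolt, bracket"                        # toy scene description + grounding

def query_llm(prompt):
    return "pick('bolt'); place_on('bracket')"    # toy generated control code

def estimate_6d_pose(obj):
    return [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]         # toy 6D pose (x, y, z, r, p, y)

def language_conditioned_manipulation(image_path, instruction):
    scene = query_vlm(image_path, "List the objects in the scene.")    # 1) scene understanding
    code = query_llm(f"Scene: {scene}\nTask: {instruction}\n"
                     "Reason step by step, then emit control code.")   # 2) reasoning + code
    poses = {o.strip(): estimate_6d_pose(o) for o in scene.split(",")} # 3) 6D grounding
    return code, poses   # a real system would execute the code with the poses

print(language_conditioned_manipulation("scene.png", "mount the bolt"))
```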
Tianyu Wang, Zhihao Liu, Lihui Wang, Mian Li, Xi Vincent Wang
Robotics and Computer-Integrated Manufacturing 2024
With the recent vision of Industry 5.0, the cognitive capability of robots plays a crucial role in advancing proactive human-robot collaborative assembly. As a basis of mutual empathy, understanding a human operator's intention has been studied primarily through human action recognition. Existing deep learning-based methods demonstrate remarkable efficacy in handling information-rich data such as physiological measurements and videos, where the latter represents a more natural perception input. However, deploying these methods in new, unseen assembly scenarios requires first collecting abundant case-specific data, which leads to significant manual effort and poor flexibility. To deal with this issue, this paper proposes a novel cross-domain few-shot learning method for data-efficient multimodal human action recognition. A hierarchical data fusion mechanism is designed to jointly leverage skeletons, RGB images and depth maps with complementary information. A temporal CrossTransformer is then developed to enable action recognition with a very limited amount of data. Lightweight domain adapters are integrated to further improve generalization with fast finetuning. Extensive experiments on a real car engine assembly case show the superior performance of the proposed method over the state of the art in both accuracy and finetuning efficiency. Real-time demonstrations and an ablation study further indicate the potential of early recognition, which benefits robot procedure generation in practical applications. In summary, this paper contributes to the rarely explored realm of data-efficient human action recognition for proactive human-robot collaboration.
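A minimal sketch of cross-attention-based few-shot matching in the spirit of a CrossTransformer: query frames attend over support frames of each class, and the class whose attended prototype best reconstructs the query wins. Feature dimensions, class names and the distance choice are toy assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

def class_score(query, support):
    # query: (Tq, d) frame features; support: (Ts, d) features of one class
    attn = F.softmax(query @ support.T / query.shape[1] ** 0.5, dim=-1)
    prototype = attn @ support             # query-conditioned class prototype
    return -F.mse_loss(prototype, query)   # higher = better match

query = torch.randn(8, 128)                                  # 8 query frames
supports = {c: torch.randn(5 * 8, 128)                       # 5 shots per class
            for c in ["pick", "place", "screw"]}
pred = max(supports, key=lambda c: class_score(query, supports[c]))
print(pred)
```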
Quan Liu, Zhenrui Ji, Wenjun Xu, Zhihao Liu, Bitao Yao, Zude Zhou
Expert Systems with Applications 2023
Industrial robots have become key equipment in the context of smart manufacturing, and assembly is seen as one of the dominant fields of robotic applications. However, robotic assembly still relies heavily on manual programming and is performed repetitively in highly controlled and structured environments, with weak generalization. Recent successes in robot learning show that endowing robots with the intelligence to acquire skills autonomously is a promising approach. Existing robot learning methods are difficult to apply because they require extensive trial-and-error exploration, which is costly in hardware and time-consuming. When encountering an unfamiliar task, it is natural for humans to use their prior knowledge as guidance to derive explorative actions and then learn the related skills from the accumulated experience. Inspired by this, this paper proposes a knowledge-guided robot learning method with a predictive model to improve the safety and efficiency of assembly skill acquisition. Concretely, based on Cartesian compliance control, a knowledge-guided exploration strategy (KGES) using fuzzy logic on position/force feedback is built to provide direction and limit the range of exploration in the early learning stages. Upon KGES, a predictive model-based reinforcement learning method is proposed to optimize the local searching trajectory, where the training data, generated from trained ensemble predictive models with a knowledge-guided branched progressive rollout method, is used for policy optimization. Finally, the proposed method is tested on two peg-in-hole assembly tasks in the MuJoCo environment, and the results show that the robot learns the assembly skill faster and achieves a higher success rate than in model-free and knowledge-free settings while maintaining the contact force within a safe range.
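A toy sketch of what fuzzy-logic-guided exploration from force feedback could look like, assuming simple triangular membership functions and a single lateral force input; the paper's actual rule base and compliance controller are much richer.

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function on [a, c] with peak at b
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def explore_step(f_lateral, max_step=0.5e-3):
    # larger lateral contact force -> step away from contact, with smaller gain
    small = tri(abs(f_lateral), 0.0, 0.0, 5.0)     # membership "force is small" (N)
    large = tri(abs(f_lateral), 2.0, 10.0, 10.0)   # membership "force is large" (N)
    gain = (small * 1.0 + large * 0.2) / (small + large + 1e-9)  # defuzzify
    return -np.sign(f_lateral) * gain * max_step   # exploration step in metres

print(explore_step(6.0))   # damped step opposite the sensed contact force
```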
Wenjun Xu, Siqi Feng, Bitao Yao, Zhihao Liu, Zhenrui Ji
Journal of Manufacturing Science and Engineering 2023
Human-robot collaboration (HRC) combines the repeatability and strength of robots with humans' abilities of cognition and planning to enable a flexible and efficient production mode. Ideally, robots smoothly assist workers in complex environments, which means that robots need to recognize the process's turn-taking early, adapt to the operating habits of different workers, and plan ahead to improve the fluency of HRC. However, many current HRC systems ignore fluent turn-taking between robots and humans, which results in unsatisfactory collaboration and affects productivity. Moreover, humans introduce uncertainty: different workers have different operating proficiency and therefore different operating speeds. This requires robots to make early predictions of turn-taking even under such human uncertainty. Therefore, this paper proposes an early turn-taking prediction method for HRC assembly tasks based on spiking neural networks (SNNs) built on the Izhikevich neuron model. On this basis, dynamic motion primitives (DMP) are used to establish trajectory templates at different operating speeds. The length of the sequence sent to the SNN is determined by the matching degree between the observed data and the templates, so as to adjust to human uncertainty. The proposed method is verified on a gear assembly case. The results show that our method can shorten the recognition time of human-robot turn-taking under human uncertainty.
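A minimal sketch of the Izhikevich neuron that this kind of SNN builds on, using the standard textbook update and regular-spiking parameters; the paper's network wiring, input encoding and DMP templates are not reproduced here.

```python
def izhikevich(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)  # membrane potential
        u += dt * a * (b * v - u)                        # recovery variable
        if v >= 30.0:                                    # spike, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)))  # spike count for a constant input current
```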
Zhihao Liu, Quan Liu, Wenjun Xu, Lihui Wang, Zhenrui Ji
Advanced Engineering Informatics 2023
Manual procedure recognition and prediction are essential for practical human-robot collaboration in industrial tasks such as collaborative assembly. However, current research mostly focuses on diverse human motions, while the similar, repetitive manual procedures that are prevalent in real production tasks are often overlooked. Furthermore, the dynamic uncertainty caused by human-robot interference, together with generalisation across individuals, scenarios, and multiple sensor deployments, poses challenges for implementing manual procedure prediction and robotic procedure generation. To address these issues, this paper proposes a real-time human skeleton processing system oriented to similar repetitive procedures, employing the human skeleton as a robust modality. It utilises an improved deep spatial-temporal graph convolutional network and a FIFO queue-based discriminator for real-time data processing, procedure prediction, and generation. The proposed method is validated on multiple datasets with tens of individuals engaged in a real, dynamic and uncertain human-robot collaborative assembly cell, and it is able to run on entry-level hardware. The results demonstrate competitive performance in handcrafted-feature-free early prediction and in generalisation across individual variance, environment background, camera position, lighting conditions, and stochastic interference in human-robot collaboration.
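A minimal sketch of a FIFO-queue-based streaming discriminator: a fixed window of the most recent skeleton frames is kept, old frames fall out automatically, and a prediction is issued once the queue is full. The window size is a toy value and classify() is a stand-in for the trained spatial-temporal graph convolutional network.

```python
from collections import deque

WINDOW = 32                            # frames per prediction (toy value)
queue = deque(maxlen=WINDOW)           # FIFO: oldest frame is evicted on append

def classify(frames):                  # stand-in for the trained ST-GCN
    return "procedure_A"

def on_new_frame(skeleton):
    queue.append(skeleton)
    if len(queue) == WINDOW:
        return classify(list(queue))   # predict on every incoming frame
    return None                        # still warming up

for t in range(40):
    label = on_new_frame([0.0] * 17)   # toy 17-joint skeleton frame
print(label)
```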
Mengyuan Ba, Zhenrui Ji, Zhihao Liu, Bitao Yao, Wenjun Xu, Yi Zhong
IEEE 19th International Conference on Automation Science and Engineering (CASE) 2023
With the development of human-robot collaboration technology, action detection, which detects human actions during the assembly process and improves the fluency of collaboration, has significant value in industrial assembly tasks. In practical application scenarios, however, action detection suffers from the low inter-action variation of human-robot collaborative assembly tasks, reliance on a single data modality, and similar factors. To address this problem, a human-robot collaborative assembly framework based on action detection is proposed, within which we propose an action detection method, Multi-Ad, with higher accuracy and generalization capability. Multi-Ad fuses multimodal features from RGB, optical flow, and skeleton sequences to enrich the extracted information, which improves the accuracy of action detection. Experimental results on the THUMOS14 dataset show that the proposed method outperforms previous methods in action detection accuracy.
Zhihao Liu, Quan Liu, Wenjun Xu, Lihui Wang, Zude Zhou
Robotics and Computer-Integrated Manufacturing 2022
Robotic equipment has played a central role since the proposal of smart manufacturing. Since their first integration into production lines, industrial robots have significantly enhanced productivity and relieved humans from heavy workloads. Towards the next generation of manufacturing, this review first introduces the comprehensive background of smart robotic manufacturing across robotics, machine learning, and robot learning. Definitions and categories of robot learning are summarised. Concretely, imitation learning, policy gradient learning, value function learning, actor-critic learning, and model-based learning, as the leading technologies in robot learning, are reviewed. Training tools, benchmarks, and comparisons amongst different robot learning methods are presented. Typical industrial applications in robotic grasping, assembly, process control, and industrial human-robot collaboration are listed and discussed. Finally, open problems and future research directions are summarised.
Siqi Feng, Wenjun Xu, Bitao Yao, Zhihao Liu, Zhenrui Ji
IEEE 18th International Conference on Automation Science and Engineering (CASE) 2022
In the context of Industry 5.0, human-robot collaboration (HRC) in assembly, a flexible production mode, has received increasing attention. To improve the efficiency and fluency of the whole assembly process, the key is to develop more natural human-robot interaction (HRI) so that the robot can predict the human's intention earlier. Existing human intention prediction, however, mainly focuses on the intention within a single process, which runs against natural HRI and leaves the robot insensitive to turn-taking across successive processes. Therefore, this paper proposes early prediction of turn-taking in HRC assembly tasks based on an Izhikevich neuron model-based spiking neural network (SNN). The proposal is verified in a developed HRC gear assembly scenario. The results show that our method can greatly advance the recognition time of human-robot turn-taking, which improves the efficiency of human-robot collaborative assembly.
Quan Liu, Zhihao Liu, Bo Xiong, Wenjun Xu, Yang Liu
Advanced Engineering Informatics 2021
In human-robot collaboration in manufacturing, the operator's safety is the primary issue during manufacturing operations. This paper presents a deep reinforcement learning approach to realize real-time collision-free motion planning of an industrial robot for human-robot collaboration. Firstly, the safe human-robot collaborative manufacturing problem is formulated as a Markov decision process, and the mathematical expression of the reward function design problem is given. The goal is for the robot to autonomously learn a policy that reduces the accumulated risk and assures the task completion time during human-robot collaboration. To transform our optimization objective into a reward function that guides the robot to learn the expected behaviour, a reward function optimizing approach based on the deterministic policy gradient is proposed to learn a parameterized intrinsic reward function. The reward function for the agent to learn the policy is the sum of the intrinsic reward function and the extrinsic reward function. Then, a deep reinforcement learning algorithm, intrinsic reward deep deterministic policy gradient (IRDDPG), which combines the DDPG algorithm and the reward function optimizing approach, is proposed to learn the expected collision avoidance policy. Finally, the proposed algorithm is tested in a simulation environment, and the results show that the industrial robot can learn the expected policy to achieve safety assurance for industrial human-robot collaboration without missing the original target. Moreover, the reward function optimizing approach can help compensate for the designed reward function and improve policy performance.
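A sketch of the reward composition described above: the critic target uses the sum of the environment (extrinsic) reward and a learned intrinsic reward. intrinsic_reward is a hypothetical parameterized function used only for illustration, and the rest of the DDPG machinery is assumed to exist elsewhere.

```python
GAMMA = 0.99

def intrinsic_reward(state, action, eta):
    # hypothetical parameterized intrinsic reward r_int(s, a; eta)
    return float(eta * sum(a * a for a in action))

def critic_target(r_ext, state, action, next_q, eta, done):
    r_total = r_ext + intrinsic_reward(state, action, eta)  # r = r_ext + r_int
    return r_total + (0.0 if done else GAMMA * next_q)      # DDPG TD target

print(critic_target(r_ext=1.0, state=None, action=[0.1, 0.2],
                    next_q=5.0, eta=0.01, done=False))
```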
Zhihao Liu, Quan Liu, Lihui Wang, Wenjun Xu, Zude Zhou
International Journal of Advanced Manufacturing Technology 2021
Human-robot collaboration, as a multidisciplinary research topic, still pursues enhanced robot intelligence that is more human-compatible and fits the dynamic and stochastic characteristics of humans. However, the uncertainties brought by the human partner challenge the robot's task planning and decision-making. For industrial tasks like collaborative assembly, dynamics in the temporal dimension and stochasticity in the order of procedures need to be further considered. In this work, we bring a new perspective and solution based on reinforcement learning, in which the problem is regarded as training an agent for tasks in dynamic and stochastic environments. Concretely, an adapted training approach based on deep Q-learning is proposed. This method regards both the robot and the human as agents in the interactive training environment for deep reinforcement learning. With task-level industrial human-robot collaboration in mind, the training logic and the agent-environment interaction are proposed. For the human-robot collaborative assembly tasks in the case study, we illustrate that our method can drive the robot, represented by one agent, to collaborate with the human partner even when the human performs the task procedures in a random order.
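A minimal PyTorch sketch of the deep Q-learning target that this family of methods rests on; the network sizes, the 10-dimensional HRC state encoding and the 4-action space are toy assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

GAMMA = 0.99
q_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))
target_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))
target_net.load_state_dict(q_net.state_dict())   # frozen copy for stable targets

def td_loss(s, a, r, s_next, done):
    q = q_net(s).gather(1, a)                    # Q(s, a) for the taken actions
    with torch.no_grad():                        # y = r + γ (1 - done) max_a' Q'(s', a')
        y = r + GAMMA * (1 - done) * target_net(s_next).max(1, keepdim=True).values
    return nn.functional.mse_loss(q, y)

s = torch.randn(32, 10); a = torch.randint(0, 4, (32, 1))
r = torch.randn(32, 1); done = torch.zeros(32, 1)
print(td_loss(s, a, r, torch.randn(32, 10), done).item())
```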
Chenyuan Zhang, Wenjun Xu, Jiayi Liu, Zhihao Liu, Zude Zhou, Duc Truong Pham
International Journal of Computer Integrated Manufacturing 2021
The digital twin-based manufacturing system is a typical representative of smart manufacturing and has a number of advantages beyond the state of the art. However, when a manufacturing system needs to be reconfigured to meet new production requirements, manual reconfiguration is time-consuming and labor-intensive because of the complexity of the digital twin-based manufacturing system and the imperfection of the related models. The problem is even worse when the manufacturing system contains industrial robots, which have complex functions and inflexible programming. This paper presents a five-dimensional fusion model of a digital twin virtual entity for robotics-based smart manufacturing systems to support automatic reconfiguration; the model not only realistically describes physical manufacturing resources but also represents the capabilities and dependencies of the digital twins. Reconfigurable strategies based on service function blocks, which improve the reusability of functions and algorithms, are proposed to make the robotics-based manufacturing system satisfy various reconfiguration requirements of different granularities and goals. Finally, a prototype system is developed to demonstrate the performance of the reconfigurable digital twin-based manufacturing system, which improves the operational efficiency of such systems in carrying out reconfiguration of production tasks in a flexible way.
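A minimal data-structure sketch of a five-dimensional digital twin, assuming the commonly cited dimensions (physical entity, virtual model, services, twin data, connections); the paper's exact fusion model may differ. Service function blocks are modelled as swappable callables to reflect the reconfigurable strategy.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class DigitalTwin:
    physical_entity: str                                    # e.g. robot cell id
    virtual_model: dict                                     # kinematics, layout
    services: Dict[str, Callable] = field(default_factory=dict)
    twin_data: list = field(default_factory=list)           # runtime records
    connections: Dict[str, str] = field(default_factory=dict)

    def reconfigure(self, name: str, block: Callable):
        self.services[name] = block                         # swap a function block

dt = DigitalTwin("cell-01", {"dof": 6})
dt.reconfigure("pick", lambda pose: f"pick at {pose}")      # new service block
print(dt.services["pick"]((0.1, 0.2, 0.3)))
```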
Xi Vincent Wang, Jaume Soriano Pinter, Zhihao Liu, Lihui Wang
54th CIRP Conference on Manufacturing Systems 2021
Driven by the boom of machine learning research in recent years, advanced technologies bring new possibilities to robotic assembly systems. Machine learning-based image processing methods show promising potential to tackle challenges in the assembly process, e.g. object recognition, locating and trajectory planning. Accurate and robust methodologies are needed to guarantee the performance of assembly tasks. In this research, a machine learning-based image processing method is proposed for the robotic assembly system. It is capable of detecting and locating assembly components from low-cost image inputs and of manipulating the industrial robot automatically. A geometry library is also developed as an optional hybrid method for achieving accurate recognition results when needed. The proposed approach is validated and evaluated via case studies.
Zhenrui Ji, Quan Liu, Wenjun Xu, Zhihao Liu, Bitao Yao, Bo Xiong
IEEE 16th International Conference on Automation Science and Engineering (CASE) 2020
Industrial human-robot collaboration (HRC) is a promising production mode that enables humans and robots to complete a joint set of tasks in a shared workplace. In this context, to facilitate efficient and safe collaboration, an industrial robot needs to understand its human teammate's behavior and perform human-aware motion planning. However, the systematic theoretical treatment of this subject is limited. Shared autonomy allows human intervention in the control loop of an autonomous robot to achieve common human-robot goals. This paper presents a shared-autonomy framework for the industrial HRC context. From the perspective of shared autonomy, since the intention behind human behavior is only partially observable, we formalize human-aware motion planning as a Partially Observable Markov Decision Process (POMDP), in which the robot addresses sequential decision-making under uncertainty about the human's intention. Moreover, the shared-autonomy framework and its detailed systematic enabling approaches for human-aware motion planning are presented. The feasibility of the presented framework and approaches is validated in a case study of an HRC assembly scenario, achieving more fluent and safer collaboration.
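A minimal sketch of the POMDP belief update underlying human-aware planning: the robot maintains a distribution over the human's hidden intention and updates it after each action/observation pair via b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). The uniform toy models are assumptions.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    # T[s, a, s']: transition model; O[s', a, o]: observation model
    b_pred = T[:, a, :].T @ b          # predict over hidden intentions
    b_new = O[:, a, o] * b_pred        # weight by observation likelihood
    return b_new / b_new.sum()         # normalize to a distribution

n_s, n_a, n_o = 3, 2, 2
T = np.full((n_s, n_a, n_s), 1.0 / n_s)      # toy transition model
O = np.full((n_s, n_a, n_o), 1.0 / n_o)      # toy observation model
b = np.array([0.5, 0.3, 0.2])                # belief over human intentions
print(belief_update(b, a=0, o=1, T=T, O=O))
```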
Wenjun Xu, Quan Tang, Jiayi Liu, Zhihao Liu, Zude Zhou
Robotics and Computer-Integrated Manufacturing 2020
Remanufacturing helps to improve resource utilization and reduce manufacturing cost. Disassembly is a key step of remanufacturing and is typically performed by either manual labor or robots. Manual disassembly has low efficiency and high labor cost, while robotic disassembly is not flexible enough to handle complex disassembly tasks. Therefore, human-robot collaboration for disassembly (HRCD) is proposed to finish the disassembly process in remanufacturing flexibly and efficiently. Before the disassembly process is executed, disassembly sequence planning (DSP), which finds the optimal disassembly sequence, helps to improve disassembly efficiency. In this paper, DSP for human-robot collaboration (HRC) is solved by a modified discrete Bees algorithm based on Pareto optimality (MDBA-Pareto). Firstly, a disassembly model is built to generate feasible disassembly sequences. Then, the disassembly tasks are classified according to disassembly difficulty. Afterward, candidate solutions of DSP for HRC are generated and evaluated. To minimize disassembly time, cost and difficulty, MDBA-Pareto searches for the optimal solutions. Case studies on a simplified computer case verify the proposed method. The results show that the proposed method can solve DSP for HRC in remanufacturing and outperforms three other optimization algorithms in solution quality.
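A minimal sketch of the Pareto-dominance test at the heart of multi-objective search such as MDBA-Pareto: one sequence dominates another if it is no worse on all minimized objectives (here time, cost, difficulty) and strictly better on at least one. The candidate values are toy numbers.

```python
def dominates(u, v):
    # u, v: objective tuples (disassembly time, cost, difficulty), minimized
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

candidates = [(120, 8.0, 3), (100, 9.5, 2), (130, 7.0, 4), (100, 9.5, 5)]
print(pareto_front(candidates))   # non-dominated disassembly sequences
```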
Zhihao Liu, Xinran Wang, Yijie Cai, Wenjun Xu, Quan Liu, Zude Zhou, Duc Truong Pham
Computers & Industrial Engineering 2020
To enhance flexibility and sustainability, human-robot collaboration is becoming a major feature of next-generation robots. With the removal of physical safety barriers, the safety assessment strategy is the first and most crucial issue to consider. This paper determines a set of safety indicators and establishes an assessment model based on the latest safety-related ISO standards and manufacturing conditions. A dynamically modified SSM (speed and separation monitoring) method is presented to ensure the safety of human-robot collaboration while keeping productivity as high as possible. A prototype system including dynamic risk assessment and safe motion control is developed based on the virtual model of the robot and human skeleton point data from a vision sensor. The real-time risk status of the working robot can be monitored, and the risk field around the robot is visualized in an augmented reality environment to ensure safe human-robot collaboration. The system is experimentally validated on a human-robot collaboration cell with a six-degree-of-freedom industrial robot.
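A simplified sketch of the speed-and-separation-monitoring check in the spirit of ISO/TS 15066: the protective separation distance combines the distance covered by the human and the robot during the reaction and stopping phases, plus an intrusion constant. The braking approximation and all numeric values are assumptions; the paper's dynamic risk field is not reproduced.

```python
def protective_distance(v_h, v_r, t_reaction, t_stop, c_intrusion=0.1):
    s_h = v_h * (t_reaction + t_stop)     # human travel while robot reacts/stops
    s_r = v_r * t_reaction                # robot travel during reaction time
    s_s = v_r * t_stop / 2.0              # robot travel while braking (approx.)
    return s_h + s_r + s_s + c_intrusion  # metres

def is_safe(separation, v_h=1.6, v_r=0.5, t_reaction=0.1, t_stop=0.3):
    return separation >= protective_distance(v_h, v_r, t_reaction, t_stop)

print(is_safe(1.2))   # e.g. slow down or stop the robot when this is False
```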
Lan Li, Wenjun Xu, Zhihao Liu, Bitao Yao, Zude Zhou, Duc Truong Pham
International Manufacturing Science and Engineering Conference (MSEC) 2019
Industrial robots can become intelligent mechanical agents by integrating programs, intelligent algorithms and intelligent manufacturing models from the cyber world into their physical entities. With the introduction of the cloud, their storage, computing, knowledge-sharing and evolution capabilities are further strengthened. The digital twin is an effective means of fusing the physical and information worlds. It is therefore feasible to introduce the digital twin into industrial cloud robotics (ICR) to facilitate control optimization of the robots' running state. Traditional manufacturing task-oriented service composition, such as Robotic Control as a Cloud Service (RCaaCS), is limited to execution in the cloud and is separated from the underlying robot equipment control, which greatly reduces the real-time performance and accuracy of the underlying service response. Therefore, this paper proposes a digital twin-based control approach for ICR. At the manufacturing cell level, the robots' control instructions are modeled as services, and the control services in the digital world are mapped to robot action control in the physical world through the digital twin. The operational data accumulated in the physical world is fed back to the digital world as a reference for simulation and control strategy adjustment, finally achieving the integration of cloud services and robot control. A case study based on workpiece disassembly verifies the availability and effectiveness of the proposed control approach.
Chenyuan Zhang, Wenjun Xu, Jiayi Liu, Zhihao Liu, Zude Zhou, Duc Truong Pham
11th CIRP Conference on Industrial Product-Service Systems 2019 (Best Application Paper, Oral)
The emergence of the digital twin enables real-time interaction and integration between the physical world and the information world. Digital twin-based manufacturing systems, as a typical representative of smart manufacturing, have a set of advantages beyond traditional ones, such as verifying and predicting manufacturing system performance through the operation of a virtual counterpart. This paper presents a five-dimensional digital twin modeling approach for manufacturing systems, which not only realizes the mapping between the physical and virtual twins but also allows some capabilities and dependencies of the digital twins to be derived. A reconfigurable strategy, based on an expandable model structure and reserved interfaces for objective functions and optimization algorithms, is proposed to make the digital twin-based manufacturing system satisfy various reconfiguration requirements of different granularities and targets. Finally, a prototype system is developed to demonstrate the performance of the reconfigurable digital twin-based manufacturing system, which improves the operational efficiency of such systems in carrying out reconfiguration of production tasks.
Yiwen Ding, Wenjun Xu, Zhihao Liu, Zude Zhou, Duc Truong Pham
11th CIRP Conference on Industrial Product-Service Systems 2019
Traditional disassembly methods, whether manual or robotic, can no longer cope with the complexity of products to be disassembled. The human-robot collaboration concept can therefore be introduced to realize a novel disassembly system with increased flexibility and adaptability. To facilitate efficient and smooth human-robot collaboration in disassembly, the disassembly system needs to be made more intelligent. In this paper, a robotic knowledge graph is proposed to assist operators who lack the relevant knowledge to complete a disassembly task. Using natural language processing, this paper extracts entities and relationships from disassembly data to build a knowledge base in the form of a knowledge graph. Combined with graph-based knowledge representation, a prototype system is developed for humans to acquire, analyze and manage disassembly knowledge. Finally, a case study demonstrates that the proposed robotic knowledge graph yields savings in disassembly time, idle time and human workload, and that it can assist human operators in disassembly by providing humans and robots with the various kinds of knowledge they need.
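A minimal sketch of turning extracted (entity, relation, entity) triples into a queryable disassembly knowledge graph with networkx; extract_triples() is a stand-in for the paper's NLP extraction step, and the triples are toy examples.

```python
import networkx as nx

def extract_triples(text):
    # stand-in for NLP-based entity/relation extraction
    return [("screw", "fastens", "cover"), ("cover", "blocks", "fan")]

kg = nx.DiGraph()
for head, rel, tail in extract_triples("disassembly manual text ..."):
    kg.add_edge(head, tail, relation=rel)

# answer "what must be removed before the fan?" via graph predecessors
print(list(kg.predecessors("fan")))   # -> ['cover']
```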
Zitong Liu, Quan Liu, Wenjun Xu, Zhihao Liu, Zude Zhou, Jia Chen
11th CIRP Conference on Industrial Product-Service Systems 2019
Interest in human-robot collaboration (HRC) for intelligent manufacturing service systems is gradually increasing. Fluent human-robot coexistence in manufacturing requires accurate estimation of human motion intention so that the efficiency and safety of HRC can be guaranteed. Traditional motion prediction solutions mainly define human motion as the sequential positions of skeleton joints, which neglects the tools or product components held in the hand. Context-aware temporal processing is the key to evaluating human motion before it is completed, saving time and recognizing the human's intention. In this paper, a deep learning system combining a convolutional neural network (CNN) and a long short-term memory network (LSTM) over vision signals is explored to predict human motion accurately. This paper utilizes the LSTM to extract temporal patterns of human motion automatically, outputting the prediction before the motion is completed. Not only does the end-to-end design avoid complex feature extraction, but it also provides natural interaction between human and robot without wearable devices or tags that may burden the human. A case study of desktop computer disassembly demonstrates the feasibility of the proposed method. Experimental results show that our method outperforms the other three methods in prediction accuracy.
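A minimal PyTorch sketch of the CNN+LSTM pattern described above: a CNN encodes each frame, an LSTM aggregates the sequence, and a linear head predicts the motion class from the frames seen so far. All layer sizes and the 5-class output are toy assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.lstm = nn.LSTM(16 * 4 * 4, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, clips):                 # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)             # temporal pattern extraction
        return self.head(out[:, -1])          # predict from partial observation

logits = CNNLSTM()(torch.randn(2, 10, 3, 64, 64))
print(logits.shape)                           # torch.Size([2, 5])
```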
Quan Liu, Zhihao Liu, Wenjun Xu, Quan Tang, Zude Zhou, Duc Truong Pham
International Journal of Production Research 2019
Sustainable manufacturing is a global front-burner issue oriented to the sustainable development of humanity and society. In this context, this paper takes human-robot collaborative disassembly (HRCD) as its topic, focusing on its contribution to economic, environmental and social sustainability. A detailed enabling systematic implementation for HRCD is presented, combined with a set of advanced technologies such as cyber-physical production systems (CPPS) and artificial intelligence (AI). It covers five aspects, namely perception, cognition, decision, execution and evolution, to address the dynamics, uncertainties and complexities in disassembly. Deep reinforcement learning, incremental learning and transfer learning are also investigated within the systematic approaches for HRCD. The case study demonstrates multi-modal perception of the robot system and the human body in a hybrid human-robot collaborative disassembly cell, sequence planning for an HRCD task, a distance-based safety strategy and a motion-driven control method, showing the high feasibility and effectiveness of the proposed approaches for HRCD and verifying the functionalities of the systematic framework.
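The case study mentions a distance-based safety strategy. A minimal sketch of that idea, assuming a common speed-scaling scheme in which the robot's speed factor ramps with human-robot separation; the thresholds are illustrative, not the paper's values:

```python
# Hedged sketch of a distance-based safety strategy: scale the robot's
# commanded speed with the measured human-robot separation.
# Thresholds below are illustrative, not taken from the paper.

def speed_scale(distance_m, stop_dist=0.3, full_dist=1.2):
    """Return a speed factor in [0, 1] given human-robot distance in metres."""
    if distance_m <= stop_dist:          # too close: stop
        return 0.0
    if distance_m >= full_dist:          # far enough: full speed
        return 1.0
    # Linear ramp between the stop and full-speed distances.
    return (distance_m - stop_dist) / (full_dist - stop_dist)

for d in (0.2, 0.5, 0.9, 1.5):
    print(f"d={d:.1f} m -> speed factor {speed_scale(d):.2f}")
```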
Lixue Jin, Wenjun Xu, Zhihao Liu, Junwei Yan, Zude Zhou, Duc Truong Pham
International Manufacturing Science and Engineering Conference (MSEC) 2018
Industrial Cloud Robotics (ICR), with characteristics such as resource sharing, lower cost and convenient access, can realize knowledge interaction and coordination among cloud robots (CR) through a knowledge sharing mechanism. However, current research mainly focuses on knowledge sharing for service-oriented robots and knowledge updating for a single robot. Interaction and collaboration among robots in a cloud environment still face challenges, such as improper knowledge updating, inconvenient online data processing and inflexible sharing mechanisms. In addition, industrial robots (IR) lack a well-developed knowledge management framework to facilitate their knowledge evolution. In this paper, a knowledge evolution mechanism for ICR based on a knowledge acquisition - interactive sharing - iterative updating approach is established, and a novel architecture for ICR knowledge sharing is developed. Moreover, the semantic knowledge in the robot system can encapsulate knowledge of manufacturing tasks, robot models and scheme decisions into the cloud manufacturing process. As new manufacturing tasks arrive, the robot platform downloads task-oriented knowledge models from the cloud service platform, then selects the optimal service composition and updates the cloud knowledge through simulation iterations. Finally, the feasibility and effectiveness of the proposed architecture and approaches are demonstrated through case studies.
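A toy sketch of the "select the optimal service composition" step the abstract mentions. The candidate services, their time/quality estimates and the scoring weights are all hypothetical; the paper does not publish a concrete scoring formula:

```python
# Illustrative sketch of selecting a service composition from downloaded
# task-oriented knowledge. Services, estimates and weights are hypothetical.
import itertools

# Candidate robot services for two sub-tasks, with (time_s, quality) estimates.
services = {
    "grasp": [("grasp_A", 4.0, 0.90), ("grasp_B", 6.0, 0.97)],
    "place": [("place_A", 3.0, 0.85), ("place_B", 5.0, 0.95)],
}

def score(composition, w_time=0.5, w_quality=0.5):
    total_time = sum(t for _, t, _ in composition)
    avg_quality = sum(q for _, _, q in composition) / len(composition)
    # Lower total time is better, higher quality is better (toy weighting).
    return w_quality * avg_quality - w_time * total_time / 10.0

best = max(itertools.product(*services.values()), key=score)
print("selected composition:", [name for name, _, _ in best])
```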
Sida Yang, Wenjun Xu, Zhihao Liu, Zude Zhou, Duc Truong Pham
IEEE 15th International Conference on Networking, Sensing and Control (ICNSC) 2018 Oral
Nowadays, human-robot collaboration plays an increasingly important role in manufacturing. To collaborate with humans efficiently and safely, industrial robots used in production need to perceive and acquire the condition of human workers and the surrounding environment, and vision perception is one of the most effective approaches to realize such functions since it is contactless and economical. In this paper, a multi-source heterogeneous vision perception framework is proposed to acquire condition information about human workers and the working environment during human-robot collaboration in manufacturing. The multi-source data are captured by multiple heterogeneous vision sensors deployed in different physical positions, e.g. RGB-D cameras (cameras with color and depth output) around the working area, which produce 3D point cloud data, and binocular cameras on the workbench, which track the worker's hands. Moreover, a novel data fusion method is developed to process the multi-source vision data and support the perception function of the industrial robots. The experimental results demonstrate that the proposed approach achieves an expansive, occlusion-free perception of the environment and the workers, including hand status, at a high average frame rate.
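One standard ingredient of multi-camera fusion is registering each sensor's point cloud into a common world frame via its extrinsic transform. A minimal sketch of that step; the transforms below are illustrative stand-ins for calibration results, and the paper's actual fusion method is not reproduced:

```python
# Minimal sketch of fusing point clouds from multiple RGB-D cameras into a
# common world frame using per-camera 4x4 extrinsic transforms.
import numpy as np

def to_world(points, T):
    """Apply a 4x4 extrinsic transform T to an (N, 3) point cloud."""
    homog = np.hstack([points, np.ones((len(points), 1))])   # (N, 4)
    return (homog @ T.T)[:, :3]

# Camera 1 at the world origin; camera 2 translated 2 m along x (illustrative).
T1 = np.eye(4)
T2 = np.eye(4); T2[0, 3] = 2.0

cloud1 = np.random.rand(100, 3)            # stand-ins for sensor output
cloud2 = np.random.rand(100, 3)

fused = np.vstack([to_world(cloud1, T1), to_world(cloud2, T2)])
print(fused.shape)                          # (200, 3)
```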
Zhihao Liu, Quan Liu, Wenjun Xu, Zude Zhou, Duc Truong Pham
51st CIRP Conference on Manufacturing Systems 2018 Honorary Paper Oral
Human-robot collaborative manufacturing (HRC-Mfg) is an innovative production mode; however, the theoretical explanation of its collaboration mechanisms is currently limited. Given the dynamics and uncertainties of the manufacturing environment, such an explanation is also crucial for both task allocation and decision-making. From the perspective of cyber-physical production systems, and based on bilateral games and clan games, this paper presents the characteristics of HRC-Mfg and demonstrates the applicability of cooperative games in such a system. Moreover, we develop a framework and approach that describe how the mechanism works in detail. The case study shows that it can dynamically arrange procedures and maximize the production benefit.
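As a generic worked example of cooperative-game benefit allocation (the paper itself builds on bilateral and clan games rather than this), the sketch below computes the Shapley value for a two-player human-robot coalition with a hypothetical characteristic function:

```python
# Toy cooperative-game allocation: Shapley value for a two-player
# human-robot game. The characteristic function v is hypothetical.
from itertools import permutations

players = ["human", "robot"]
# Benefit of each coalition (toy numbers): collaboration beats working alone.
v = {(): 0.0, ("human",): 4.0, ("robot",): 3.0, ("human", "robot"): 10.0}

def value(coalition):
    return v[tuple(sorted(coalition))]

shapley = {p: 0.0 for p in players}
for order in permutations(players):
    seen = []
    for p in order:
        # Marginal contribution of p when joining the coalition `seen`.
        shapley[p] += value(seen + [p]) - value(seen)
        seen.append(p)
for p in shapley:
    shapley[p] /= len(list(permutations(players)))

print(shapley)   # {'human': 5.5, 'robot': 4.5}
```

The allocation splits the extra benefit of collaborating (10 versus 4 + 3) evenly across the two players' stand-alone values, which is the fairness property that makes cooperative-game formulations attractive for task allocation.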
Zhihao Liu, Quan Liu, Wenjun Xu, Aiming Liu, Zude Zhou, Duc Truong Pham
International Conference on Innovative Design and Manufacturing 2017
Industrial robots have become one of the focal points among manufacturing equipment. However, in many existing production environments, robots cannot fully replace human operators, so the study of high-efficiency human-robot collaborative manufacturing has become increasingly important. In this context, this paper presents a multi-mode perception framework for human-robot collaborative manufacturing that realizes multi-source data collection, processing and analysis for the internal kinetic parameters of industrial robots, collaborative manufacturing tasks, the status of the production job flow, etc. Moreover, the enabling technologies crucial to the proposed framework, including smart energy consumption sensing, machine-vision-based contactless time-of-flight (TOF) cameras and multi-source data processing, are discussed, followed by an introduction to multi-mode data fusion and analysis, ultimately supporting human-robot collaborative job control and task scheduling in production processes.
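A first step in any such multi-mode fusion is aligning heterogeneous streams by timestamp before joint analysis. A hedged sketch under that assumption; the stream contents, rates and values are illustrative, not from the paper:

```python
# Hedged sketch of aligning multi-source perception streams (e.g. robot
# energy samples and camera frames) by timestamp before joint analysis.
import bisect

energy = [(0.00, 110.5), (0.05, 112.1), (0.10, 108.9)]   # (t_s, watts)
frames = [(0.02, "frame_0"), (0.07, "frame_1")]           # (t_s, image id)

def nearest(stream, t):
    """Return the stream sample whose timestamp is closest to t."""
    times = [s[0] for s in stream]
    i = bisect.bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

# Attach the nearest energy reading to each camera frame.
for t, frame in frames:
    print(frame, "->", nearest(energy, t))
```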