Zhihao Liu
Postdoctoral Researcher, KTH Royal Institute of Technology
Postdoc Fellow, Centre of Excellence in Production Research (XPRES), Sweden

Zhihao Liu is currently a Postdoctoral Researcher at KTH Royal Institute of Technology and a Postdoc Fellow at XPRES, the Centre of Excellence in Production Research in Sweden. Before joining KTH, he earned his Ph.D. in information and communication engineering from Wuhan University of Technology (WUT) in 2023. During his Ph.D. studies, he was a guest doctoral student at KTH from 2019 to 2021. He completed his master's and bachelor's degrees at WUT in 2018 and 2016, respectively. His research interests include Industry 5.0, digital twins and the metaverse, embodied AI, human-robot collaboration, robot learning, neural information processing, and human-compatible AI.

Zhihao Liu received the Outstanding Doctoral Thesis Award and the National Scholarship, the highest scholarship for Ph.D. students in China.

He is a member of IEEE and ACM, the ACM Europe Technology Policy Committee, and the IEEE Technical Committee on Robot Learning.

Curriculum Vitae

Education
  • Wuhan University of Technology
    School of Information Engineering
    Ph.D., Information & Communication Engineering
    Sep. 2018 - Jun. 2023
  • KTH Royal Institute of Technology
    Department of Production Engineering
    Guest Ph.D., Robotics & Artificial Intelligence
    Sep. 2019 - Aug. 2021
  • Wuhan University of Technology
    School of Information Engineering
    M.Eng, Information & Communication Engineering
    Sep. 2016 - Jun. 2018
  • Wuhan University of Technology
    School of Information Engineering
    B.Eng, Communication Engineering
    Sep. 2012 - Jun. 2016
Experience
  • KTH Royal Institute of Technology
    Postdoctoral Researcher
    Aug. 2023 - now
  • Centre of Excellence in Production Research, Sweden
    Postdoc Fellow
    Aug. 2023 - now
  • Wuhan University of Technology
    Research Assistant
    Sep. 2015 - Jun. 2023
Honors & Awards
  • Outstanding Doctoral Thesis Award
    2023
  • National Scholarship for Ph.D.
    2021
  • 1st-class Scholarship for Ph.D.
    2021
  • Best Application Paper Award
    2019
  • Honorary Paper
    2018
  • 1st-class Scholarship for Ph.D. Freshmen
    2018
  • 1st-class Scholarship for Master Students
    2017
  • Outstanding Master Students
    2017
  • Outstanding Scholarship (above the 1st-class) for Master Freshmen
    2016
  • Outstanding Bachelor Graduate
    2016
  • Outstanding Bachelor Student (Additional Extracurricular Raise Plan)
    2016
  • Merit Bachelor Student
    2015
  • Special Prize (above 1st Prize), TI (Texas Instruments) Undergraduate Electronics Design Contest
    2014
News
2024
Our paper A human-inspired slow-fast dual-branch method for product quality prediction of complex manufacturing processes with hierarchical variations got accepted to AEI!
Dec 03
Our paper Vision-Language-Conditioned Learning Policy for Robotic Manipulation got accepted to the CoRL 2024 Workshop!
Nov 19
Our work Data-efficient multimodal human action recognition for proactive human-robot collaborative assembly: A cross-domain few-shot learning approach got accepted to RCIM!
Mar 22
I won the Outstanding Doctoral Thesis Award!
Jan 30
2023
Our work Adaptive real-time similar repetitive manual procedure prediction and robotic procedure generation for human-robot collaboration got accepted to AEI! Featured
Aug 05
Selected Publications
Vision-Language-Conditioned Learning Policy for Robotic Manipulation

Sichao Liu, Zhihao Liu, Lihui Wang, Xi Vincent Wang

Conference on Robot Learning, Workshop on Language and Robot Learning: Language as an Interface 2024

Humans often use natural language instructions to control and interact with robots for task execution. This presents a significant challenge for robots, as they must not only comprehend human commands and link them with robot actions but also maintain a semantic understanding of the operating scenes and environments. To address this challenge, we present a vision-language-conditioned learning policy for robotic manipulation. Given a scene image, we utilise a vision-language model (GPT-4o) to obtain a semantic understanding of the scene and to detect and ground its constituent objects. Our approach takes a limited set of images to reveal the spatial-temporal relationships of the objects. Then, we develop a GPT-o1-driven approach to perform the logical reasoning behind language tasks and to generate high-level control code. With the establishment of 6D pose estimates, a language-perception-action method is proposed to link language instructions with robot behaviours for robotic manipulation. The performance of the developed approach is experimentally validated through industrial object manipulation.
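As a loose illustration of the language-perception-action linking described above, the sketch below mocks the vision-language grounding step with a fixed detection table and maps an instruction to a high-level pick command. All object names, poses, and function names are hypothetical; the paper itself uses GPT-4o for grounding and GPT-o1 for control code generation.

```python
from dataclasses import dataclass

@dataclass
class Pose6D:
    x: float
    y: float
    z: float
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

def mock_vlm_grounding(scene_image):
    """Stand-in for the VLM-based detection, grounding and 6D pose step."""
    return {
        "gear": Pose6D(0.40, 0.10, 0.05),
        "housing": Pose6D(0.55, -0.05, 0.02),
    }

def instruction_to_action(instruction, groundings):
    """Match the instructed object against grounded objects and emit a
    high-level pick command (the paper generates control code instead)."""
    for name, pose in groundings.items():
        if name in instruction.lower():
            return ("pick", name, pose)
    return ("noop", None, None)

scene = None  # placeholder for a camera frame
action = instruction_to_action("Pick up the gear", mock_vlm_grounding(scene))
print(action[:2])  # ('pick', 'gear')
```

A real pipeline would replace the mock with model queries and hand the pose to a motion planner; the point here is only the instruction-to-behaviour linking.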

Data-efficient multimodal human action recognition for proactive human–robot collaborative assembly: A cross-domain few-shot learning approach

Tianyu Wang, Zhihao Liu, Lihui Wang, Mian Li, Xi Vincent Wang

Robotics and Computer-Integrated Manufacturing 2024

With the recent vision of Industry 5.0, the cognitive capability of robots plays a crucial role in advancing proactive human-robot collaborative assembly. As a basis of mutual empathy, the understanding of a human operator's intention has been studied primarily through human action recognition. Existing deep learning-based methods demonstrate remarkable efficacy in handling information-rich data such as physiological measurements and videos, where the latter represents a more natural perception input. However, deploying these methods in new, unseen assembly scenarios requires first collecting abundant case-specific data, which demands significant manual effort and limits flexibility. To address this issue, this paper proposes a novel cross-domain few-shot learning method for data-efficient multimodal human action recognition. A hierarchical data fusion mechanism is designed to jointly leverage skeletons, RGB images and depth maps with complementary information. A temporal CrossTransformer is then developed to enable action recognition with a very limited amount of data. Lightweight domain adapters are integrated to further improve generalization with fast finetuning. Extensive experiments on a real car engine assembly case show the superior performance of the proposed method over the state of the art in both accuracy and finetuning efficiency. Real-time demonstrations and an ablation study further indicate the potential of early recognition, which benefits robot procedure generation in practical applications. In summary, this paper contributes to the rarely explored realm of data-efficient human action recognition for proactive human-robot collaboration.
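For intuition only, a two-stage fusion of the three modalities can be sketched as below. The dimensions, random features, and linear-ReLU encoders are made up for illustration; the paper's hierarchical fusion mechanism is learned end-to-end.

```python
import numpy as np

def encode(x, w):
    """Stand-in per-modality encoder: linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
skeleton = rng.normal(size=(1, 75))   # e.g. 25 joints x 3 coordinates
rgb = rng.normal(size=(1, 512))       # pretend CNN feature of an RGB frame
depth = rng.normal(size=(1, 512))     # pretend CNN feature of a depth map

# stage 1: fuse the two visual streams; stage 2: append skeleton features
visual = np.concatenate([encode(rgb, rng.normal(size=(512, 64))),
                         encode(depth, rng.normal(size=(512, 64)))], axis=1)
fused = np.concatenate([visual, encode(skeleton, rng.normal(size=(75, 64)))],
                       axis=1)
print(fused.shape)  # (1, 192)
```

The fused vector would feed the downstream temporal model; fusing modalities at different stages lets complementary cues (geometry from skeletons, appearance from RGB, distance from depth) combine before classification.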

Adaptive real-time similar repetitive manual procedure prediction and robotic procedure generation for human-robot collaboration

Zhihao Liu, Quan Liu, Wenjun Xu, Lihui Wang, Zhenrui Ji

Advanced Engineering Informatics 2023

Manual procedure recognition and prediction are essential for practical human-robot collaboration in industrial tasks such as collaborative assembly. However, current research mostly focuses on diverse human motions, while the similar repetitive manual procedures that are prevalent in real production tasks are often overlooked. Furthermore, the dynamic uncertainty caused by human-robot interference and the need to generalise across individuals, scenarios, and multiple sensor deployments pose challenges for implementing manual procedure prediction and robotic procedure generation. To address these issues, this paper proposes a real-time human skeleton processing system oriented to similar repetitive procedures, employing the human skeleton as a robust modality. It utilises an improved deep spatial-temporal graph convolutional network and a FIFO queue-based discriminator for real-time data processing, procedure prediction, and generation. The proposed method is validated on multiple datasets with tens of individuals in a real, dynamic and uncertain human-robot collaborative assembly cell, and it runs on entry-level hardware. The results demonstrate competitive, handcrafted-feature-free performance with early prediction and generalisation across individual variance, environment background, camera position, lighting conditions, and stochastic interference in human-robot collaboration.

Robot learning towards smart robotic manufacturing: A review

Zhihao Liu, Quan Liu, Wenjun Xu, Lihui Wang, Zude Zhou

Robotics and Computer-Integrated Manufacturing 2022

Robotic equipment has played a central role since the proposal of smart manufacturing. Since the first integration of industrial robots into production lines, they have significantly enhanced productivity and relieved humans of heavy workloads. Towards the next generation of manufacturing, this review first introduces the comprehensive background of smart robotic manufacturing across robotics, machine learning, and robot learning. Definitions and categories of robot learning are summarised. Concretely, imitation learning, policy gradient learning, value function learning, actor-critic learning, and model-based learning are reviewed as the leading technologies in robot learning. Training tools, benchmarks, and comparisons among different robot learning methods are provided. Typical industrial applications in robotic grasping, assembly, process control, and industrial human-robot collaboration are listed and discussed. Finally, open problems and future research directions are summarised.

Deep reinforcement learning-based safe interaction for industrial human-robot collaboration using intrinsic reward functions

Quan Liu, Zhihao Liu, Bo Xiong, Wenjun Xu, Yang Liu

Advanced Engineering Informatics 2021

Aiming at human-robot collaboration in manufacturing, the operator's safety is the primary issue during manufacturing operations. This paper presents a deep reinforcement learning approach to real-time collision-free motion planning of an industrial robot for human-robot collaboration. First, the safe human-robot collaborative manufacturing problem is formulated as a Markov decision process, and a mathematical expression of the reward function design problem is given. The goal is for the robot to autonomously learn a policy that reduces the accumulated risk and assures the task completion time during human-robot collaboration. To transform the optimization objective into a reward function that guides the robot to learn the expected behaviour, a reward function optimizing approach based on the deterministic policy gradient is proposed to learn a parameterized intrinsic reward function. The reward function with which the agent learns the policy is the sum of the intrinsic and extrinsic reward functions. A deep reinforcement learning algorithm, intrinsic-reward deep deterministic policy gradient (IRDDPG), which combines the DDPG algorithm with the reward function optimizing approach, is then proposed to learn the expected collision avoidance policy. Finally, the proposed algorithm is tested in a simulation environment, and the results show that the industrial robot can learn the expected policy to ensure safety in industrial human-robot collaboration without missing the original target. Moreover, the reward function optimizing approach can compensate for the designed reward function and improve policy performance.
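The core construction, training reward as the sum of a hand-designed extrinsic term and a learned, parameterized intrinsic term, can be written down in a few lines. The linear intrinsic reward, its parameters, and the specific extrinsic shaping below are placeholders; the paper learns the intrinsic parameters with a deterministic policy gradient inside DDPG.

```python
import numpy as np

def extrinsic_reward(dist_to_human, reached_goal):
    """Hand-designed term: penalise proximity risk, reward completion."""
    return (1.0 if reached_goal else 0.0) - max(0.0, 0.5 - dist_to_human)

def intrinsic_reward(features, theta):
    """Learned term: modelled here as a linear function of state features."""
    return float(features @ theta)

theta = np.array([0.1, -0.2])    # intrinsic-reward parameters (placeholder)
features = np.array([0.3, 0.6])  # state features (placeholder)
r = extrinsic_reward(dist_to_human=0.4, reached_goal=False) \
    + intrinsic_reward(features, theta)
print(round(r, 3))  # -0.19
```

The agent's critic is trained on `r` rather than the extrinsic reward alone, which is how the learned intrinsic term can compensate for shortcomings of the hand-designed reward.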

Task-level decision-making for dynamic and stochastic human-robot collaboration based on dual agents deep reinforcement learning

Zhihao Liu, Quan Liu, Lihui Wang, Wenjun Xu, Zude Zhou

International Journal of Advanced Manufacturing Technology 2021

Human-robot collaboration, as a multidisciplinary research topic, is still pursuing enhanced robot intelligence that is more human-compatible and fits the dynamic and stochastic characteristics of humans. However, the uncertainties brought by the human partner challenge the robot's task planning and decision-making. For industrial tasks such as collaborative assembly, dynamics in the temporal dimension and stochasticity in the order of procedures must be further considered. In this work, we bring a new perspective and solution based on reinforcement learning, where the problem is regarded as training an agent for tasks in dynamic and stochastic environments. Concretely, an adapted training approach based on deep Q-learning is proposed. This method regards both the robot and the human as agents in the interactive training environment for deep reinforcement learning. With consideration of task-level industrial human-robot collaboration, the training logic and the agent-environment interaction are proposed. For the human-robot collaborative assembly tasks in the case study, we show that our method can drive the robot, represented by one agent, to collaborate with the human partner even when the human performs the task procedures in a random order.
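A tabular toy version of the dual-agent setup might look like the following: the robot agent learns which remaining procedure to take while the "human agent" acts randomly, so the robot must adapt to stochastic procedure orders. The procedures, reward, and Q-learning hyperparameters are invented for illustration; the paper uses deep Q-networks on a real assembly task.

```python
import random

random.seed(0)
PROCS = ("A", "B", "C")
Q = {}  # (frozenset(remaining procedures), action) -> value

def choose(remaining, eps=0.2):
    """Epsilon-greedy action selection over the remaining procedures."""
    if random.random() < eps:
        return random.choice(sorted(remaining))
    return max(sorted(remaining),
               key=lambda a: Q.get((frozenset(remaining), a), 0.0))

for _ in range(500):  # training episodes
    remaining = set(PROCS)
    while remaining:
        a = choose(remaining)                      # robot agent acts
        r = 1.0 if a == min(remaining) else -0.5   # toy reward: keep order
        key = (frozenset(remaining), a)
        Q[key] = Q.get(key, 0.0) + 0.1 * (r - Q.get(key, 0.0))
        remaining.discard(a)
        if remaining:                              # human agent acts randomly
            remaining.discard(random.choice(sorted(remaining)))

best = max(PROCS, key=lambda a: Q.get((frozenset(PROCS), a), 0.0))
print(best)  # the learned first choice; 'A' under this toy reward
```

The random human removals are the stochasticity the robot has to absorb: its Q-values are learned over whatever task states the human's behaviour produces, not over a fixed procedure order.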

Digital twin-enabled reconfigurable modeling for smart manufacturing systems

Chenyuan Zhang, Wenjun Xu, Jiayi Liu, Zhihao Liu, Zude Zhou, Duc Truong Pham

International Journal of Computer Integrated Manufacturing 2021

The digital twin-based manufacturing system is a typical representative of smart manufacturing and has a number of advantages beyond the state of the art. However, when a manufacturing system needs to be reconfigured to meet new production requirements, manual reconfiguration is time-consuming and labor-intensive because of the complexity of the digital twin-based manufacturing system and the imperfection of related models. The problem is even worse when the manufacturing system contains industrial robots with complex functions and inflexible programming. This paper presents a five-dimensional fusion model of a digital twin virtual entity for robotics-based smart manufacturing systems to support automatic reconfiguration, which not only realistically describes physical manufacturing resources but also represents the capabilities and dependencies of the digital twins. Reconfigurable strategies based on service function blocks, which improve the reusability of functions and algorithms, are proposed to make the robotics-based manufacturing system satisfy reconfigurable requirements of different granularities and goals. Finally, a prototype system is developed to demonstrate the performance of the reconfigurable digital twin-based manufacturing system, which can improve the operational efficiency of such systems in carrying out reconfigured production tasks in a flexible way.

Dynamic risk assessment and active response strategy for industrial human-robot collaboration

Zhihao Liu, Xinran Wang, Yijie Cai, Wenjun Xu, Quan Liu, Zude Zhou, Duc Truong Pham

Computers & Industrial Engineering 2020

To enhance flexibility and sustainability, human-robot collaboration is becoming a major feature of next-generation robots. With the removal of the safety barrier, the safety assessment strategy is the first and most crucial issue to consider. This paper determines a set of safety indicators and establishes an assessment model based on the latest safety-related ISO standards and manufacturing conditions. A dynamically modified SSM (speed and separation monitoring) method is presented for ensuring the safety of human-robot collaboration while keeping productivity as high as possible. A prototype system including dynamic risk assessment and safe motion control is developed based on the virtual model of the robot and human skeleton point data from a vision sensor. The real-time risk status of the working robot can be monitored, and the risk field around the robot is visualized in an augmented reality environment to ensure safe human-robot collaboration. The system is experimentally validated on a human-robot collaboration cell using an industrial robot with six degrees of freedom.
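In the spirit of speed and separation monitoring (ISO/TS 15066), a simplified protective-separation-distance check can be sketched as follows. This is a coarse decomposition with illustrative numbers, not the paper's assessment model; real deployments use certified stopping-performance data and additional uncertainty terms.

```python
def protective_distance(v_h, v_r, t_reaction, t_stop, d_intrusion, unc):
    """Conservative bound on the minimum human-robot separation.
    v_h: human approach speed, v_r: robot speed toward the human,
    t_reaction: system reaction time, t_stop: robot stopping time,
    d_intrusion: intrusion distance term, unc: position uncertainty terms."""
    human_travel = v_h * (t_reaction + t_stop)
    robot_travel = v_r * (t_reaction + t_stop)
    return human_travel + robot_travel + d_intrusion + unc

def safe(separation, **kwargs):
    """True if the measured separation exceeds the protective distance."""
    return separation >= protective_distance(**kwargs)

print(safe(separation=0.8, v_h=1.6, v_r=0.5, t_reaction=0.1,
           t_stop=0.3, d_intrusion=0.1, unc=0.05))  # False
```

When the check fails, an SSM controller reduces the robot's speed (shrinking `v_r` and `t_stop`) until the inequality holds again, which is how productivity is preserved instead of triggering a hard stop.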

A Reconfigurable Modeling Approach for Digital Twin-based Manufacturing System

Chenyuan Zhang, Wenjun Xu, Jiayi Liu, Zhihao Liu, Zude Zhou, Duc Truong Pham

11th CIRP Conference on Industrial Product-Service Systems, 2019 (Best Application Paper, Oral)

The emergence of the digital twin enables real-time interaction and integration between the physical world and the information world. Digital twin-based manufacturing systems, as a typical representative of smart manufacturing, have a set of advantages beyond traditional ones, such as verifying and predicting manufacturing system performance based on the operation of a virtual counterpart. This paper presents a five-dimensional digital twin modeling approach for manufacturing systems, which not only realizes the mapping between the physical and virtual twins but also allows some of the capabilities and dependencies of the digital twins to be derived. A reconfigurable strategy, based on the expandable model structure and the reserved interfaces of objective functions and optimization algorithms, is proposed to make the digital twin-based manufacturing system satisfy reconfigurable requirements of different granularities and targets. Finally, a prototype system is developed to demonstrate the performance of the reconfigurable digital twin-based manufacturing system, which can improve the operational efficiency of such systems in carrying out reconfigured production tasks.

Human-robot collaboration in disassembly for sustainable manufacturing

Quan Liu, Zhihao Liu, Wenjun Xu, Quan Tang, Zude Zhou, Duc Truong Pham

International Journal of Production Research 2019

Sustainable manufacturing is a global front-burner issue oriented toward the sustainable development of humanity and society. In this context, this paper examines human-robot collaborative disassembly (HRCD) and its contribution to economic, environmental and social sustainability. A detailed systematic implementation for HRCD is presented, combining a set of advanced technologies such as cyber-physical production systems (CPPS) and artificial intelligence (AI); it covers five aspects, namely perception, cognition, decision, execution and evolution, to address the dynamics, uncertainties and complexities of disassembly. Deep reinforcement learning, incremental learning and transfer learning are also investigated within the systematic approaches for HRCD. The case study presents experimental results on multi-modal perception of the robot system and the human body in a hybrid human-robot collaborative disassembly cell, sequence planning for an HRCD task, a distance-based safety strategy and a motion-driven control method; these results demonstrate the feasibility and effectiveness of the proposed approaches for HRCD and verify the functionalities of the systematic framework.
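
A distance-based safety strategy of the kind the case study mentions can be sketched as speed scaling with human-robot separation: stop inside a minimum distance, run at full speed beyond a clearance distance, and ramp linearly in between. The thresholds and the linear ramp below are illustrative assumptions, not the paper's actual parameters:

```python
def scaled_speed(distance_m: float,
                 v_max: float = 1.0,      # full robot speed, m/s (assumed)
                 stop_dist: float = 0.3,  # stop if human closer than this (assumed)
                 full_dist: float = 1.5   # full speed beyond this (assumed)
                 ) -> float:
    """Scale robot speed linearly with human-robot distance."""
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= full_dist:
        return v_max
    return v_max * (distance_m - stop_dist) / (full_dist - stop_dist)

print(scaled_speed(0.2))  # 0.0  (human too close: stop)
print(scaled_speed(2.0))  # 1.0  (clear workspace: full speed)
print(scaled_speed(0.9))  # 0.5  (halfway through the buffer zone)
```

In practice the distance input would come from the multi-modal perception layer (e.g. a depth camera tracking the human body), and the output would feed the robot's velocity controller.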

Human-Robot Collaborative Manufacturing Using Cooperative Game: Framework and Implementation

Zhihao Liu, Quan Liu, Wenjun Xu, Zude Zhou, Duc Truong Pham

51st CIRP Conference on Manufacturing Systems 2018 Honorary Paper Oral

Human-robot collaborative manufacturing (HRC-Mfg) is an innovative production mode; however, theoretical explanations of its collaboration mechanisms are currently limited. Such explanations are also crucial for task allocation and decision-making, given the dynamics and uncertainties of the manufacturing environment. From the perspective of cyber-physical production systems, and based on the bilateral game and the clan game, this paper presents the characteristics of HRC-Mfg and demonstrates the applicability of cooperative games in such systems. Moreover, we develop a framework and approach that describe in detail how the mechanism works. The case study shows that it can dynamically arrange procedures and maximize the production benefit.
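
As one illustration of how a cooperative game can share production benefit in a human-robot coalition, the sketch below computes Shapley values over all join orders. The Shapley value is a standard cooperative-game solution concept used here for illustration, not the paper's bilateral/clan-game formulation, and the coalition benefits are invented numbers:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all join orders (fine for small coalitions)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Marginal benefit of p joining the current coalition.
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: phi[p] / n_orders for p in players}

# Hypothetical production benefit of each human/robot coalition.
v = {frozenset(): 0, frozenset({"human"}): 4,
     frozenset({"robot"}): 6, frozenset({"human", "robot"}): 14}
print(shapley_values(["human", "robot"], lambda c: v[frozenset(c)]))
# {'human': 6.0, 'robot': 8.0}
```

A fair benefit split of this kind is one way a task allocator could decide, step by step, whether a procedure is worth assigning to the human, the robot, or both.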

All publications