CN114789450A - Robot motion trajectory digital twinning method based on machine vision


Info

Publication number
CN114789450A
Authority
CN
China
Prior art keywords: robot, track, processed object, result, physical environment
Prior art date
Legal status
Pending
Application number
CN202210619761.XA
Other languages
Chinese (zh)
Inventor
韦卫
周卫国
刘铮
刘伟华
申平伟
林娟
Current Assignee
Smarteye Tech Ltd
Original Assignee
Smarteye Tech Ltd
Priority date
Filing date
Publication date
Application filed by Smarteye Tech Ltd filed Critical Smarteye Tech Ltd

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 — Programme-controlled manipulators
    • B25J9/16 — Programme controls
    • B25J9/1656 — characterised by programming, planning systems for manipulators
    • B25J9/1664 — characterised by motion, path, trajectory planning
    • B25J9/1602 — characterised by the control system, structure, architecture
    • B25J9/1605 — Simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/1628 — characterised by the control loop
    • B25J9/163 — learning, adaptive, model based, rule based expert control
    • B25J9/1694 — use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 — Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a machine-vision-based digital twinning method for robot motion trajectories, addressing the problem that a robot trajectory is difficult to adapt quickly when the processed object in the physical environment changes. The method designs a remote virtual environment that maps the robot's physical environment and the processed object, together with a 3D-machine-vision-based system for 3D modeling of the processed object and automatic generation of robot trajectories. The virtual environment remotely receives machine-vision feedback on the robot's execution results in the local physical environment, completes optimization and simulated execution of the robot trajectory, and controls the robot in the physical environment in real time to complete the operation on the processed object.

Description

Robot motion trajectory digital twinning method based on machine vision
Technical Field
The invention belongs to the technical fields of machine vision, artificial intelligence, intelligent manufacturing equipment, and intelligent medical equipment.
Background
At present, in fields such as intelligent manufacturing and intelligent medical treatment, complex and changeable processed objects raise the questions of how to reduce on-site technical intervention by workers, lower technical labor costs, and enable a robot to generate a correct trajectory quickly; these have become important factors hindering the development of intelligent manufacturing.
For flexible automated manufacturing of complex and changeable processed objects, 3D digital modeling of the processed object is performed first, for example with a device that automatically generates 3D coordinates of an object with a complex shape. The robot's working motion trajectory is then generated automatically on the 3D digital model, by manual teaching or by predefined rules. In actual operation, however, robots from different manufacturers use different trajectory optimization algorithms, so the actual motion trajectory deviates from the generated trajectory, which can leave the processed object unqualified. Current solutions require an engineer to teach or reprogram the robot trajectory on site in the physical environment. At the scale of widely deployed intelligent manufacturing equipment, this increases either the labor cost of the enterprise operating the physical environment or the technical support cost of the equipment.
Therefore, to solve the above problems, it is necessary to plan and optimally adjust the trajectory of the robot in the physical environment from a remote virtual environment, that is, to apply digital twin intelligent manufacturing technology: design a mapping between the remote virtual environment and each device in the physical environment, and provide quick support and service to customers using the intelligent manufacturing equipment through an industrial manufacturing cloud.
Disclosure of Invention
The invention discloses a machine-vision-based digital twinning method for robot motion trajectories, addressing the problem that a robot trajectory is difficult to adapt quickly when the processed object in the physical environment changes. The method designs a remote virtual environment that maps the robot's physical environment and the processed object, together with a 3D-machine-vision-based system for 3D modeling of the processed object and automatic generation of robot trajectories. The virtual environment remotely receives machine-vision feedback on the robot's execution results in the local physical environment, completes optimization and simulated execution of the robot trajectory, and controls the robot in the physical environment in real time to complete the operation on the processed object. Enterprises operating intelligent-manufacturing physical environments can thus improve the production efficiency of their equipment across different processed products and reduce its technical support cost.
To achieve this purpose, the technical scheme of the invention is realized as follows:
a robot motion track digital twin method based on machine vision is composed of a 3D camera digital modeling and track generating system, a local physical environment, a remote virtualization environment and a manufacturing cloud service, and is characterized in that the 3D camera digital modeling track generating system performs 3D point cloud extraction and robot working track information generation on a processed object, a robot system in the local physical environment performs processing operation on the processed object according to the robot track information, an operation result is detected and fed back to the remote virtualization environment, an optimized robot track generated by a track planning and simulation system in the virtual environment is sent to a robot system in the physical environment through an interaction module of the physical environment and the virtual environment, and the robot system in the physical environment completes correct operation on the processed object. Still further, the main work of the present invention includes the steps of:
a) a 3D camera photographs the processed object and 3D point cloud information is extracted; the trajectory generation system generates a robot trajectory according to the robot processing rules; the remote virtual environment performs simulation verification and optimization of the robot trajectory;
b) the local physical environment trajectory planning system starts the robot system with the trajectory information from step a) to operate on the processed object;
c) the operation result on the processed object is detected; if it is correct, the robot trajectory is optimized by machine learning, the trajectory information is uploaded to the manufacturing cloud service, and operation continues at step b); otherwise the result is fed back to the remote virtual environment and the method proceeds to step d);
d) according to the feedback from the physical environment, the remote virtual environment generates an optimized robot trajectory through its trajectory planning system, verifies it in the simulation system, and transmits it to the robot system in the local physical environment, which operates on the processed object; the method then returns to step c).
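The steps above form a closed loop: execute, inspect, and escalate to the remote virtual environment only when the result fails. A minimal sketch of that control flow, with hypothetical callback names (`execute`, `detect`, `optimize_remote`, `upload_to_cloud` are illustrative stand-ins, not APIs from the patent):

```python
def run_digital_twin_loop(trajectory, execute, detect, optimize_remote,
                          upload_to_cloud, max_rounds=5):
    """Closed-loop execution: run the trajectory, inspect the result,
    and fall back to remote optimization when the result is incorrect."""
    for _ in range(max_rounds):
        result = execute(trajectory)          # step b) robot operates
        if detect(result):                    # step c) result correct?
            upload_to_cloud(trajectory)       # archive the verified trajectory
            return trajectory
        trajectory = optimize_remote(trajectory, result)  # step d)
    raise RuntimeError("trajectory did not converge")

# Toy example: a scalar "trajectory" that is acceptable once it reaches 3.
traj = run_digital_twin_loop(
    trajectory=0,
    execute=lambda t: t,
    detect=lambda r: r >= 3,
    optimize_remote=lambda t, r: t + 1,   # virtual environment nudges it
    upload_to_cloud=lambda t: None,
)
```

The `max_rounds` guard is an added safety assumption; the patent itself loops until the result is correct.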
Further, the local physical environment of this patent comprises at least a robot system, a trajectory planning system, a vision device, and an interaction module, which together complete the automatic processing of the processed object. The operation of the local physical environment comprises the following steps:
a) obtain the 3D digital model parameters and robot trajectory of the processed object from the 3D camera digital modeling and trajectory generation system;
b) the interaction module converts the 3D digital model parameters into a 3D image of the processed object and displays the robot trajectory superposed on that image;
c) the trajectory planning system locally adjusts the robot trajectory according to actual conditions;
d) the robot system processes the processed object according to the robot trajectory;
e) the vision device images the operation result of step d) and the result is inspected; if it is correct, machine learning optimizes the trajectory, the current trajectory information is uploaded to the manufacturing cloud service, and processing continues at step d); otherwise the result and the corresponding robot trajectory are transmitted to the remote virtual environment.
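The superposed display of step b) amounts to projecting trajectory waypoints into the image of the processed object. A minimal sketch using a pinhole projection (the focal length, principal point, and image size are made-up values; the patent does not specify the projection model):

```python
def project_point(x, y, z, f=100.0, cx=160.0, cy=120.0):
    """Project a 3D point (camera coordinates, z > 0) to pixel coordinates
    with a simple pinhole model: u = f*x/z + cx, v = f*y/z + cy."""
    return (f * x / z + cx, f * y / z + cy)

def overlay_trajectory(image, waypoints):
    """Mark each projected waypoint in a row-major 2D image (list of lists)."""
    h, w = len(image), len(image[0])
    for x, y, z in waypoints:
        u, v = project_point(x, y, z)
        col, row = int(round(u)), int(round(v))
        if 0 <= row < h and 0 <= col < w:
            image[row][col] = 1  # trajectory pixel superposed on the image

img = [[0] * 320 for _ in range(240)]
overlay_trajectory(img, [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0)])
```

A real interaction module would render lines between waypoints onto the camera image; single-pixel marks keep the sketch short.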
Furthermore, the remote virtual environment of this patent directly maps the processing function of the local physical environment onto the processed object, and comprises at least a trajectory planning system, a robot simulation execution system, and an interaction module. The remote virtual environment works as follows:
a) receive the operation result on the processed object, the robot trajectory, and the 3D digital model information from the local physical environment;
b) the interaction module converts the 3D digital model parameters into a 3D image of the processed object and displays the robot trajectory superposed on that image;
c) the trajectory planning system adjusts the robot trajectory;
d) the simulation system runs the robot trajectory and checks the simulation result; if it is correct, the trajectory is transmitted to the robot system in the local physical environment, and the robot in the physical environment completes the operation on the processed object with the trajectory generated in the virtual environment.
The trajectory planning system may adjust the robot trajectory by manual teaching; preferably, a machine learning method automatically derives the required trajectory adjustment from the model and features of the current processed object.
Verification of the optimized trajectory in the simulation system of the remote virtual environment comprises at least: 1) simulated execution according to the robot trajectory parameters; 2) simulation of the physical behavior and characteristics of the robot's end-of-arm processing mechanism; 3) 3D graphical output of the simulation result; 4) automatic verification of the simulation result by a machine learning method; 5) manual inspection of the 3D graphics to verify the simulation result.
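The automatic verification step can be sketched as a check over the simulated trajectory before it is released to the physical environment. Here an illustrative workspace-bounds check stands in for the patent's physical-behavior simulation and machine-learning verification (the bounds are made-up values):

```python
def simulate(trajectory):
    """Placeholder kinematic simulation: a real system would run the robot
    model; here it simply echoes the commanded waypoints."""
    return list(trajectory)

def verify_simulation(trajectory, bounds=((-1.0, 1.0), (-1.0, 1.0), (0.0, 2.0))):
    """Automatic verification: every simulated waypoint must stay inside
    the workspace bounds; returns (ok, first_bad_waypoint_or_None)."""
    for wp in simulate(trajectory):
        for value, (lo, hi) in zip(wp, bounds):
            if not lo <= value <= hi:
                return False, wp
    return True, None

ok, bad = verify_simulation([(0.0, 0.0, 0.5), (0.2, -0.1, 1.0)])
bad_ok, offender = verify_simulation([(0.0, 0.0, 0.5), (0.0, 0.0, 3.0)])
```

Only trajectories that pass such checks would be transmitted back to the local robot system.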
In the physical environment, after the robot has operated on the processed object, the operation result is detected visually and fed back; a 2D camera may be used to photograph the result on the processed object.
Preferably, a 3D camera acquires 3D point cloud information of the processed object, the 3D model is reconstructed, and the result is output.
Preferably, the 2D and 3D image results of the processed object are superposed and the combined result is output.
For result detection, the result on the processed object may be checked manually;
preferably, the result is detected by a machine learning method.
Further, the 3D camera digital modeling and trajectory generation system creates a 3D digital model from the 3D point cloud coordinates of the processed object and its processing marks, generated by the 3D camera, and produces the robot trajectory from the 3D coordinates of the processing marks and the processing rules. The generation method comprises at least: 1) manually setting the robot trajectory on the 3D digital model through the interaction module; 2) a machine learning method that automatically generates a robot trajectory for the processed object; 3) generating the robot trajectory from preset processing positions and actions on the processed object.
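Rule-based generation can be sketched as mapping the 3D coordinates of a processing mark to robot waypoints under a processing rule. In this sketch the rule is an illustrative fixed tool stand-off along z with a fixed tool orientation; the offset value and waypoint format are assumptions, not from the patent:

```python
def generate_trajectory(mark_points, standoff=0.05):
    """Turn the 3D coordinates of a processing mark (e.g. a marked line)
    into robot waypoints: each position is lifted by a tool stand-off
    along z, paired with a fixed tool-down orientation (roll, pitch, yaw)."""
    tool_down = (0.0, 180.0, 0.0)  # illustrative fixed pose, degrees
    return [((x, y, z + standoff), tool_down) for x, y, z in mark_points]

# Three points sampled along a marked processing line.
line = [(0.0, 0.0, 0.1), (0.1, 0.0, 0.1), (0.2, 0.0, 0.1)]
traj = generate_trajectory(line)
```

A production rule set would also interpolate between mark points and vary the pose with the local surface normal.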
The interaction module of the virtual environment is a virtual mapping of each working module and system of the remote local physical environment. Its main functions include but are not limited to: a) taking over the trajectory planning system, robot system, and vision device that control the local physical environment; b) creating, managing, and running independent threads for different processed objects; c) displaying the processed-object result acquired by the physical environment's vision device superposed with the remotely generated robot trajectory.
Furthermore, so that the remote virtual environment can support several physical environments simultaneously, it creates an independent thread for each virtualized physical environment; each thread is an instance of the software implementing the machine-vision-based robot motion trajectory digital twinning method for one processed object.
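The one-thread-per-environment scheme can be sketched with Python's standard `threading` module; the per-environment work function here is a hypothetical placeholder for the digital-twin software the patent describes:

```python
import threading

def serve_environment(env_id, results):
    """Placeholder for the per-environment digital-twin software: the
    patent runs trajectory planning and simulation per physical
    environment; here the thread just records that it served env_id."""
    results[env_id] = f"twin running for {env_id}"

def spawn_twins(env_ids):
    """Create one independent thread per virtualized physical environment."""
    results = {}
    threads = [threading.Thread(target=serve_environment, args=(e, results))
               for e in env_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

res = spawn_twins(["factory-A", "factory-B", "clinic-C"])
```

Separate processes or containers would serve equally well; threads are simply the unit the patent names.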
The manufacturing cloud service provides online services for multiple physical environments, including but not limited to: a) running the remote virtual environment on the cloud; b) maintaining a robot trajectory database indexed by the product model of the processed object, from which the local physical environment can obtain trajectory information on demand; c) a training system for robot trajectory machine learning.
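The cloud-side trajectory database of function b) can be sketched as a lookup keyed by product model, which the local physical environment queries before falling back to fresh trajectory generation. The schema below is an assumption for illustration only:

```python
class TrajectoryDB:
    """Minimal in-memory stand-in for the cloud robot-trajectory database:
    stores verified trajectories keyed by processed-object product model."""

    def __init__(self):
        self._store = {}

    def upload(self, product_model, trajectory):
        """Archive a verified trajectory under its product model."""
        self._store.setdefault(product_model, []).append(trajectory)

    def fetch(self, product_model):
        """Return verified trajectories for this model, or [] if none."""
        return self._store.get(product_model, [])

db = TrajectoryDB()
db.upload("shoe-42-left", [(0.0, 0.0, 0.1), (0.1, 0.0, 0.1)])
hit = db.fetch("shoe-42-left")
miss = db.fetch("unknown-model")
```

An empty `fetch` result corresponds to the case where the local environment must generate a new trajectory from 3D modeling.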
Drawings
FIG. 1 is a schematic diagram of the principle of the present invention
FIG. 2 shows an embodiment of the operation process of the present invention
FIG. 3 shows a first embodiment to which the present invention is applied
FIG. 4 shows a second embodiment of the present invention
Detailed Description
The idea of the invention is as follows:
A machine-vision-based robot motion trajectory digital twinning method is composed of a 3D camera digital modeling and trajectory generation system, a local physical environment, a remote virtual environment, and a manufacturing cloud service. The 3D camera digital modeling and trajectory generation system performs 3D point cloud extraction on the processed object and generates robot working trajectory information; the robot system in the local physical environment performs the processing operation on the processed object according to that information; the operation result is detected and fed back to the remote virtual environment; the optimized trajectory generated by the trajectory planning and simulation systems in the virtual environment is sent to the robot system in the physical environment through the interaction module between the two environments; and the robot system in the physical environment completes the correct operation on the processed object. The method can be used for digital twin simulation of the trajectories of industrial robots, service robots, collaborative robots, logistics robots, and AGVs in intelligent manufacturing, and for digital twin optimization of the motion trajectories of surgical robots in the medical field.
FIG. 1 is a schematic diagram of the present invention
The invention consists of the following components: a 3D camera digital modeling and trajectory generation system, a local physical environment, a remote virtual environment, and a manufacturing cloud service.
The 3D camera digital modeling and trajectory generation system consists of one or more 3D cameras. It scans the processed object and the marks on it that require robot operation (a mark may be a point, a line, or a surface), generating 3D point clouds of the processed object and of the regions requiring robot operation; the trajectory generation system then produces the robot operation trajectory from the 3D point cloud coordinates of the operation marks and the operation rules for the processed object.
The local physical environment consists of the actual production and processing equipment and comprises a robot system, a trajectory planning system, a vision device, and an interaction module. The trajectory planning system is a software system running on a computer; it obtains the 3D digital model of the processed object and the robot trajectory from the 3D camera digital modeling and trajectory generation system, provides a function for adjusting the trajectory directly on site, and sends the trajectory to the robot system. The robot system comprises at least a robot body and a robot controller, and completes the processing operation on the processed object. The vision device consists of a group of 3D and 2D cameras; the computer controls image acquisition of the result on the processed object and checks whether the result is correct, either by manual inspection of the result image displayed on the computer or by automatic detection using machine learning. If the result is unqualified, the remote virtual environment is invoked to re-optimize and regenerate the robot trajectory.
The remote virtual environment directly maps the processing function of the local physical environment onto the processed object and comprises at least a trajectory planning system, a robot simulation execution system, and an interaction module; the robot simulation execution system corresponds to the robot system in the physical environment. The virtual environment's trajectory planning system acquires the 3D digital model and robot trajectory of the processed object from the 3D camera digital modeling and trajectory generation system, and the actual processing result and actually used trajectory from the vision device of the local physical system. An engineer can use it to optimize and adjust the received trajectory; the optimization methods comprise at least: 1) manual optimization; 2) invoking a trajectory optimization software algorithm. The simulation system corresponds one-to-one with the robot system of the local physical environment: it can process the virtual processed object along the optimized trajectory, generate a virtual processing result on the basis of the 3D digital model, and have the interaction module display and output that result.
The main functions of the manufacturing cloud service comprise at least: 1) providing the local physical environment with a cloud service for obtaining processed-object robot trajectories; 2) providing training compute for machine learning; 3) providing a runtime environment for the software of the remote virtual environment.
FIG. 2 shows an embodiment of the working process of the present invention
When the digital twin system of the invention is actually operated, it works according to the following steps:
201: the 3D camera photographs the processed object and its robot processing marks, and 3D point cloud information is extracted;
202: the trajectory generation system generates a robot trajectory from the extracted 3D point cloud of the processing marks and the robot processing rules, and the trajectory is simulation-verified and optimized in the remote virtual environment;
203: the local physical environment trajectory planning system starts the robot system with the trajectory information from step 202 to operate on the processed object;
204: the vision device inspects the robot's operation result and acquires an image of the processed object;
205: the operation result on the processed object is checked; if it is correct, go to step 209;
206: the operation result is fed back to the remote virtual environment, and the virtual environment's trajectory planning system optimizes the incorrect trajectory;
207: the simulation system runs the optimized trajectory;
208: the virtual environment transmits the simulation-verified, optimized trajectory to the physical environment, which processes the processed object with it as in step 203;
209: the correct robot trajectory and result enter the learning training of the machine learning system;
210: the correct robot trajectory and result are uploaded to the manufacturing cloud service for storage.
FIG. 3 shows a first embodiment to which the present invention is applied
This embodiment is a digital twin optimization method for the motion trajectory of a surgical robot in the intelligent medical field. In the physical scene of a surgical robot, patient features and the position and shape of the affected part are changeable and complex, so in most cases the robot trajectory is difficult to generate in advance. Once the treatment plan is determined before surgery, the digital twin virtual environment can simulate the robot to generate the trajectory for the surgical procedure. The digital twinning of this embodiment is implemented as follows.
301: the surgical physical environment;
302: the robot trajectory planning system in the surgical physical environment generates the motion trajectories of the robot and the surgical instrument from the human-body 3D modeling information and the treatment control panel coordinates acquired by the vision device;
303: the medical robot, the entity that executes the motion trajectory, carrying a surgical instrument at its end;
304: the surgical instruments, medical instruments mounted on the robot end, such as an ion-beam needle;
305: the vision device, mounting several 3D cameras, wherein: 1) a large-field-of-view 3D camera provides the trajectory planning system with global images and the position coordinates of the treatment control panel, for generating the trajectory that brings the medical robot near the panel; 2) after the robot has moved near the treatment control panel, a high-precision 3D camera provides the trajectory planning system with high-precision panel coordinates for generating a high-precision robot trajectory to operate on the human body;
306: human-body 3D scanning and digital modeling, which models the human body and the treatment control panel with a 3D camera; its functions include: 1) generating the 3D coordinates of the treatment control panel; 2) generating human-body 3D coordinates to provide data for safe robot contact;
307: the remote virtual environment, in which the surgical robot can be simulated before the operation to generate the robot trajectory, with the doctor checking the effect of the virtually simulated operation through the interactive interface; during the operation, the virtual environment communicates with the local medical physical environment over a digital communication network;
308: the interactive operation interface, which outputs simulation results, allows remote observation of the surgical physical environment, controls the surgical robot, and provides optimized trajectory data;
309: the simulation system, which simulates the surgical procedure according to the doctor's treatment plan to form the surgical robot trajectory; during the operation it also performs real-time simulation optimization and sends the optimized medical robot trajectory to the medical physical environment;
310: trajectory optimization, which plans the robot's next motion trajectory from the human-body 3D model information, the positions of the medical robot's surgical instrument and the treatment control panel acquired in real time by the physical environment's vision device, and the trajectory information of similar operations from the medical operation cloud;
311: the digital communication network, providing digital communication services for the physical environment, the 3D modeling system, and the remote virtual environment;
312: the treatment control panel, placed on the affected part of the human body to support the medical robot's surgical instrument;
313: the human body, i.e. the patient;
314: records of robot trajectory information for different affected positions, providing reference data for the trajectory planning system.
FIG. 4 shows a second embodiment of the present invention
This embodiment is a trajectory digital twinning method for a robot spraying the upper surface of a shoe in robotic shoe manufacturing. First, the upper-molding glue lines are marked manually; a 3D camera of the 3D digital modeling system then models the upper in 3D and generates a robot trajectory along the marked line according to the rules. The robot trajectory information includes 3D coordinates and poses.
The local physical environment consists of a computer system running the trajectory planning and interaction modules, a robot with a glue-spraying mechanism mounted at its end, a robot control cabinet, and a 3D-camera vision device. The processed-object result information transmitted to the remote virtual environment includes: the processed-object images collected by the vision device, the mechanical parameters of the glue-spraying mechanism, and the glue-path pressure and flow information of the glue-spraying mechanism.
The simulation system of the remote virtual environment simulates the robot trajectory while also simulating the spraying result of the glue flow and pressure parameters of the glue-spraying mechanism at the robot's end. The results of virtual-environment optimization comprise: 1) the robot trajectory; 2) the parameter information of the glue-spraying mechanism.
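For this embodiment a trajectory record carries more than geometry: the optimized output couples each waypoint's 3D coordinate and pose with the glue-spraying parameters. A minimal sketch of such a record (the field names and units are assumptions; the patent only lists the categories of information):

```python
from dataclasses import dataclass

@dataclass
class SprayWaypoint:
    """One point of the shoe-upper spraying trajectory: 3D position, tool
    pose, and the glue-path parameters the virtual environment optimizes."""
    position: tuple        # (x, y, z), e.g. in metres
    pose: tuple            # tool orientation, e.g. (roll, pitch, yaw)
    glue_flow: float       # glue flow-rate setting
    glue_pressure: float   # glue-path pressure setting

wp = SprayWaypoint(position=(0.10, 0.02, 0.05),
                   pose=(0.0, 180.0, 0.0),
                   glue_flow=1.5,
                   glue_pressure=0.3)
```

An optimized trajectory would then be a list of such records, sent to the physical environment together with the mechanism parameters.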
a 401 shoe robot physical environment;
the 402 physical environment robot track planning system generates a robot motion track according to the shoe 3D modeling information and the coordinate information of the treatment control panel acquired by the vision device;
403 robot, executing the entity of the motion trail, the end of which carries the glue spraying mechanism;
404 glue spraying mechanism, glue valve and its motion mechanism installed at the end of the robot, and glue barrel connected by pipeline;
405 a vision device, 2D and 3D cameras, imaging the glue spraying effect;
406, performing 3D scanning digital modeling on the shoe, collecting 3D point cloud information of the shoe and coordinates of a shoe upper surface processing line identification line by using a 3D camera, modeling the shoe upper surface of the shoe last, identifying the shoe upper surface processing line identification, namely a rubber line, and generating a robot processing track.
407 remote virtual environment, before the physical environment is changed, the robot can be simulated to generate a robot track, and the effect of virtual simulation processing is checked through an interactive interface. In the manufacturing process, the virtual environment is in communication interaction with a local physical environment through a digital communication network, and the robot track in the manufacturing process is optimized;
408, an interactive operation interface is used for outputting a simulation result, remotely observing the manufacturing physical environment, controlling the shoe-making robot and providing optimized track data;
409 simulation system, which simulates and executes the spraying process of the robot according to the processing rule and the parameters of the glue spraying mechanism specified by the engineer to form the track of the robot; meanwhile, in the process of manufacturing the physical environment, real-time simulation optimization is carried out, and the optimized robot track is sent to the physical environment;
410, optimizing the track, planning and generating the motion track of the next robot according to the 3D model information of the shoes, the robot spraying effect acquired by the physical environment vision device in real time and the robot track information for manufacturing the cloud and the same type of shoe types;
411, digital communication network: provides digital communication services for the physical environment, the 3D modeling system, and the remote virtual environment;
412, upper-surface processing mark: marks the position of the rubber line for robot processing during 3D modeling;
413, shoe last: the mechanism that supports the upper;
414, manufacturing cloud service: provides verified robot trajectories for different shoe types.
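Purely as an illustrative sketch outside the claims, the trajectory generation of item 406 could be approximated by selecting scanned points near the marked rubber line and averaging them into waypoints ordered along the mark. The function name, numpy representation, and tolerance value below are assumptions, not part of the disclosure:

```python
import numpy as np

def trajectory_from_mark_line(cloud, mark_line, tolerance=2.0):
    """Select scanned points within `tolerance` (mm) of each point of the
    marked rubber line and reduce them to one waypoint per mark point.

    cloud     : (N, 3) 3D point cloud of the upper surface.
    mark_line : (M, 3) coordinates of the processing mark (rubber line),
                already ordered along the line by the modeling step.
    """
    cloud = np.asarray(cloud, dtype=float)
    waypoints = []
    for anchor in np.asarray(mark_line, dtype=float):
        d = np.linalg.norm(cloud - anchor, axis=1)
        near = cloud[d < tolerance]
        if len(near):
            waypoints.append(near.mean(axis=0))  # local centroid as waypoint
    return np.asarray(waypoints)
```

With a denser scan the local centroids trace the rubber line; a real system would also attach a tool orientation to each waypoint.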
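Likewise, the next-trajectory planning of item 410 can be sketched as a weighted blend of the current plan, the deviation measured by the vision device, and a reference trajectory from the manufacturing cloud. The weights and the function signature are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def plan_next_trajectory(planned, measured, cloud_reference,
                         w_feedback=0.5, w_cloud=0.2):
    """Plan the next robot trajectory from three inputs of item 410:
    the planned trajectory, the sprayed result measured by the vision
    device, and a verified reference trajectory from the cloud."""
    planned = np.asarray(planned, dtype=float)
    error = np.asarray(measured, dtype=float) - planned   # deviation seen by vision
    corrected = planned - w_feedback * error              # compensate part of the error
    reference = np.asarray(cloud_reference, dtype=float)
    return (1.0 - w_cloud) * corrected + w_cloud * reference
```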

Claims (10)

1. A machine vision-based robot motion trajectory digital twinning method, implemented by a 3D camera digital modeling and trajectory generation system, a local physical environment, a remote virtual environment, and a manufacturing cloud service, characterized in that: the 3D camera digital modeling and trajectory generation system performs 3D point cloud extraction and modeling of a processed object and generates robot working trajectory information; the robot system in the local physical environment performs the processing operation on the processed object according to the robot trajectory information; the operation result is detected and fed back to the remote virtual environment; the optimized robot trajectory generated by the trajectory planning and simulation systems in the virtual environment is sent to the robot system in the physical environment through the interaction modules of the physical and virtual environments; and the robot system in the physical environment completes the correct operation on the processed object. The method further comprises the steps of:
a) a 3D camera photographs the processed object and extracts its 3D point cloud; the trajectory generation system generates a robot trajectory according to the robot processing rules; the remote virtual environment performs simulation verification and robot trajectory optimization;
b) the trajectory planning system in the local physical environment starts the robot system to operate on the processed object according to the robot trajectory information of step a);
c) the operation result on the processed object is detected; if the result is correct, the robot trajectory is optimized by machine learning, the trajectory information is uploaded to the manufacturing cloud service, and the operation of step b) continues; otherwise, the result is fed back to the remote virtual environment and the method proceeds to step d);
d) the remote virtual environment generates an optimized robot trajectory through the trajectory planning system according to the result fed back from the physical environment, verifies it in the simulation system, and transmits it to the robot system in the local physical environment, which operates on the processed object; the method then returns to step c).
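The closed loop of steps b) through d) can be sketched as a control loop. Every callable below is a hypothetical stand-in for the corresponding subsystem named in the claim (robot execution, visual detection, machine-learning optimizer, cloud upload, remote trajectory planner, and simulation verification); none of the names come from the disclosure:

```python
def digital_twin_loop(trajectory, execute, detect, optimize_ml, upload,
                      replan_remote, simulate_ok, max_iters=10):
    """Steps b)-d) of claim 1 as a control loop.

    `execute` runs the robot and returns the observed result, `detect`
    checks it, `optimize_ml` refines a correct trajectory, `upload` sends
    it to the manufacturing cloud, `replan_remote` asks the remote virtual
    environment for a new trajectory, and `simulate_ok` is the
    simulation-system verification.
    """
    for _ in range(max_iters):
        if detect(execute(trajectory)):           # steps b) + c): operate and inspect
            trajectory = optimize_ml(trajectory)  # machine-learning refinement
            upload(trajectory)                    # archive in the manufacturing cloud
            return trajectory
        candidate = replan_remote(trajectory)     # step d): remote replanning
        if simulate_ok(candidate):                # adopt only verified trajectories
            trajectory = candidate
    raise RuntimeError("no verified trajectory within the iteration budget")
```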
2. The method of claim 1, wherein the local physical environment comprises at least: a robot system, a trajectory planning system, a vision device, and an interaction module, which together complete the automatic processing of the processed object. The method comprises the following steps:
a) obtaining the 3D digital model parameters and the robot trajectory of the processed object from the 3D camera digital modeling and trajectory generation system;
b) the interaction module converts the 3D digital model parameters into a 3D image of the processed object and displays the robot trajectory superimposed on that image;
c) the trajectory planning system locally adjusts the robot trajectory according to the actual situation;
d) the robot system processes the processed object according to the robot trajectory;
e) the vision device images the operation result of step d) and detects the result; if the result is correct, the trajectory is optimized by machine learning, the current trajectory information is uploaded to the manufacturing cloud service, and the method returns to step d) to continue processing; otherwise, the result and the corresponding robot trajectory are transmitted to the remote virtual environment.
3. The method of claim 1, wherein the remote virtual environment directly mirrors the processing of the processed object in the local physical environment and comprises at least: a trajectory planning system, a simulation system, and an interaction module. The method comprises the following steps:
a) receiving from the local physical environment the operation result of the processed object, the robot trajectory, and the 3D digital model information;
b) the interaction module converts the 3D digital model parameters into a 3D image of the processed object and displays the robot trajectory superimposed on that image;
c) the trajectory planning system adjusts the robot trajectory;
d) the simulation system executes the robot trajectory to process the virtual processed object and detects the simulation result; if the result is correct, the robot trajectory is transmitted to the robot system in the local physical environment.
4. The method of claims 1 and 3, wherein verifying the optimized trajectory in the simulation system comprises at least: 1) simulating according to the robot trajectory parameters; 2) simulating the physical behavior and characteristics of the robot's end processing mechanism in the physical environment; 3) outputting the simulation result as a 3D graphic; 4) automatically verifying the simulation result by a machine learning method; 5) manually inspecting the 3D graphic to verify the simulation result.
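As one hedged illustration of items 1), 2), and 4) of this claim, a simulation could sweep a circular glue-nozzle footprint along the trajectory and automatically verify what fraction of the target rubber line is covered. The footprint model, radius, and coverage threshold are assumptions, not values from the patent:

```python
import numpy as np

def simulate_and_verify(trajectory, target_line, nozzle_radius=1.5,
                        coverage_required=0.95):
    """Simulate glue deposition: a target point counts as covered if any
    trajectory point lies within the nozzle footprint. Returns a pass/fail
    flag and the coverage fraction for graphical or manual review."""
    trajectory = np.asarray(trajectory, dtype=float)
    target = np.asarray(target_line, dtype=float)
    # pairwise distances between every target point and every waypoint
    d = np.linalg.norm(target[:, None, :] - trajectory[None, :, :], axis=2)
    covered = d.min(axis=1) <= nozzle_radius
    coverage = covered.mean()
    return coverage >= coverage_required, coverage
```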
5. The method of claim 1, wherein detecting and feeding back the operation result and visually inspecting the result of the processed object comprises at least: 1) photographing the result of the processed object with a 2D camera; 2) obtaining the 3D point cloud of the processed object with a 3D camera; 3) detecting the result of the processed object by a machine learning method; 4) manually inspecting the result of the processed object.
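A minimal sketch combining items 2) and 3) of this claim: compare the 3D point cloud of the sprayed glue against the planned trajectory and pass the result only if the worst deviation stays within a tolerance. The threshold and point-cloud representation are assumed for illustration:

```python
import numpy as np

def result_is_correct(scanned_glue_points, planned_trajectory, max_dev=1.0):
    """For each scanned glue point, measure the distance to the nearest
    planned waypoint; the operation passes if the worst deviation is
    within `max_dev` (mm)."""
    scanned = np.asarray(scanned_glue_points, dtype=float)
    planned = np.asarray(planned_trajectory, dtype=float)
    d = np.linalg.norm(scanned[:, None, :] - planned[None, :, :], axis=2)
    worst = d.min(axis=1).max()   # worst-case distance to the planned path
    return worst <= max_dev
```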
6. The method of claim 1, wherein the 3D camera digital modeling and trajectory generation system establishes the 3D digital model from the 3D point cloud coordinates of the processed object and of its robot processing marks, generated by the 3D camera, and the robot trajectory is generated by at least one of: 1) manually setting the robot trajectory on the 3D digital model through the interaction module; 2) a machine learning method that automatically generates the robot trajectory for the processed object; 3) generating the robot trajectory from preset processing positions and actions of the robot on the processed object.
7. The method of claims 1 and 4, wherein the interaction module of the virtual environment includes, but is not limited to: a) taking over control of the trajectory planning system, the robot system, and the vision device of the local physical environment; b) generating, managing, and running independent threads for different processed objects; c) displaying the result of the processed object acquired by the vision device in the physical environment superimposed with the robot trajectory.
8. The method of claims 1 and 7, characterized in that each independent thread is software implementing the machine vision-based robot motion trajectory digital twinning method for one particular processed object.
9. The method of claim 1, wherein the manufacturing cloud service includes, but is not limited to: a) running the remote virtual environment on the cloud; b) establishing a robot trajectory database indexed by the product model information of the processed object; c) a machine learning training system for robot trajectories.
10. The method of claims 2 and 3, wherein the trajectory planning system adjusts the robot trajectory by at least one of: 1) automatically identifying, by a machine learning method, the robot trajectory that needs adjustment; 2) manually teaching and adjusting the robot trajectory.
CN202210619761.XA 2022-06-02 2022-06-02 Robot motion trajectory digital twinning method based on machine vision Pending CN114789450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210619761.XA CN114789450A (en) 2022-06-02 2022-06-02 Robot motion trajectory digital twinning method based on machine vision

Publications (1)

Publication Number Publication Date
CN114789450A true CN114789450A (en) 2022-07-26

Family

ID=82463661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210619761.XA Pending CN114789450A (en) 2022-06-02 2022-06-02 Robot motion trajectory digital twinning method based on machine vision

Country Status (1)

Country Link
CN (1) CN114789450A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190143517A1 (en) * 2017-11-14 2019-05-16 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision
CN113954066A (en) * 2021-10-14 2022-01-21 国电南瑞科技股份有限公司 Distribution network operation robot control method and device based on digital twin system
CN114460904A (en) * 2022-01-25 2022-05-10 燕山大学 Digital twin system facing gantry robot


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115070780A (en) * 2022-08-24 2022-09-20 北自所(北京)科技发展股份有限公司 Industrial robot grabbing method and device based on digital twinning and storage medium
CN115070780B (en) * 2022-08-24 2022-11-18 北自所(北京)科技发展股份有限公司 Industrial robot grabbing method and device based on digital twinning and storage medium
CN115981178A (en) * 2022-12-19 2023-04-18 广东若铂智能机器人有限公司 Simulation system and method for fish and aquatic product slaughtering
CN115981178B (en) * 2022-12-19 2024-05-24 广东若铂智能机器人有限公司 Simulation system for slaughtering fish and aquatic products
CN115990891A (en) * 2023-03-23 2023-04-21 湖南大学 Robot reinforcement learning assembly method based on visual teaching and virtual-actual migration

Similar Documents

Publication Publication Date Title
CN114789450A (en) Robot motion trajectory digital twinning method based on machine vision
US11440179B2 (en) System and method for robot teaching based on RGB-D images and teach pendant
US11813749B2 (en) Robot teaching by human demonstration
WO2024027647A1 (en) Robot control method and system and computer program product
CN109434870A (en) A kind of virtual reality operation system for robot livewire work
CN110047150A (en) It is a kind of based on augmented reality complex device operation operate in bit emulator system
CN104858876A (en) Visual debugging of robotic tasks
CN108908298B (en) Master-slave type spraying robot teaching system fusing virtual reality technology
Fu et al. Active learning-based grasp for accurate industrial manipulation
WO2024094227A1 (en) Gesture pose estimation method based on kalman filtering and deep learning
CN115346413A (en) Assembly guidance method and system based on virtual-real fusion
CN115213890B (en) Grabbing control method, grabbing control device, grabbing control server, electronic equipment and storage medium
CN110942083A (en) Imaging device and imaging system
CN111381514A (en) Robot testing system and method based on semi-physical simulation technology
CN114332421A (en) Augmented reality auxiliary assembly system considering human factors
CN117507287A (en) Display control system and method of injection molding machine
CN112085223A (en) Guidance system and method for mechanical maintenance
Ninomiya et al. Automatic calibration of industrial robot and 3D sensors using real-time simulator
CN107368188B (en) Foreground extraction method and system based on multiple spatial positioning in mediated reality
CN112381925B (en) Whole body tracking and positioning method and system based on laser coding
CN113823129A (en) Method and device for guiding disassembly and assembly of turning wheel equipment based on mixed reality
JPWO2022180801A5 (en)
Liu et al. AR-Driven Industrial Metaverse for the Auxiliary Maintenance of Machine Tools in IoT-Enabled Manufacturing Workshop
CN117549486A (en) Equipment operation correction system based on virtual reality
Ho et al. Supervised control for robot-assisted surgery using augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20220726)