CN110561450A - Robot assembly offline example learning system and method based on dynamic capture - Google Patents

Robot assembly offline example learning system and method based on dynamic capture

Info

Publication number
CN110561450A
CN110561450A
Authority
CN
China
Prior art keywords
assembly
robot
track
motion
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910816236.5A
Other languages
Chinese (zh)
Other versions
CN110561450B (en)
Inventor
楼云江
胡浩鹏
曹芷琪
赵智龙
杨先声
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201910816236.5A
Publication of CN110561450A
Application granted
Publication of CN110561450B
Active legal status
Anticipated expiration legal status

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1628 - Programme controls characterised by the control loop
    • B25J9/163 - Programme controls characterised by the control loop: learning, adaptive, model based, rule based expert control
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by motion, path, trajectory planning
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B25J9/1687 - Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • B25J11/00 - Manipulators not otherwise provided for
    • B25J13/00 - Controls for manipulators
    • B25J19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 - Sensing devices
    • B25J19/04 - Viewing devices

Abstract

The invention relates to an offline example learning system and method for robot assembly. An optical motion capture device collects motion data of a user's hands during manual precision assembly; an assembly trajectory is synthesized offline and then used to guide a robot to complete the precision assembly motion. The scheme performs offline processing of the assembly motion data of the arms and hands of skilled operators, removes anomalies from the extracted data, and optimizes the trajectory for the actual robot system, so that the resulting high-precision example-learning motion trajectory has excellent transferability.

Description

Robot assembly offline example learning system and method based on dynamic capture
Technical Field
The invention relates to an offline example learning system and method for robot assembly, and in particular to an example learning system and method that uses an optical motion capture device to collect motion data of a user's hands during manual precision assembly, synthesizes an assembly trajectory offline, and then guides a robot to complete the precision assembly motion.
Background
Industrial robots have been widely used on production lines throughout the industrial field, but they have not yet been widely applied to precision assembly lines, of which 3C assembly is representative. At present, assembly tasks in 3C manufacturing are still mainly completed manually, and the assembly process is time-consuming and labor-intensive. With the shortage of labor and the steady rise of labor costs, the demand for automating precision assembly lines is increasing. The main reasons industrial robots are difficult to apply are that 3C products (such as mobile phones, tablet computers, and notebook computers) iterate frequently and have short life cycles, which directly results in frequent line changes on 3C assembly lines; at the same time, the 3C assembly process is complex, the precision requirements are high, and the working space is small, making robot programming very difficult for robot technicians. Faced with such problems, conventional robot programming and control methods are too time-consuming and inflexible, which limits both the speed at which robots adapt to new tasks and the application of robots in the assembly field. Technology that can quickly and flexibly transfer the assembly skills of workers on the assembly line to robots is therefore significant for expanding the range of robot applications and improving the automation level of precision assembly lines.
Example learning provides an efficient way to simplify the robot programming process and is currently an important means of improving the ease of use of robots. Compared with traditional robot motion planning and control methods that rely on manual programming, example learning has two major advantages. First, it provides a simple and intuitive way for a user to convey task information to the robot, greatly reducing the professional programming knowledge required of robot users and making it easier for non-expert users to participate in robot assembly lines. Second, through example learning, a user can easily transfer a complex motion trajectory to the robot, allowing the robot to flexibly complete complex assembly work. In summary, applying the offline example learning method to robot precision assembly, of which 3C assembly is representative, can effectively overcome the excessive time cost and inflexibility of traditional programming methods, enable rapid programming of robots on assembly lines, and promote the automation of assembly lines.
Yuxin Yi, Xu Tuo, Bai Ji et al. proposed a teaching-robot data acquisition system based on optical motion capture (China, CN109848964A [P], 2019-06-07). Although its accuracy is improved, the reflective points are mounted on a specially made demonstration tool and calibrated in a fixed working area, so the collected data generalizes poorly and is limited by the environment and equipment. Zuo Yao, Yan Xin, and Ni Xin proposed a robot teaching method, device, and system (China, CN106182003 [P], 2016-12-07) that uses an inertial measurement unit (IMU) to collect data during the teaching motion; however, the IMU is susceptible to external disturbances, its position error is large, and precision cannot be guaranteed. Xi Jundu, Xu Guanghua et al. proposed a Leap Motion-based virtual assembly teaching method; Leap Motion can simulate hand gestures well, but its stability is poor and the transferability of the virtual assembly is low.
Disclosure of Invention
Aimed at the lack in the prior art of solutions for narrow-space, high-precision industrial assembly, the invention provides a robot assembly offline example learning system based on optical motion capture. The system collects assembly motion data from the arms and hands of skilled operators through optical motion capture equipment and processes it offline; the demonstration process is flexible and efficient, and the resulting high-precision example-learning motion trajectory has excellent transferability.
The first aspect of the technical scheme of the invention is an off-line example learning system for robot assembly based on dynamic capture, which comprises an optical motion capture platform, a data acquisition module, a data preprocessing module, an off-line robot motion track generation module and a simulation verification module, wherein:
The optical motion capture platform comprises a plurality of optical motion capture cameras, network equipment, computing equipment and a plurality of reflective mark points arranged on the hands of assembly demonstration personnel, wherein the optical motion capture cameras are symmetrically arranged around the assembly demonstration workbench, and each optical motion capture camera is connected to the computing equipment through the network equipment;
The data acquisition module is connected with the optical motion capture platform and is used for acquiring demonstration motion tracks of arms and hands of assembling demonstration personnel;
The data preprocessing module is connected with the data acquisition module and is used for offline preprocessing of the collected arm and hand motion trajectories: eliminating noise data and irrelevant data, fusing the assembly trajectories from multiple demonstrations, and obtaining an assembly motion model modeled by a Gaussian mixture model;
The off-line robot motion track generation module is connected with the data preprocessing module and used for outputting a smooth task space robot assembly motion track by a Gaussian mixture regression method;
The simulation verification module is used for transferring the track learned by the offline example to a simulation environment and controlling the robot in the simulation platform to simulate assembly actions.
According to some embodiments, the data acquisition module is configured to:
Calibrate the optical motion capture cameras through an application implementing a visual calibration algorithm;
After calibration is complete, send an acquisition command to the optical motion capture cameras through the computing equipment, and collect the position and posture data of the reflective marker points attached to the arms and hands of skilled assembly workers;
Exchange data with the optical motion capture cameras through the switch, receive the position and posture data of the user's arm and hand motions collected by the cameras, analyze the received data offline, and generate motion trajectory information executable by the robot.
According to some embodiments, the data preprocessing module comprises:
An anomaly detection unit, configured to compute the outlier factor of each reflective marker point at each sampling moment, treat points whose outlier factor exceeds a given threshold as sampling noise, and remove them from the demonstration data set;
A trajectory segmentation unit, configured to cluster the samples with a density clustering method, using the speed of each reflective marker point at each sampling moment of each assembly demonstration as the feature of that sampling moment;
A trajectory fusion unit, configured to: first, model the assembly trajectories from multiple demonstrations obtained by the data preprocessing module with a Gaussian mixture model, choosing the number of Gaussian kernels by maximizing the Bayesian information criterion; next, learn the parameters of each Gaussian kernel from the multiple trajectory segments with the expectation-maximization method; and finally, obtain an assembly motion model, modeled by the Gaussian mixture model, that reflects the key information of the assembly motion.
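The fusion pipeline of the trajectory fusion unit (GMM modeling, kernel count chosen by the Bayesian information criterion, parameters learned by expectation maximization) can be sketched as follows. This is an illustrative toy, not the patent's implementation: scikit-learn's GaussianMixture is an assumed stand-in (its fit method runs EM internally), and synthetic (time, position) samples stand in for the captured marker trajectories. Note that scikit-learn defines bic() so that lower is better, which matches the criterion maximization described here up to a sign convention.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-in for three demonstrations of the same assembly motion:
# each sample is a (time, position) pair with small sensor noise.
t = np.linspace(0.0, 1.0, 50)
demos = [np.column_stack([t, np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(50)])
         for _ in range(3)]
data = np.vstack(demos)

# Choose the number of Gaussian kernels by the Bayesian information criterion,
# then keep the model fitted with that kernel count (fit() runs EM).
models = {k: GaussianMixture(n_components=k, random_state=0).fit(data)
          for k in range(1, 8)}
best_k = min(models, key=lambda k: models[k].bic(data))
gmm = models[best_k]

# The fused "assembly motion model": per-kernel priors, means, covariances.
print(best_k, gmm.weights_.round(3))
```

The fitted weights_, means_, and covariances_ together play the role of the assembly motion model that the later regression stage consumes.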
According to some embodiments, the offline robot motion trajectory generation module is configured to:
Solve the inverse kinematics of the obtained manual assembly trajectory to obtain the sequence of all joint angles of the robot during the motion;
Compute the cost value of the current trajectory through a cost function;
Compute the selection probability of each frame of the trajectory through a frame selection strategy, select a subset of frames of the joint angle sequence according to the strategy to form a new sequence, and compute the cost value again;
Compute a reward value, and update the probability of each frame in the joint angle sequence being selected according to the reward value;
Update the whole joint angle trajectory sequence, delete frames whose probability is below a given threshold, and form a new joint angle sequence from the remaining frames;
If the absolute value of the reward value is below a first threshold or the number of iterations exceeds a second threshold, output the current remaining frame sequence, perform minimum-time velocity planning and interpolation in joint space, and output the motion trajectory with the least motion cost as the target trajectory.
According to some embodiments, the simulation verification module is configured to:
Import the trajectory optimized offline by the trajectory generation module into the simulation environment of the robot simulation platform V-REP;
Match a robot system model, import a three-dimensional model of the assembled part, control the robot in the simulation environment to simulate assembly along the offline assembly trajectory, and verify whether the assembled part reaches the expected position and posture.
According to some embodiments, the robot assembly offline example learning system further comprises an offline assembly platform comprising at least one robot body, a tool storage area for the robot gripper, an assembly fixture for positioning the part to be assembled, and an assembly storage area.
The second aspect of the technical scheme of the invention is an off-line robot assembly method based on dynamic capture, which comprises the following steps:
A. Collect the demonstration motion trajectories of the arms and hands of the assembly demonstrator through an optical motion capture device;
B. Perform offline preprocessing on the collected arm and hand motion trajectories: process the data from the data acquisition module with an anomaly detection algorithm based on local outlier factors to obtain the position of each reflective marker point at each sampling moment of each assembly demonstration; compute the outlier factor of each reflective marker point at each sampling moment; treat points whose outlier factor exceeds a preset threshold as sampling noise and remove them from the demonstration data set; then fuse the assembly trajectories from the multiple demonstrations to obtain an assembly motion model modeled by a Gaussian mixture model;
C. Generate the robot assembly motion trajectory from the obtained assembly motion model and the actual initial and target poses of the parts in the robot assembly, where the motion information of the hand marker points is analyzed to obtain the assembly-fixture pose and state information required for robot assembly, and the generated robot assembly trajectory is adjusted by a post-processing algorithm to obtain a minimum-time motion trajectory suitable for robot assembly;
D. Import the trajectory optimized offline by the trajectory generation module into the simulation environment of a robot simulation platform, match a robot system model, import a three-dimensional model of the assembled part, control the robot in the simulation environment to simulate assembly along the offline assembly trajectory, and verify whether the assembled part reaches the expected position and posture.
According to some embodiments, said step B further comprises the steps of:
Given a set of samples {x_i}, i = 1, 2, 3, …, define

RD_k(x, x′) = max(‖x − x^(k)‖, ‖x − x′‖)

as the reachability distance, where x^(k) is the k-th sample in {x_i} closest to x and k is a manually selected integer parameter;

the local reachability density of a sample x is the reciprocal of the average reachability distance over its k nearest neighbours N_k(x),

LRD_k(x) = 1 / ( (1/k) Σ_{x′ ∈ N_k(x)} RD_k(x′, x) ),

and the local outlier factor of x is the average ratio of the local reachability density of its neighbours to its own,

LOF_k(x) = (1/k) Σ_{x′ ∈ N_k(x)} LRD_k(x′) / LRD_k(x);

if the local outlier factor of a sample x exceeds the preset value, the corresponding sample x is eliminated from the demonstration data.
According to some embodiments, step C further comprises the steps of:
C1. Solve the inverse kinematics of the obtained manual assembly trajectory to obtain the sequence of all joint angles of the robot during the motion, where the joint angles q_i at each time point define a frame; let k = 1;
C2. Compute the cost value C_k of the current trajectory ξ^(0) through the cost function;
C3. Compute the selection probability P_i of each frame of the trajectory through the frame selection strategy, select a subset of frames of the joint angle sequence according to the strategy to form a new sequence, and compute the cost value again;
C4. Compute the reward value R_k = C_{k−1} − C_k;
C5. Update the probability P_i that each frame in the joint angle sequence is selected according to the reward value;
C6. Update the whole joint angle trajectory sequence, delete frames whose probability is below a given threshold, form a new joint angle sequence from the remaining frames, and let k = k + 1;
C7. If the absolute value of the reward value is below a first threshold or the number of iterations exceeds a second threshold, output the current remaining frame sequence; otherwise, return to step C2.
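The iterative frame-selection loop C1-C7 can be sketched as follows. The patent's actual cost function and frame selection strategy are not reproduced in the text, so both are assumptions here: joint-space path length serves as the cost, and a simple sign-of-reward probability update stands in for the selection strategy; a toy joint-angle sequence replaces the inverse-kinematics step.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(traj):
    # Assumed stand-in for the cost function in step C2 (the patent's formula
    # is not reproduced here): total joint-space path length of the sequence.
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

def prune_trajectory(traj, step=0.05, eps=1e-4, max_iter=50, drop_thresh=0.2):
    """Steps C1-C7 as an iterative frame-selection loop (illustrative only)."""
    p = np.full(len(traj), 0.9)              # probability of keeping each frame
    p[0] = p[-1] = 1.0                       # never drop the endpoints
    c_prev = cost(traj)                      # C2: cost of the current trajectory
    for _ in range(max_iter):
        keep = rng.random(len(traj)) < p     # C3: sample a candidate subsequence
        keep[0] = keep[-1] = True
        c_new = cost(traj[keep])
        reward = c_prev - c_new              # C4: R_k = C_{k-1} - C_k
        # C5 (assumed update rule): reinforce the kept frames when the
        # subsequence lowered the cost, weaken them otherwise.
        p[keep] = np.clip(p[keep] + step * np.sign(reward), 0.0, 1.0)
        p[~keep] = np.clip(p[~keep] - step * np.sign(reward), 0.0, 1.0)
        p[0] = p[-1] = 1.0
        c_prev = c_new
        if abs(reward) < eps:                # C7: converged
            break
    survivors = p >= drop_thresh             # C6: drop unlikely frames
    survivors[0] = survivors[-1] = True
    return traj[survivors]

# Toy 1-DOF joint-angle sequence standing in for the inverse-kinematics
# solution of step C1 (200 frames, small noise).
q = np.linspace(0.0, 1.0, 200)[:, None] + 0.01 * rng.standard_normal((200, 1))
pruned = prune_trajectory(q)
```

The surviving frames would then feed the minimum-time velocity planning and interpolation stage described above.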
Drawings
FIG. 1 is a schematic illustration of a kinetic capture platform for a robotic assembly offline example learning system, in an embodiment.
FIG. 2 is a block diagram of software modules of a robot assembly offline example learning system, in an embodiment.
FIG. 3 is a general flow diagram of a robot assembly offline example learning system in an embodiment.
Fig. 4 is a flowchart of a robot assembly trajectory optimization method for offline example learning in an embodiment.
Fig. 5 is the motion trajectory curve of the robot end effector before trajectory optimization.
Fig. 6 is the motion trajectory curve of the robot end effector after trajectory optimization.
Fig. 7 depicts an illustrative application example of the present invention.
Detailed Description
The conception, specific structure, and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, schemes, and effects of the invention may be fully understood.
FIG. 1 is a diagram illustrating an optical motion capture platform, according to one embodiment. The optical motion capture platform 10 comprises a set of optical motion capture cameras 11 (e.g., high-precision infrared cameras), network devices 12 (e.g., routers, Ethernet switches, etc.), and computing devices 13 (e.g., industrial controllers, industrial personal computers, PCs, etc.). As shown, a plurality of (e.g., six) optical motion capture cameras 11 are mounted on a support stand above the workspace where assembly is demonstrated and photograph the workspace (e.g., 0.4 m × 0.6 m × 1 m in length, width, and height) obliquely from multiple angles. A plurality of retro-reflective marker points 14 are affixed at key positions such as the arm joints and fingertips of an assembler (e.g., a skilled assembly worker). Preferably, the cameras are arranged symmetrically around the workbench as shown in FIG. 1; an optimized spatial layout of the cameras ensures capture precision for the fine motions of both hands, and fusing data from the multi-angle cameras avoids mutual occlusion of the hand motions, thereby ensuring the consistency of the captured motion. Thus, within the workspace, all of the cameras are configured to simultaneously capture visual information of the real-time positions and poses reflected by the retro-reflective marker points on the assembler's arms and hands during the assembly process. Further, each camera 11 may be fitted with a light source.
Referring to fig. 2, in an embodiment, a robot assembly offline example learning system includes a data acquisition module, a data preprocessing module, an offline robot motion trajectory generation module, and a simulation verification module. These modules may run integrated in the computing device 13 of the optical motion capture platform, or in other computing devices of the robot assembly offline example learning system.
The data acquisition module is configured to: calibrate the optical motion capture cameras 11 through an application implementing a visual calibration algorithm; after calibration is complete, send an acquisition command to the cameras through the computing equipment and collect the position and posture data of the reflective marker points 14 attached to the arms and hands of skilled assembly workers; and exchange data with the optical motion capture cameras 11 through the switch 12, receiving the position and posture data of the user's arm and hand motions, analyzing the received data offline, and generating motion trajectory information executable by the robot. In addition, the position change data of the reflective marker points captured by the cameras is sent to an upper computer, which stores the two-hand assembly motion data.
The data preprocessing module is used for performing off-line processing on the acquired motion data and removing noise so as to reduce redundant data. In one embodiment, the data pre-processing module is configured to perform anomaly detection, trajectory segmentation, and trajectory fusion.
Anomaly detection processes the data from the data acquisition module with an anomaly detection algorithm based on local outlier factors to obtain the position of each reflective marker point at each sampling moment of each assembly demonstration. The outlier factor of each reflective marker point at each sampling moment is computed. Points whose outlier factor exceeds a given threshold are treated as sampling noise and excluded from the demonstration data set to improve post-processing efficiency.
Trajectory segmentation clusters the samples with a density clustering algorithm, using the speed of each reflective marker point at each sampling moment of each assembly demonstration as the feature of that sampling moment. In this way the trajectory can be segmented so that only the trajectory data related to assembly is retained, while irrelevant trajectory data (such as the motion of the user's arm and hand from an arbitrary position to the part to be assembled, or away from the part after assembly) is eliminated, simplifying the later learning process.
Trajectory fusion proceeds as follows: first, a Gaussian Mixture Model (GMM) is used to model the assembly trajectories from multiple demonstrations obtained by the data preprocessing module, with the number of Gaussian kernels chosen by maximizing the Bayesian Information Criterion (BIC); next, the parameters of each Gaussian kernel (mean, covariance, and prior probability) are learned from the multiple trajectory segments with the Expectation-Maximization (EM) method; finally, an assembly motion model, modeled by the Gaussian mixture model and reflecting the key information of the assembly motion, is obtained.
The offline robot motion trajectory generation module is used to generate a smooth assembly trajectory. It is configured to output a smooth and efficient task-space (Cartesian-space) robot assembly motion trajectory through Gaussian Mixture Regression (GMR), based on the assembly motion modeled by the Gaussian mixture model obtained from the data preprocessing module. The trajectory can be generated for the specific initial and target positions of the robot assembly, so that the assembly is not limited to the initial and target part positions used during the manual demonstration. The assembly trajectory is highly transferable, is not tied to a particular assembly site or equipment, and can be rapidly deployed on robot assembly systems of different models and configurations.
The simulation verification module migrates the trajectory learned offline to a simulation environment and controls the robot in the simulation platform to complete the same assembly action. The offline example-learning trajectory can be migrated to any robot system that meets the assembly degree-of-freedom and workspace requirements, so the example-learning result is independent of any specific robot system. In addition, the simulation verification module provides a virtual environment for simulating the assembly example, with the following steps:
1) Import the trajectory optimized offline by the trajectory generation module into a robot simulation platform (for example, the simulation environment of V-REP);
2) Select a robot system model; any robot system meeting the assembly degree-of-freedom and workspace requirements may be selected, including but not limited to serial robots, parallel robots, single-arm robot systems, or dual-arm robot systems;
3) Through example learning, the robot system can efficiently complete high-precision assembly tasks in the same assembly scene and workspace as the skilled worker.
When the virtual assembly simulation of the robot passes verification, the simulation verification module transmits the debugged robot motion instructions and data to the robot controller to control the robot's motion in the actual assembly.
FIG. 3 is a block flow diagram of a method of a robot assembly offline example learning system, in an embodiment. The method comprises the following steps:
S1: The demonstration motion trajectories of the arms and hands of the assembly demonstrator are collected by the optical motion capture device.
S2: The collected arm and hand motion trajectories are preprocessed offline, noise data and irrelevant data are eliminated, the assembly trajectories from multiple demonstrations are fused, and an assembly motion model modeled by a Gaussian mixture model is obtained.
S3: The robot assembly motion trajectory is generated from the obtained assembly motion model and the actual initial and target poses of the parts in the robot assembly. The motion information of the hand marker points is analyzed to obtain the assembly-fixture pose and state information required for robot assembly. A post-processing algorithm then reprocesses the generated robot assembly trajectory to obtain a minimum-time motion trajectory suitable for robot assembly.
S4: The robot is controlled to perform on-site assembly according to the obtained assembly trajectory and assembly fixture information, completing the whole assembly demonstration learning process.
It can be understood that the above steps S1 and S2 mainly relate to robot assembly demonstration track extraction, and the step S3 mainly relates to robot assembly track optimization. Details of each step are described further below.
In some embodiments, step S1 further includes:
S1.1: Build the motion capture platform.
First, the cameras are fixed around the experimental platform, one end of each Ethernet cable is connected to a camera, and the other end is connected to the switch. The switch supplies power and data communication to the cameras; the data acquisition module, data preprocessing module, offline robot motion trajectory generation module, and simulation verification module form a local area network, and the computing equipment sends control commands.
S1.2: Collect user assembly demonstration data.
Reflective marker points are affixed to the key positions (fingertips and finger joints) of the user's arms and hands, and the same assembly action is repeatedly demonstrated in the designated assembly working area. The cameras are high-precision infrared motion capture cameras that capture the position and posture information of the reflective marker points attached to the user's arms and hands: the marker points reflect light from the camera flash unit, the camera lens collects the reflected light in the scene to form an image focused on the camera sensor plane, and the position of each reflective marker point is resolved. This information is transmitted through the switch to the upper computer for storage.
In some embodiments, step S2 further includes:
S2.1: provided is an abnormality detection method.
the anomaly detection method comprises the step of processing the data acquisition module through an anomaly detection algorithm based on local anomaly factors to acquire position information of each reflective marker point at each sampling moment in each assembly demonstration. And calculating the abnormal factor of each adopted moment of each light reflecting mark point. And (4) regarding the points with the abnormal factors larger than a given threshold value as sampling noise points and excluding the sampling noise points from the demonstration data set so as to improve the post-processing efficiency. The specific implementation mode is as follows:
Given a set of samples {x_i}, i = 1, 2, 3, …, define

RD_k(x, x′) = max(‖x − x^(k)‖, ‖x − x′‖)   (1)

as the reachability distance, where x^(k) is the k-th sample in {x_i} closest to x and k is a manually selected integer parameter.

The local reachability density of a sample x is the reciprocal of the average reachability distance over its k nearest neighbours N_k(x):

LRD_k(x) = 1 / ( (1/k) Σ_{x′ ∈ N_k(x)} RD_k(x′, x) )   (2)

The local outlier factor (LOF) is then defined from (1) and (2) as the average ratio of the neighbours' local reachability densities to that of x:

LOF_k(x) = (1/k) Σ_{x′ ∈ N_k(x)} LRD_k(x′) / LRD_k(x)   (3)

As the LOF of a sample x rises, the likelihood that x is an outlier rises; samples whose LOF exceeds the preset threshold are eliminated from the demonstration data.
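A minimal NumPy sketch of this local-outlier-factor filter follows. It is illustrative only: the neighbour count k and the rejection threshold are free choices, not values from the patent, and a single injected glitch stands in for a real sampling noise point.

```python
import numpy as np

def lof_scores(X, k=5):
    """Local outlier factor of each row of X: the reachability distance is
    RD_k(x, x') = max(||x - x^(k)||, ||x - x'||); the local reachability
    density is the reciprocal of the mean reachability distance over the k
    nearest neighbours; LOF is the mean neighbour-to-point density ratio."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]          # k nearest neighbours
    k_dist = np.take_along_axis(d, nbrs[:, -1:], axis=1).ravel()  # ||x - x^(k)||
    # Reachability distance from each point to each of its neighbours.
    rd = np.maximum(k_dist[nbrs], np.take_along_axis(d, nbrs, axis=1))
    lrd = 1.0 / rd.mean(axis=1)                       # local reachability density
    return lrd[nbrs].mean(axis=1) / lrd               # local outlier factor

rng = np.random.default_rng(2)
X = 0.05 * rng.standard_normal((100, 3))              # dense marker samples
X = np.vstack([X, [[1.0, 1.0, 1.0]]])                 # one injected glitch
scores = lof_scores(X, k=5)
clean = X[scores <= 1.5]                              # threshold is a free choice
```

Inlier samples score close to 1 while the glitch scores far higher, so a modest threshold separates them.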
S2.2: A trajectory segmentation method is provided.
The trajectory segmentation method performs density clustering with a density clustering algorithm, taking the speed of each reflective marker point at each sampling moment of each assembly demonstration as the feature of that sampling moment, so that sampling moments belonging to the same motion phase are grouped together.
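As a rough sketch of this step, the per-sample speeds can be density-clustered in one dimension: runs of sampling moments whose speeds lie within a small eps of each other form dense groups (slow positioning phases versus fast transport phases). This is a simplified stand-in for a full density-clustering algorithm such as DBSCAN, and the eps and min_pts values are illustrative.

```python
import numpy as np

def density_cluster_1d(v, eps=0.05, min_pts=5):
    """Cluster scalar speed features: sort them and split where the gap
    between consecutive speeds exceeds eps; runs shorter than min_pts are
    labelled noise (-1), mimicking DBSCAN's density criterion in 1-D."""
    n = len(v)
    labels = np.full(n, -1)
    order = np.argsort(v)
    sv = v[order]
    cluster, start = 0, 0
    for i in range(1, n + 1):
        if i == n or sv[i] - sv[i - 1] > eps:
            if i - start >= min_pts:
                labels[order[start:i]] = cluster
                cluster += 1
            start = i
    return labels

np.random.seed(0)
# a dwell phase (~0.01 m/s) followed by a transport phase (~0.5 m/s)
v = np.concatenate([np.random.normal(0.01, 0.002, 50),
                    np.random.normal(0.50, 0.020, 50)])
labels = density_cluster_1d(v)
```

The two speed regimes land in separate clusters, which is the cue used to cut the demonstration into motion segments.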
S2.3: a trajectory fusion method.
First, the assembly trajectories from the multiple assembly demonstrations obtained by the data preprocessing module are modelled with a Gaussian mixture model, with the number of Gaussian kernels chosen by the method of maximizing the Bayesian information criterion; the multiple assembly trajectory segments are then learned with the expectation-maximization method to obtain the parameters of each Gaussian kernel (mean, covariance and prior probability); finally, an assembly motion model, modelled by the Gaussian mixture model and reflecting the key information of the assembly motion, is obtained.
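A sketch of this fusion step using scikit-learn (an assumption — the patent does not name a library): stack the demonstrations, score candidate kernel counts with the Bayesian information criterion, and fit the chosen model by expectation maximization. The synthetic trajectories and the candidate range 2–8 are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# illustrative demonstration data: five noisy repetitions of one trajectory,
# each row is (time, x, y, z)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = [np.column_stack([t,
                          np.sin(2 * np.pi * t) + rng.normal(0, 0.01, t.size),
                          np.cos(2 * np.pi * t) + rng.normal(0, 0.01, t.size),
                          t ** 2 + rng.normal(0, 0.01, t.size)])
         for _ in range(5)]
X = np.vstack(demos)

# choose the number of Gaussian kernels by the Bayesian information criterion,
# then fit the final model by expectation maximization
bics = {k: GaussianMixture(n_components=k, covariance_type='full',
                           random_state=0).fit(X).bic(X)
        for k in range(2, 9)}
best_k = min(bics, key=bics.get)
gmm = GaussianMixture(n_components=best_k, covariance_type='full',
                      random_state=0).fit(X)
# means_, covariances_ and weights_ are the per-kernel parameters
# (mean, covariance, prior probability) referred to in the text
```

scikit-learn reports BIC in its "lower is better" form, so minimizing it here corresponds to maximizing the criterion as the patent phrases it.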
in some embodiments, step S2 further includes:
S2.4: and generating an offline track.
The offline robot motion trajectory generation module takes the assembly motion modelled by the Gaussian mixture model from the data preprocessing module and, through the Gaussian mixture regression method, outputs a smooth and efficient robot assembly motion trajectory in task space (Cartesian space). The method can generate the trajectory from the specific initial and target positions of the robot at assembly time, so robot assembly is not limited to the initial and target part positions used during the manual demonstration.
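Gaussian mixture regression conditions the fitted mixture on time: each kernel contributes its conditional mean, weighted by its responsibility for the queried instant. The sketch below is a minimal illustration with a one-dimensional output and hand-picked kernel parameters; all names and numbers are assumptions, not the patent's.

```python
import numpy as np

def gauss(t, mu, var):
    """1-D Gaussian density."""
    return np.exp(-0.5 * (t - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmr(t_query, means, covs, priors):
    """Gaussian mixture regression: condition a GMM over (t, pos) on time
    and return the expected position E[pos | t] for each queried instant."""
    t_query = np.atleast_1d(t_query)
    out = np.zeros((t_query.size, means.shape[1] - 1))
    for j, t in enumerate(t_query):
        # responsibility of each kernel for this time instant
        h = np.array([p * gauss(t, m[0], C[0, 0])
                      for p, m, C in zip(priors, means, covs)])
        h = h / h.sum()
        # per-kernel conditional mean: mu_pos + C_pos,t / C_t,t * (t - mu_t)
        cond = np.array([m[1:] + C[1:, 0] / C[0, 0] * (t - m[0])
                         for m, C in zip(means, covs)])
        out[j] = h @ cond
    return out

# two kernels over (time, x): the motion goes from x = 0 early to x = 1 late
means = np.array([[0.25, 0.0], [0.75, 1.0]])
covs = np.array([[[0.02, 0.0], [0.0, 0.01]]] * 2)
traj = gmr(np.linspace(0, 1, 5), means, covs, np.array([0.5, 0.5]))
```

Because the regression is an expectation, the output interpolates smoothly between the kernels rather than jumping between demonstrated samples.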
S2.5: and (5) assembling and analyzing a clamp.
The trajectories of the reflective marker points at the joints and fingertips of the two hands are analysed to obtain the assembly actions of the hands, and the hand actions are matched against data to determine the type of gripper best suited to the assembly action.
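The patent does not spell out the matching procedure; one plausible minimal sketch is nearest-neighbour matching of hand-motion features against a small gripper library. Every feature, gripper name and number below is hypothetical.

```python
import numpy as np

# hypothetical gripper library: feature vector = (grip aperture [m],
# number of fingertips in contact, wrist rotation during the grasp [rad])
GRIPPERS = {
    "pneumatic_two_jaw": np.array([0.03, 2, 0.1]),
    "three_finger": np.array([0.06, 3, 0.2]),
    "electric_screwdriver": np.array([0.01, 2, 3.1]),
}

def match_gripper(hand_features):
    """Pick the gripper whose nominal feature vector is closest to the
    observed hand-motion features (simple nearest-neighbour matching)."""
    return min(GRIPPERS, key=lambda g: np.linalg.norm(GRIPPERS[g] - hand_features))

# observed: small aperture, two fingers, large wrist rotation -> screwdriving
tool = match_gripper(np.array([0.012, 2, 2.8]))
```

A real system would likely use richer grasp features and a learned classifier, but the matching idea is the same.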
With respect to step S3:
Because of the structural differences between the human arm and the robot, the assembly trajectory of the human hand is not directly suitable for the robot to execute, so a trajectory post-processing module is added to improve the robot's assembly efficiency. The post-processing flow, shown in Fig. 4, optimizes the human-hand assembly trajectory, removes noise and irrelevant actions from the hand motion, and completes the conversion from the hand assembly motion trajectory to a motion trajectory suitable for robot assembly.
thus, in some embodiments, step S3 further includes:
S3.1: firstly, all joint angle sequences of the robot in the motion process are obtained through inverse kinematics solution of the artificial assembly track obtained by the off-line processing moduleJoint angle q at each time pointiDefined as a frame. Let k equal to 1.
S3.2: the cost function is defined as (here, 6-degree-of-freedom robot example):Calculating the current track xi(0)Cost C ofk
s3.3: defining a frame selection policyCalculating to obtain the selected probability P of each frame under the tracki. And selecting partial frames in the joint angle sequence according to a frame selection strategy to form a new sequence, and calculating the cost function of the new sequence again.
S3.4: calculating a reward value Rk=Ck-1-Ck,CkNamely the cost function of the manual assembly track.
S3.5: updating the probability P that each frame in the sequence of joint angles is selected according to a reward valuei. The update strategy is: each round of selected frames is updated toUnselected frames are updated towherein 0<α<1 is the update rate. And S (xi) is determined according to the iteration result of each time, if the assembly task is well completed, the S (xi) is 1, and if not, the S (xi) is 0.
s3.6: and updating the whole joint angle track sequence. The method is to delete the frames with the probability lower than a certain threshold, and form the rest frames into a new joint angle sequence, and let k be k + 1.
S3.7: and judging whether the absolute value of the reward value is lower than a small constant or the iteration number is larger than a certain threshold, if not, returning to the step S3.2 to continue execution, and if so, outputting the current remaining frame sequence.
s3.8: and (4) taking the frame sequence output in the step (S3.7) as a path, performing speed planning interpolation in the joint space in the shortest time, and outputting a motion track in the shortest time.
As can be seen from Figs. 5 and 6, the assembly trajectory obtained by the post-processing module is simpler and more efficient than the manually demonstrated assembly trajectory, and is suitable for execution by a robot.
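The minimum-time speed planning of S3.8 is commonly realised with per-joint trapezoidal velocity profiles synchronised on the slowest joint. The sketch below illustrates that idea; the limits and displacements are illustrative, and the patent does not prescribe a particular profile.

```python
import math

def min_time_trapezoid(dq, v_max, a_max):
    """Minimum time to traverse a joint displacement dq under velocity and
    acceleration limits, using a trapezoidal (or triangular) profile."""
    dq = abs(dq)
    t_acc = v_max / a_max                  # time to reach the velocity limit
    d_acc = 0.5 * a_max * t_acc ** 2       # distance covered while accelerating
    if dq < 2 * d_acc:                     # triangular: never reaches v_max
        return 2 * math.sqrt(dq / a_max)
    return 2 * t_acc + (dq - 2 * d_acc) / v_max

def segment_time(q_from, q_to, v_max, a_max):
    """Synchronise all joints on the slowest one for a point-to-point segment."""
    return max(min_time_trapezoid(b - a, v, acc)
               for a, b, v, acc in zip(q_from, q_to, v_max, a_max))

# one 6-DOF segment; per-joint limits (rad/s, rad/s^2) are illustrative
T = segment_time([0.0] * 6, [0.5, -0.3, 0.8, 0.1, 0.0, 0.2],
                 [1.0] * 6, [2.0] * 6)
```

Here the third joint (0.8 rad at 1 rad/s, 2 rad/s²) is the slowest and sets the segment time; the other joints would then be time-scaled to finish together.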
In some embodiments, step S4 further includes:
S4.1: and (5) verifying the assembly of the robot. Firstly, selecting a proper assembly fixture according to an assembly task by using fixture state data obtained through an offline data processing module, automatically replacing the fixture through a robot tail end quick-change device, then carrying out field assembly by using a shortest-time assembly track obtained through a post-processing module, and evaluating the assembly effect.
An exemplary guide-rail mounting application of the present invention is described with reference to Figs. 7 and 1. In one example, on the optical motion capture platform 10 shown in Fig. 1, a skilled assembler picks up a guide rail from the assembly storage area 26, mounts it on the part to be assembled 24 on the assembly jig table 25, and fastens it with screws. The process is repeated several times so that the data acquisition module of the system can collect enough motion positions and posture values of the arms, palms and fingers, from which a manual assembly trajectory is generated.
For the extracted manual assembly trajectory data, step S2 above is executed by the computing device 13 to perform abnormal-data elimination, trajectory segmentation, trajectory fusion, offline trajectory generation and assembly gripper analysis. For the manual rail-assembly example, the end of the motion (such as the position and posture of the fingers) is confined to the range of the assembly jig table 25 and the assembly storage area 26, so path data beyond this motion range, or path data that would cause assembly interference, can be excluded as abnormal data. In addition, the manual assembly trajectory can be segmented by assembly step, such as a rail pick-and-place step, a rail/part positioning step and a screw mounting step, distinguished by the captured speed and motion type of the reflective markers. For example, when the rail is picked and placed by hand, the finger motion is mainly spatial translation with obvious elbow and shoulder movement, so that group of motion trajectories is classified as a pick-and-place trajectory. When the fingers and arm clamp the rail and keep it still for a given time, the group of motion trajectories is classified as a positioning trajectory; and when the collected trajectory data indicate that only the palm and fingers operate a wrench tool through part of a turn, the group is classified as a fastener installation trajectory. Then, in the trajectory fusion method, the generated assembly motion model reflects the key information of the assembly motion, such as the key points of a reasonable rail transport path, the precise mounting direction of the rail, and the fastener type and its installation position.
Then, in the computing device 13, step S3 above is executed: the preprocessed data are matched to a suitable robot (for example the serial robot 21 shown in Fig. 7), the joint angle corresponding to each motion frame is solved by inverse kinematics from the end-effector pose, minimum-time speed-planning interpolation is performed in joint space, and the minimum-time motion trajectory is output. The gripper 23 at the robot end is also configured according to the assembly step, for example a pneumatic gripper for the pick-and-place and positioning steps and an electric screwdriver for the screw-mounting step. Further, by synchronizing with the robot controller and its gripper library data, the minimum-time assembly trajectory obtained through the post-processing module can be used when step S4 above is executed on the computing device 13, with gripper nodes configured between assembly steps for matching or replacing grippers. For example, as shown in Fig. 7, before the rail is carried from the assembly storage area 26 to the part to be assembled 24 on the assembly jig table 25, a pneumatic gripper for holding the rail to be mounted is configured at the node where the robot switches from the previous step to the rail pick-up step. With the gripper configured at this node, a path for the robot end to move to the tool storage area 22 to install or replace the gripper can be introduced through the robot controller and the gripper management device of the tool storage area 22. In this way the manually demonstrated assembly trajectory is transferred to the robot offline assembly trajectory (including the gripper), realizing practical operation of the offline assembly platform 20.
It should be recognized that the methods described herein may be implemented or carried out by computing device hardware, a combination of hardware and software, or by computing device instructions stored in a non-transitory computing device readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computing device system. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computing device systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computing device programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. The computing device program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, a mini-computing device, a mainframe, a workstation, a networked or distributed computing environment, a separate or integrated computing device platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it is readable by a programmable computing device, which when read by the storage medium or device is operative to configure and operate the computing device to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computing device-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computing device itself when programmed according to the methods and techniques described herein.
The computing device program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The invention is capable of other modifications and variations in its technical solution and/or its implementation, within the scope of protection of the invention.

Claims (9)

1. The robot assembly off-line example learning system based on dynamic capture comprises an optical motion capture platform, a data acquisition module, a data preprocessing module, an off-line robot motion track generation module and a simulation verification module, and is characterized in that:
The optical motion capture platform comprises a plurality of optical motion capture cameras, network equipment, computing equipment and a plurality of reflective mark points arranged on the hands of assembly demonstration personnel, wherein the optical motion capture cameras are symmetrically arranged around the assembly demonstration workbench, and each optical motion capture camera is connected to the computing equipment through the network equipment;
The data acquisition module is connected with the optical motion capture platform and is used for acquiring demonstration motion tracks of arms and hands of assembling demonstration personnel;
the data preprocessing module is connected with the data acquisition module and is used for performing off-line preprocessing on the acquired motion tracks of the arms and the hands, eliminating noise data and irrelevant data, fusing assembly tracks demonstrated for multiple times and obtaining an assembly motion model modeled by a Gaussian mixture model;
The off-line robot motion track generation module is connected with the data preprocessing module and used for outputting a smooth task space robot assembly motion track by a Gaussian mixture regression method;
The simulation verification module is used for transferring the track learned by the offline example to a simulation environment and controlling the robot in the simulation platform to simulate assembly actions.
2. The system of claim 1, wherein the data acquisition module is configured to:
Calibrating the optical motion capture camera through an application program of a visual calibration algorithm;
After calibration is finished, sending an acquisition command to the optical motion capture camera through computing equipment, and acquiring position and posture data of reflective mark points adhered to the arms and hands of skilled assembly workers;
Performing data transmission with the optical motion capture cameras through a network switch, receiving the position and posture data of the user's arm and hand motion collected by the optical motion capture cameras, analyzing the received data offline, and generating motion trajectory information executable by the robot.
3. The system of claim 1, wherein the data preprocessing module comprises:
the anomaly detection unit is used for calculating an anomaly factor of each reflective marker point at each sampling moment, and taking the point with the value of the anomaly factor larger than a given threshold value as a sampling noise point and removing the sampling noise point from the demonstration data set;
The track segmentation unit is used for clustering by taking the speed of each reflecting mark point at each sampling moment in each assembly demonstration as the characteristic of the sampling moment through a density clustering method;
A trajectory fusion unit configured to: firstly, modeling assembly tracks of multiple assembly demonstration obtained by a data preprocessing module by adopting a Gaussian mixture model, and specifying the number of Gaussian kernels according to a method of maximizing a Bayesian information criterion; learning multiple sections of assembly tracks by adopting an expectation maximization method to obtain parameters of each Gaussian kernel; and finally, obtaining an assembly motion model which is modeled by the Gaussian mixture model and can reflect key information of the assembly motion.
4. The robot assembly offline example learning system of claim 1, wherein the offline robot motion trajectory generation module is configured to:
Obtaining all joint angle sequences of the robot in the motion process by inverse kinematics solution of the obtained artificial assembly track;
Calculating a cost value of the current track through a cost function;
Calculating the selection probability of each frame under the track through a frame selection strategy, selecting partial frames in the joint angle sequence according to the frame selection strategy to form a new sequence, and calculating the cost value again;
Calculating a reward value and updating the probability of each frame being selected in the joint angle sequence according to the reward value;
Updating the whole joint angle trajectory sequence, deleting the frames with the probability lower than a certain threshold value, and forming the rest frames into a new joint angle sequence;
and if the absolute value of the reward value is lower than a first threshold or the iteration number is larger than a second threshold, outputting the current residual frame sequence, performing speed planning interpolation with the shortest time in the joint space, and outputting the motion track with the least motion consumption as the target track.
5. The robotic assembly offline example learning system of claim 1, wherein the simulation verification module is configured to:
importing the track optimized by the track generation module in an off-line mode into a simulation environment of a robot simulation platform V-REP;
and matching a robot system model, importing a three-dimensional model of the assembled part, controlling the robot to simulate assembly along an off-line assembly track in a simulation environment, and verifying whether the assembled part reaches an expected position and posture.
6. The robotic assembly offline example learning system of claim 1, further comprising an offline assembly platform comprising at least one robot body, a tool storage area of a robot gripper, an assembly gripper for positioning a part to be assembled, and an assembly storage area.
7. A robot offline assembly method based on dynamic capture is characterized by comprising the following steps:
A. Collecting demonstration motion tracks of arms and hands of assembling demonstration personnel through an optical motion capture device;
B. Performing off-line preprocessing on the acquired motion trajectories of the arm and the hand: processing the data gathered by the data acquisition module through an anomaly detection algorithm based on local outlier factors to obtain the position of each reflective marker point at each sampling moment of each assembly demonstration, calculating the anomaly factor of each reflective marker point at each sampling moment, regarding points whose anomaly factor is larger than a preset threshold as sampling noise points and removing them from the demonstration data set, and then fusing the assembly trajectories of the multiple demonstrations to obtain an assembly motion model modeled by a Gaussian mixture model;
C. Generating a robot assembling motion track according to the obtained assembling motion model, and the actual part initial pose and target pose of robot assembling, wherein the motion information of the hand marking points is analyzed to obtain the assembling clamp pose and state information required by robot assembling, and the generated robot assembling track is adjusted through a post-processing algorithm to obtain the motion track with the shortest time suitable for robot assembling;
D. Importing the trajectory optimized offline by the trajectory generation module into a simulation environment of a robot simulation platform, matching a robot system model, importing a three-dimensional model of the assembled part, controlling the robot in the simulation environment to simulate assembly along the offline assembly trajectory, and verifying whether the assembled part reaches the expected position and posture.
8. The method of claim 7, wherein step B further comprises the steps of:
Providing a sample set {x_i}, i = 1, 2, 3, …, and defining:
RD_k(x, x′) = max(||x − x^(k)||, ||x − x′||)
as the reachable distance, where x^(k) is the k-th sample in the set {x_i} closest to x and k is a manually selected integer parameter;
providing the local reachability density of each sample;
and if the local outlier factor of a sample x, computed from the reachable distance and the local reachability density,
exceeds a preset value, eliminating the corresponding sample x from the demonstration data.
9. the method of claim 7, wherein step C further comprises the steps of:
C1, obtaining, through inverse kinematics solution of the obtained manual assembly trajectory, the whole joint-angle sequence ξ = {q_1, q_2, …, q_n} of the robot during the motion, wherein the joint angle q_i at each time point defines a frame, and letting k = 1;
C2, calculating the cost value C_k of the current trajectory ξ^(0) through the cost function C(ξ) = ω^T Δq, ω_j > 0;
C3, calculating through a frame selection strategy the probability P_i of each frame of the trajectory being selected, selecting part of the frames of the joint-angle sequence according to the frame selection strategy to form a new sequence, and calculating the cost value again;
C4, calculating a reward value R_k = C_{k−1} − C_k;
C5, updating, according to the reward value, the probability P_i that each frame in the new joint-angle sequence is selected;
C6, updating the whole joint-angle trajectory sequence, deleting the frames whose probability is below a certain threshold, forming the remaining frames into a new joint-angle sequence, and letting k = k + 1;
C7, if the absolute value of the reward value is below the first threshold or the number of iterations is larger than the second threshold, outputting the current remaining frame sequence, and otherwise returning to execute step C2.
CN201910816236.5A 2019-08-30 2019-08-30 Robot assembly offline example learning system and method based on dynamic capture Active CN110561450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910816236.5A CN110561450B (en) 2019-08-30 2019-08-30 Robot assembly offline example learning system and method based on dynamic capture


Publications (2)

Publication Number Publication Date
CN110561450A true CN110561450A (en) 2019-12-13
CN110561450B CN110561450B (en) 2021-09-07

Family

ID=68777108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910816236.5A Active CN110561450B (en) 2019-08-30 2019-08-30 Robot assembly offline example learning system and method based on dynamic capture

Country Status (1)

Country Link
CN (1) CN110561450B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111002315A (en) * 2019-12-27 2020-04-14 深圳市越疆科技有限公司 Trajectory planning method and device and robot
CN111832456A (en) * 2020-07-01 2020-10-27 四川大学 Optical motion capture experimental system for animals
CN112116663A (en) * 2020-08-20 2020-12-22 太仓中科信息技术研究院 Offline programming method and system for camera robot and electronic equipment
CN112621749A (en) * 2020-12-04 2021-04-09 上海钧控机器人有限公司 Method for acquiring and reproducing moxibustion manipulation track
CN113358383A (en) * 2021-05-08 2021-09-07 中国标准化研究院 Air conditioner outdoor unit ergonomics experiment system and ergonomics test method
CN113391598A (en) * 2021-06-28 2021-09-14 哈尔滨工业大学 Virtual assembly simulation method and system
IT202000020956A1 (en) * 2020-09-03 2022-03-03 Sir Soc Italiana Resine Spa METHOD IMPLEMENTED BY COMPUTER FOR SELF-LEARNING OF AN ANTHROPOMORPHIC ROBOT AND RELATED SELF-LEARNING SYSTEM
CN115990891A (en) * 2023-03-23 2023-04-21 湖南大学 Robot reinforcement learning assembly method based on visual teaching and virtual-actual migration
WO2023142215A1 (en) * 2022-01-27 2023-08-03 苏州大学 Method for automatically picking up nanowires by micro-nano operation robot on basis of dynamic motion primitives
CN117162116A (en) * 2023-11-03 2023-12-05 合肥探奥自动化有限公司 Human action imitation robot integrating artificial intelligence

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563995A (en) * 2018-03-15 2018-09-21 西安理工大学 Human computer cooperation system gesture identification control method based on deep learning
US20190122436A1 (en) * 2017-10-23 2019-04-25 Sony Interactive Entertainment Inc. Vr body tracking without external sensors
US20190143517A1 (en) * 2017-11-14 2019-05-16 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision
CN110142770A (en) * 2019-05-07 2019-08-20 中国地质大学(武汉) A kind of robot teaching system and method based on head-wearing display device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
胡晋 (Hu Jin): "Research on teaching-learning methods of manipulator motion and their applications", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111002315A (en) * 2019-12-27 2020-04-14 深圳市越疆科技有限公司 Trajectory planning method and device and robot
CN111002315B (en) * 2019-12-27 2022-04-15 深圳市越疆科技有限公司 Trajectory planning method and device and robot
CN111832456A (en) * 2020-07-01 2020-10-27 四川大学 Optical motion capture experimental system for animals
CN112116663A (en) * 2020-08-20 2020-12-22 太仓中科信息技术研究院 Offline programming method and system for camera robot and electronic equipment
IT202000020956A1 (en) * 2020-09-03 2022-03-03 Sir Soc Italiana Resine Spa METHOD IMPLEMENTED BY COMPUTER FOR SELF-LEARNING OF AN ANTHROPOMORPHIC ROBOT AND RELATED SELF-LEARNING SYSTEM
WO2022049500A1 (en) * 2020-09-03 2022-03-10 Sir S.P.A. Computer-implemented method for self-learning of an anthropomorphic robot and related self-learning system
CN112621749A (en) * 2020-12-04 2021-04-09 上海钧控机器人有限公司 Method for acquiring and reproducing moxibustion manipulation track
CN113358383A (en) * 2021-05-08 2021-09-07 中国标准化研究院 Air conditioner outdoor unit ergonomics experiment system and ergonomics test method
CN113391598A (en) * 2021-06-28 2021-09-14 哈尔滨工业大学 Virtual assembly simulation method and system
WO2023142215A1 (en) * 2022-01-27 2023-08-03 苏州大学 Method for automatically picking up nanowires by micro-nano operation robot on basis of dynamic motion primitives
CN115990891A (en) * 2023-03-23 2023-04-21 湖南大学 Robot reinforcement learning assembly method based on visual teaching and virtual-actual migration
CN117162116A (en) * 2023-11-03 2023-12-05 合肥探奥自动化有限公司 Human action imitation robot integrating artificial intelligence
CN117162116B (en) * 2023-11-03 2024-01-12 合肥探奥自动化有限公司 Human action imitation robot integrating artificial intelligence

Also Published As

Publication number Publication date
CN110561450B (en) 2021-09-07

Similar Documents

Publication Publication Date Title
CN110561450B (en) Robot assembly offline example learning system and method based on dynamic capture
CN110561430B (en) Robot assembly track optimization method and device for offline example learning
Tang et al. A framework for manipulating deformable linear objects by coherent point drift
US20220009100A1 (en) Software Interface for Authoring Robotic Manufacturing Process
Bagnell et al. An integrated system for autonomous robotics manipulation
CN104067781B (en) Based on virtual robot and integrated picker system and the method for real machine people
EP3272473B1 (en) Teaching device and method for generating control information
EP3166084B1 (en) Method and system for determining a configuration of a virtual robot in a virtual environment
CN109397285B (en) Assembly method, assembly device and assembly equipment
CN104457566A (en) Spatial positioning method not needing teaching robot system
CN109483534B (en) Object grabbing method, device and system
CN112207835B (en) Method for realizing double-arm cooperative work task based on teaching learning
CN109531577B (en) Mechanical arm calibration method, device, system, medium, controller and mechanical arm
JP7387920B2 (en) Method and robot controller for controlling a robot
CN112638596B (en) Autonomous learning robot device and method for generating operation of autonomous learning robot device
Klingensmith et al. Closed-loop servoing using real-time markerless arm tracking
CN114474106A (en) Method for controlling a robot device and robot control device
Zhang et al. Industrial robot programming by demonstration
Brecher et al. Towards anthropomorphic movements for industrial robots
CN110561431B (en) Robot assembly demonstration track extraction method and device for offline example learning
Ligutan et al. Adaptive robotic arm control using artificial neural network
Winiarski et al. Automated generation of component system for the calibration of the service robot kinematic parameters
CN109531579B (en) Mechanical arm demonstration method, device, system, medium, controller and mechanical arm
Scharfe et al. Hybrid physics simulation of multi-fingered hands for dexterous in-hand manipulation
JP7376318B2 (en) annotation device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Lou Yunjiang

Inventor after: Hu Haopeng

Inventor after: Zhang Jinmin

Inventor after: Cao Zhiqi

Inventor after: Zhao Zhilong

Inventor after: Yang Xiansheng

Inventor before: Lou Yunjiang

Inventor before: Hu Haopeng

Inventor before: Cao Zhiqi

Inventor before: Zhao Zhilong

Inventor before: Yang Xiansheng

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant