CN110561431B - Robot assembly demonstration track extraction method and device for offline example learning - Google Patents


Info

Publication number: CN110561431B (application CN201910817859.4A; earlier published as CN110561431A)
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: assembly, motion, robot, data, tracks
Inventors: 楼云江, 曹芷琪, 胡浩鹏, 赵智龙, 杨先声, 张近民
Original and current assignee: Shenzhen Graduate School Harbin Institute of Technology
Application filed by Shenzhen Graduate School Harbin Institute of Technology; priority to CN201910817859.4A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1687: Assembly, peg and hole, palletising, straight line, weaving pattern movement


Abstract

The invention relates to a robot assembly demonstration track extraction method for offline example learning, which comprises the following steps: A. collecting demonstration motion tracks of the arms and hands of an assembly demonstrator with an optical motion capture device; B. preprocessing the acquired arm and hand motion tracks offline, eliminating noise data and irrelevant data, fusing the multiple demonstrated assembly tracks to obtain an assembly motion model built with a Gaussian mixture model, and matching a clamp to the assembly motion; C. providing the obtained assembly track and assembly fixture information to the robot for simulated assembly verification. The invention also relates to an apparatus comprising a memory and a processor, where the processor performs the above method steps when executing a program stored in the memory. With the scheme of the invention, the extracted assembly track can be rapidly deployed on different robots and other equipment, and the application range is wide.

Description

Robot assembly demonstration track extraction method and device for offline example learning
Technical Field
The invention relates to a robot assembly demonstration track extraction method and device, and in particular to a method that collects motion data of a user's hands during manual precision assembly with an optical motion capture device and synthesizes and extracts an assembly track offline.
Background
Industrial robots are widely used on production lines in many industrial fields, but have not been widely applied to precision assembly lines represented by 3C assembly. At present, assembly tasks in the 3C manufacturing field are still mainly completed manually, and the assembly process is time-consuming and labour-intensive. With labour shortages and gradually rising labour costs, the demand for automation of precision assembly lines keeps increasing. The main reasons industrial robots are difficult to apply are that 3C products (such as mobile phones, tablet computers and notebook computers) have a high update-iteration frequency and a short product life cycle, which directly results in frequent line changeovers on 3C assembly lines; at the same time, the 3C assembly process is complex, the precision requirements are high, and the working space is small, all of which make robot programming very difficult for robot technicians. Faced with such problems, conventional robot programming and control methods are too time-consuming and inflexible, which not only limits how quickly a robot can adapt to new tasks but also limits the application of robots in the assembly field. A technology that can quickly and flexibly transfer the assembly skills of workers on an assembly line to the robot is therefore significant for expanding the application range of robots and raising the automation level of precision assembly lines.
Example learning provides an efficient way to simplify the robot programming process and is currently an important way to improve the ease of use of robots. Compared with traditional robot motion planning and control methods that rely on manual programming, example learning has two major advantages. First, it provides a simple and intuitive way for a user to convey task information to the robot, which greatly reduces the robot user's need for professional programming knowledge and helps non-expert users participate in the robot assembly line. Second, by means of an example learning method, a user can easily transfer a complex motion trajectory to the robot, so that the robot can flexibly complete complex assembly work. In conclusion, applying the offline example learning method to robot precision assembly tasks represented by 3C assembly can effectively overcome the excessive time cost and inflexibility of traditional programming methods, allow robots on the assembly line to be programmed rapidly, and promote the automatic transformation of assembly lines.
Yuxin Yi, Xu Tuo, Bai Ji et al. proposed a teaching-robot data acquisition system based on optical motion capture (China, 109848964A [P], 2019-06-07). Although its accuracy is improved, the reflective points are mounted on a purpose-built demonstration tool and calibrated in a single working area, so the collected data generalizes poorly and is limited by the environment and the equipment. Zuo, Yanxin and Nixin proposed a robot teaching method, device and system (China, 106182003 [P], 2016-12-07) in which an inertial measurement unit (IMU) collects data during the teaching action; however, the IMU is susceptible to other influences, its position error is large, and precision cannot be guaranteed. Xu Guanghua and colleagues proposed a Leap Motion-based virtual assembly teaching method; the Leap Motion can simulate hand gestures well, but its stability is poor and the transferability of the virtual assembly is low.
Disclosure of Invention
Aiming at the lack, in the prior art, of a solution for narrow-space, high-precision industrial assembly, the invention provides a robot assembly demonstration track extraction method and device for offline example learning.
The first aspect of the technical scheme of the invention is a robot assembly demonstration track extraction method for offline example learning, which comprises the following steps:
A. collecting demonstration motion tracks of arms and hands of assembling demonstration personnel through an optical motion capture device;
B. performing off-line preprocessing on the acquired motion tracks of the arm and the hand, eliminating noise data and irrelevant data, fusing multiple demonstration assembly tracks to obtain an assembly motion model modeled by adopting a Gaussian mixture model, and matching a clamp for assembly motion;
C. and providing the information to the robot for simulation assembly verification according to the obtained assembly track and the assembly fixture information.
In some embodiments according to the invention, said step a further comprises the steps of:
a1, establishing data connection channels between a plurality of infrared motion capture cameras in the optical motion capture device and the computing equipment;
a2, configuring the field of view of each infrared motion capture camera to be concentrated in the same three-dimensional area, and calibrating each infrared motion capture camera;
a3, collecting images of light-reflecting mark points on key positions of arms and hands of assembling demonstration personnel in the three-dimensional area;
and A4, triggering all the infrared motion capture cameras to read the position data of each reflective marker point in real time.
In some embodiments according to the invention, said step B further comprises the steps of:
B1, obtaining, from the data acquisition module, the position of each reflective marker point at each sampling moment of each assembly demonstration, and processing it with an anomaly detection algorithm based on local outlier factors: the anomaly factor of each reflective marker point at each sampling moment is calculated, and points whose anomaly factor exceeds a preset threshold are treated as sampling noise and removed from the demonstration data set;
B2, performing density clustering with a density clustering algorithm, taking the speed of each reflective marker point at each sampling moment of each assembly demonstration as the feature of that sampling moment;
B3, modeling the assembly tracks of the multiple assembly demonstrations obtained by the data preprocessing module with a Gaussian mixture model, specifying the number of Gaussian kernels by maximizing the Bayesian information criterion, and learning the multiple sections of assembly track with the expectation-maximization method to obtain the parameters of each Gaussian kernel, thereby obtaining an assembly motion model, modeled by the Gaussian mixture model, that reflects the key information of the assembly actions.
In some embodiments according to the invention, said step B1 further comprises the steps of:
given a sample set {x_i}, i = 1, 2, 3, …, defining

RD_k(x, x′) = max(‖x − x^(k)‖, ‖x − x′‖)

as the reachable distance, where x^(k) is the k-th sample in {x_i} closest to x, and k is a manually selected integer parameter;
providing the local reachability density

LRD_k(x) = [ (1/k) · Σ_{x′ ∈ N_k(x)} RD_k(x, x′) ]^(−1),

where N_k(x) denotes the k samples nearest to x; and providing the local anomaly factor of a sample x,

LOF_k(x) = [ (1/k) · Σ_{x′ ∈ N_k(x)} LRD_k(x′) ] / LRD_k(x).

If this factor exceeds the preset value, the corresponding sample x is eliminated from the demonstration data.
In some embodiments according to the invention, said step B further comprises the steps of:
b4, analyzing the tracks of the light reflection mark points at the joints and the fingertips of the hands to obtain the assembly action of the hands, and then performing data matching on the hand action to obtain the type of the clamp most suitable for the assembly action.
In some embodiments according to the invention, said step B4 further comprises the steps of:
identifying the divided assembly trajectory as one or more off-line assembly steps;
configuring a fixture node between each off-line assembly step for matching or replacing a fixture;
synchronizing with the data from the robot controller and its gripper library, and introducing a path that moves the robot end-effector to the tool storage area for mounting or replacing a gripper.
In some embodiments according to the invention, said step C further comprises the steps of:
C1, importing the track optimized offline by the track generation module into the simulation environment of the robot simulation platform V-REP;
C2, matching the robot system model, importing a three-dimensional model of the assembled part, controlling the robot in the simulation environment to simulate the assembly along the offline assembly track, and verifying whether the assembled part reaches the expected position and posture.
A second aspect of the present invention is a computing apparatus including a memory and a processor. The processor implements the above method when executing a program stored in the memory.
The third aspect of the technical scheme of the invention is a robot offline track extraction system. The system comprises an optical motion capture platform and the above computing apparatus. The optical motion capture platform comprises a plurality of optical motion capture cameras, network equipment, computing equipment, and a plurality of reflective marker points arranged on the hands of an assembly demonstrator; the optical motion capture cameras are symmetrically arranged around the assembly demonstration workbench, and each camera is connected to the computing equipment through the network equipment.
The invention has the beneficial effects that:
the assembly motion tracks of the arm and the hand of the user, which are acquired by the optical motion capture equipment, have high precision, and the light-reflecting mark points do not influence the assembly motion of the user, so that the manual experience migration is simpler and more flexible; the offline data preprocessing can reduce redundant data in the demonstration data and keep key information; the example learning method enables the assembly process to have high mobility, can be rapidly deployed on different assembly production lines, robots and other equipment, and is wide in application range.
Drawings
FIG. 1 is a schematic illustration of a motion capture platform of a robot assembly offline example learning system, in an embodiment.
FIG. 2 is a block diagram of software modules of a robot assembly offline example learning system, in an embodiment.
Fig. 3 is a block diagram of an overall flow of an example off-line learning system for robotic assembly in an embodiment, including the flow of an assembly demonstration trajectory extraction method.
Fig. 4 is a flowchart of a robot assembly trajectory optimization method for offline example learning in an embodiment.
Fig. 5 is the motion trajectory curve of the robot end before trajectory optimization.
Fig. 6 is the motion trajectory curve of the robot end after trajectory optimization.
Fig. 7 depicts an illustrative application example of the present invention.
Detailed Description
The conception, the specific structure and the technical effects of the present invention will be clearly and completely described in conjunction with the embodiments and the accompanying drawings to fully understand the objects, the schemes and the effects of the present invention.
FIG. 1 is a diagram illustrating an optical motion capture platform, according to one embodiment. The optical motion capture platform 10 comprises: a set of optical motion capture cameras 11 (e.g., high-precision infrared cameras), network devices 12 (e.g., routers, Ethernet switches, etc.), and computing devices 13 (e.g., industrial controllers, industrial personal computers, PCs, etc.). As shown, a plurality of (e.g., six) optical motion capture cameras 11 are arranged by a support stand above the workspace where assembly is demonstrated (e.g., 0.4 m x 0.6 m x 1 m in length, width and height), shooting the workspace obliquely from multiple angles. A plurality of retro-reflective marker points 14 are affixed at key positions of an assembler (e.g., a skilled assembly worker), such as the arm joints and the fingertips. Preferably, the cameras are arranged around the workbench in the symmetrical pattern shown in Fig. 1: the optimized spatial layout of the cameras ensures the capture precision of fine two-handed motions, and data fusion across the multi-angle cameras avoids mutual occlusion of the hand motions, thereby ensuring the continuity of the captured motion. Thus, within the workspace, all of the cameras are configured to simultaneously capture visual information of the real-time position and pose reflected by the retro-reflective marker points on the assembler's arms and hands during the assembly process. Further, each camera 11 may be fitted with a light source.
Referring to Fig. 2, in an embodiment, a robot assembly offline example learning system includes a data acquisition module, a data preprocessing module, an offline robot motion trajectory generation module, and a simulation verification module. These modules may be integrated in the computing device 13 of the optical motion capture platform, or in other computing devices of the robot assembly offline example learning system.
The data acquisition module is configured to: calibrate the optical motion capture cameras 11 with a visual calibration application; after calibration is completed, send an acquisition command to the optical motion capture cameras through the computing equipment and collect position and posture data of the reflective marker points 14 attached to the arms and hands of the skilled assembly worker; exchange data with the optical motion capture cameras 11 through the switch 12, receive the position and posture data of the user's arm and hand motions collected by the cameras, and analyze the received data offline to generate motion track information executable by the robot. In addition, the position-change data of the reflective marker points captured by the cameras is sent to a host computer, which stores the two-handed assembly action data.
The data preprocessing module is used for performing off-line processing on the acquired motion data and removing noise so as to reduce redundant data. In one embodiment, the data pre-processing module is configured to perform anomaly detection, trajectory segmentation, and trajectory fusion.
Anomaly detection obtains, from the data acquisition module, the position of each reflective marker point at each sampling moment of each assembly demonstration and processes it with an anomaly detection algorithm based on local outlier factors: the anomaly factor of each reflective marker point at each sampling moment is calculated, and points whose anomaly factor exceeds a given threshold are treated as sampling noise and excluded from the demonstration data set, to improve post-processing efficiency.
Track segmentation performs clustering with a density clustering algorithm, taking the speed of each reflective marker point at each sampling moment of each assembly demonstration as the feature of that sampling moment. In this way the track can be segmented so that only track data related to assembly is retained, while irrelevant track data (such as the motion of the user's arm and hand from an arbitrary position to the part to be assembled, or away from the part after assembly) is eliminated, to simplify the later learning process.
The track fusion method comprises the following steps: first, a Gaussian Mixture Model (GMM) is adopted to model the assembly tracks of the multiple assembly demonstrations obtained by the data preprocessing module, and the number of Gaussian kernels is specified by maximizing the Bayesian Information Criterion (BIC); then multiple sections of assembly track are learned with the Expectation-Maximization (EM) method to obtain the parameters of each Gaussian kernel (mean, covariance and prior probability); finally, an assembly motion model is obtained that is modeled by the Gaussian mixture model and reflects the key information of the assembly motion.
The offline robot motion track generation module is used to generate a smooth assembly track. It is configured to output a smooth and efficient task-space (Cartesian-space) robot assembly motion track by Gaussian Mixture Regression (GMR), based on the assembly motion modeled by the Gaussian mixture model obtained from the data preprocessing module. Track generation can be carried out for the specific initial and target positions of the robot assembly, so the robot is not limited to the initial and target part positions used in the manual demonstration. The assembly track has high transferability, is not limited by assembly site or equipment, and can be rapidly deployed in robot assembly systems of different models and configurations.
The simulation verification module migrates the trajectory learned from the offline examples to a simulation environment and controls the robot in the simulation platform to complete the same assembly actions. The offline example-learned trajectory can be migrated to any robot system that meets the assembly degree-of-freedom and workspace requirements, so the example learning result is independent of any specific robot system. In addition, the simulation verification module provides a virtual environment to simulate running the assembly instance, with the following steps:
1) importing the track optimized offline by the track generation module into a robot simulation platform (for example, the simulation environment of V-REP);
2) selecting a robot system model, where any robot system meeting the assembly degree-of-freedom and workspace requirements can be chosen, including but not limited to serial robots, parallel robots, single-arm robot systems or dual-arm robot systems;
3) through example learning, the robot system can efficiently complete high-precision assembly tasks in the same assembly scene and workspace as the skilled worker.
When the robot's virtual assembly simulation passes verification, the simulation verification module transmits the debugged robot motion instructions and data to the robot controller, for controlling the robot's motion in the actual assembly.
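The pass/fail check of the simulated assembly (step C2: "verify whether the assembled part reaches the expected position and posture") can be sketched as follows. This is an illustrative sketch, not part of the patent: the 4x4 homogeneous-transform representation, the tolerance values and the function name are assumptions.

```python
import numpy as np

def pose_reached(T_actual, T_expected, pos_tol=1e-3, rot_tol_deg=1.0):
    """Check whether a simulated part's final pose matches the expected pose.

    T_actual, T_expected: 4x4 homogeneous transforms of the part frame
    pos_tol: allowed translation error in metres (assumed tolerance)
    rot_tol_deg: allowed rotation error in degrees (angle of the relative rotation)
    """
    # Translation error between the two frames
    dp = np.linalg.norm(T_actual[:3, 3] - T_expected[:3, 3])
    # Angle of the relative rotation R_expected^T * R_actual
    R_rel = T_expected[:3, :3].T @ T_actual[:3, :3]
    cos_ang = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    ang_deg = np.degrees(np.arccos(cos_ang))
    return dp <= pos_tol and ang_deg <= rot_tol_deg
```

In a simulator such as V-REP the two transforms would be read back from the part object after the simulated assembly run.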
FIG. 3 is a block flow diagram of a method of a robot assembly offline example learning system, in an embodiment. The method comprises the following steps:
s1: the demonstration motion tracks of the arms and the hands of the assembling demonstration personnel are collected through the optical motion capture device.
S2: and performing off-line preprocessing on the acquired motion tracks of the arms and the hands, eliminating noise data and irrelevant data, fusing multiple demonstration assembly tracks, and obtaining an assembly motion model modeled by adopting a Gaussian mixture model.
S3: and generating an assembling motion track of the robot according to the obtained assembling motion model and the actual part initial pose and target pose of the robot assembly. And analyzing the motion information of the hand mark points to obtain the pose and state information of the assembly fixture required by the robot assembly. And a post-processing algorithm is also applied to reprocess the generated robot assembly track to obtain the motion track with the shortest time suitable for robot assembly.
S4: and controlling the robot to carry out field assembly according to the obtained assembly track and the assembly fixture information, and finishing the whole assembly demonstration learning process.
It can be understood that the above steps S1 and S2 mainly relate to robot assembly demonstration track extraction, and the step S3 mainly relates to robot assembly track optimization. Details of each step are described further below.
In some embodiments, step S1 further includes:
s1.1: and (5) building a dynamic catching platform.
Firstly, fixing a camera around an experimental platform, connecting one end of an Ethernet cable with the camera, and connecting the other end of the Ethernet cable with a switch; the switch provides power supply and data communication for the camera, the output acquisition module, the data processing module, the off-line robot motion track generation module and the simulation verification module form a local area network, and the computing equipment sends a control command.
S1.2: user assembly demonstration data is collected.
The key positions (finger tips and finger joints) of the arms and the hands of the user are pasted with light-reflecting mark points, and the same assembly action is repeatedly demonstrated in the appointed assembly working area. The camera is a high-precision infrared motion capturing camera, captures the position and posture information of reflective mark points attached to the two arms and the two hands of a user, emits the mark points to reflect light of a camera flash unit, collects the reflected light in a scene by a camera lens to form an image focusing on a camera sensor plane, and analyzes the position information of each reflective mark point. The information is transmitted to the upper computer through the switch for storage.
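Downstream steps (outlier screening in S2.1 and speed-based segmentation in S2.2) consume per-marker speeds derived from the stored positions. A minimal sketch of that derivation, assuming the capture data is stored as a (samples x markers x 3) position array with a fixed sampling period (both layout and parameter names are assumptions for illustration):

```python
import numpy as np

def marker_speeds(positions, dt):
    """Per-marker speed at each sampling moment.

    positions: array of shape (T, M, 3): T samples, M reflective markers,
               3-D position of each marker as resolved by the capture system
    dt: sampling period in seconds
    Returns an array of shape (T-1, M) of speeds in m/s, usable both for
    outlier screening and as the clustering feature for track segmentation.
    """
    disp = np.diff(positions, axis=0)          # (T-1, M, 3) displacement per step
    return np.linalg.norm(disp, axis=2) / dt   # (T-1, M) speed
```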
In some embodiments, step S2 further includes:
s2.1: provided is an abnormality detection method.
The anomaly detection method comprises the step of processing the data acquisition module through an anomaly detection algorithm based on local anomaly factors to acquire position information of each reflective marker point at each sampling moment in each assembly demonstration. And calculating the abnormal factor of each reflective marker point at each sampling moment. And (4) regarding the points with the abnormal factors larger than a given threshold value as sampling noise points and excluding the sampling noise points from the demonstration data set so as to improve the post-processing efficiency. The specific implementation mode is as follows:
given a set of sample sets xi1,2,3, define:
RDk(x,x′)=max(||x-x(k)||,||x-x′||) (1)
as the reachable distance, where x(k)Is the set { xiThe k-th sample closest to x in the lattice, k being a manually selected integer parameter.
LRD_k(x) = [ (1/k) · Σ_{x′ ∈ N_k(x)} RD_k(x, x′) ]^(−1)   (2)

This is called the local reachability density, where N_k(x) denotes the k samples nearest to x. From definitions (1) and (2), the local anomaly factor is defined as

LOF_k(x) = [ (1/k) · Σ_{x′ ∈ N_k(x)} LRD_k(x′) ] / LRD_k(x)   (3)

As LOF_k(x) rises, the likelihood that sample x is an outlier rises, and such samples are eliminated from the demonstration data.
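The computation above can be sketched in compact form. This is an illustrative implementation following the standard local-outlier-factor definitions (reachability distance taken with respect to each neighbour's k-distance, as in the usual LOF formulation); array names and the small numerical-stability constant are assumptions, not part of the patent:

```python
import numpy as np

def lof_scores(X, k=5):
    """Local outlier factor of each sample.

    X: (n, d) array of samples, e.g. a marker's positions at all sampling moments
    k: number of neighbours, manually selected
    Returns an (n,) array; the larger the score, the more likely an outlier.
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(D, np.inf)                   # exclude each sample from its own neighbours
    idx = np.argsort(D, axis=1)[:, :k]            # N_k(x): indices of the k nearest neighbours
    kdist = D[np.arange(n), idx[:, -1]]           # distance to the k-th nearest neighbour
    # Reachability distance to each neighbour: max(kdist(neighbour), actual distance)
    rd = np.maximum(kdist[idx], D[np.arange(n)[:, None], idx])
    lrd = 1.0 / (rd.mean(axis=1) + 1e-12)         # local reachability density, eq. (2)
    return lrd[idx].mean(axis=1) / lrd            # local outlier factor, eq. (3)
```

Samples whose score exceeds the chosen threshold would then be dropped from the demonstration data set.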
S2.2: a track segmentation method.
The track segmentation method carries out density clustering by taking the speed of each reflecting mark point at each sampling moment in each assembly demonstration as the characteristic of the sampling moment through a density clustering algorithm.
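A minimal sketch of this segmentation step, using DBSCAN as the density clustering algorithm (the patent does not name a specific algorithm, so DBSCAN and its parameter values are assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_by_speed(speed_features, eps=0.05, min_samples=5):
    """Density-cluster sampling moments by their speed features.

    speed_features: (T, M) array, the speeds of all M markers at each of T
                    sampling moments (the per-moment feature vector).
    Returns a label per sampling moment; contiguous runs of one label form a
    track segment, and segments not related to assembly can be discarded.
    """
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(speed_features)
```

With DBSCAN, moments whose speed profile does not belong to any dense cluster are labelled -1 and can likewise be discarded as irrelevant.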
S2.3: a trajectory fusion method.
Firstly, modeling assembly tracks of multiple assembly demonstration obtained by a data preprocessing module by adopting a Gaussian mixture model, and specifying the number of Gaussian kernels according to a method of maximizing a Bayesian information criterion; learning multiple sections of assembly tracks by adopting an expectation maximization method to obtain parameters (mean, covariance and prior probability) of each Gaussian kernel; and finally, obtaining an assembly motion model which is modeled by the Gaussian mixture model and can reflect key information of the assembly motion.
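The fusion step can be sketched with scikit-learn, whose `GaussianMixture` is fitted by EM and exposes the BIC directly. Note that sklearn's `bic()` is defined so that lower is better, which corresponds to maximizing the criterion under the opposite sign convention; the function name and the candidate range of kernel counts are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_assembly_gmm(demos, max_k=8, seed=0):
    """Fuse several demonstrated tracks into one Gaussian mixture model.

    demos: list of (T_i, D) arrays, one per demonstration after noise removal
           and segmentation (e.g. rows of [time, x, y, z]).
    The number of Gaussian kernels is chosen by the Bayesian information
    criterion, and the kernel parameters (mean, covariance, prior) by EM.
    """
    data = np.vstack(demos)
    best = None
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, covariance_type='full',
                              random_state=seed).fit(data)
        if best is None or gmm.bic(data) < best.bic(data):
            best = gmm
    return best
```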
In some embodiments, step S2 further includes:
s2.4: and generating an offline track.
And the off-line robot motion trail generation module outputs a smooth and efficient task space (Cartesian space) robot assembly motion trail through a Gaussian mixture regression method according to the assembly action modeled by the Gaussian mixture model obtained by the data preprocessing module. The method can generate the track according to the specific initial and target positions of the robot during assembly, so that the assembly of the robot is not limited by the initial and target positions of parts during manual demonstration.
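Gaussian mixture regression conditions the fitted joint model over [time, position] on the time variable and returns the expected position at each query time. A sketch under the assumption that the GMM was fitted (e.g. with sklearn) on rows whose first column is time and remaining columns are position:

```python
import numpy as np

def gmr(gmm, t_query):
    """Gaussian mixture regression over a GMM fitted on rows [t, x1..xd].

    gmm: fitted sklearn GaussianMixture (full covariances)
    t_query: (N,) array of query times
    Returns an (N, d) array of expected positions x(t).
    """
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    K, D = means.shape
    out = np.zeros((len(t_query), D - 1))
    for n, t in enumerate(t_query):
        # Responsibility of each kernel at time t: h_k ~ pi_k * N(t | mu_t,k, s_tt,k)
        h = np.array([priors[k]
                      * np.exp(-0.5 * (t - means[k, 0]) ** 2 / covs[k, 0, 0])
                      / np.sqrt(2 * np.pi * covs[k, 0, 0]) for k in range(K)])
        h /= h.sum()
        for k in range(K):
            # Conditional mean of x given t within kernel k
            mu_x = means[k, 1:] + covs[k, 1:, 0] / covs[k, 0, 0] * (t - means[k, 0])
            out[n] += h[k] * mu_x
    return out
```

Querying a dense grid of times yields the smooth task-space track; rescaling or re-anchoring the time/position inputs is what frees the output from the demonstrated initial and target positions.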
S2.5: and (5) assembling and analyzing a clamp.
And analyzing the tracks of the light reflecting mark points at the joints and the fingertips of the hands to obtain the assembly action of the hands, and performing data matching on the hand action to obtain the type of the clamp most suitable for the assembly action.
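The patent does not specify the matching procedure; one plausible sketch is nearest-template matching between features extracted from the fingertip-marker tracks and a library of gripper descriptions. Everything here (the library, the feature choice of grasp aperture and contact-finger count, and the names) is hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical gripper library: each entry is a feature template of
# [grasp aperture in metres, number of contact fingers].
GRIPPER_LIBRARY = {
    'parallel_jaw': np.array([0.03, 2.0]),
    'three_finger': np.array([0.05, 3.0]),
    'vacuum_cup':   np.array([0.00, 1.0]),
}

def match_gripper(hand_features):
    """Return the library gripper whose template is closest to the
    features extracted from the demonstrated hand action."""
    return min(GRIPPER_LIBRARY,
               key=lambda g: np.linalg.norm(GRIPPER_LIBRARY[g] - hand_features))
```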
With respect to step S3
Due to structural differences between the human arm and the robot, the human-hand assembly track is not directly suitable for the robot to execute; to improve the robot's assembly efficiency, a track post-processing module is added. The introduced post-processing flow is shown in Fig. 4: the human-hand assembly track is optimized, noise and irrelevant actions of the hand motion are removed, and the hand assembly motion track is converted into a motion track suitable for robot assembly.
Thus, in some embodiments, referring to fig. 4, step S3 further includes:
s3.1: firstly, all joint angle sequences of the robot in the motion process are obtained through inverse kinematics solution of the artificial assembly track obtained by the off-line processing module
Figure GDA0003060098550000081
Joint angle q at each time pointiDefined as a frame. Let k equal to 1.
S3.2: the cost function is defined as (here, 6-degree-of-freedom robot example):
Figure GDA0003060098550000082
calculating the current track xi(0)Cost C ofk
S3.3: defining a frame selection policy
Figure GDA0003060098550000083
Calculating to obtain the selected probability P of each frame under the tracki. And selecting partial frames in the joint angle sequence according to a frame selection strategy to form a new sequence, and calculating the cost function of the new sequence again.
S3.4: calculating a reward value Rk=Ck-1-Ck,CkNamely the cost function of the manual assembly track.
S3.5: updating the probability P that each frame in the sequence of joint angles is selected according to a reward valuei. The update strategy is: each round of selected frames is updated to
Figure GDA0003060098550000091
Unselected frames are updated to
Figure GDA0003060098550000092
Wherein 0<α<1 is the update rate. And S (xi) is determined according to the iteration result of each time, if the assembly task is well completed, the S (xi) is 1, and if not, the S (xi) is 0.
S3.6: and updating the whole joint angle track sequence. The method is to delete the frames with the probability lower than a certain threshold, and form the rest frames into a new joint angle sequence, and let k be k + 1.
S3.7: and judging whether the absolute value of the reward value is lower than a small constant or the iteration number is larger than a certain threshold, if not, returning to the step S3.2 to continue execution, and if so, outputting the current remaining frame sequence.
S3.8: and (4) taking the frame sequence output in the step (S3.7) as a path, performing speed planning interpolation in the joint space in the shortest time, and outputting a motion track in the shortest time.
As can be seen from figures 5 and 6, the assembly trajectory obtained by the post-processing module is simpler and more efficient than the manually demonstrated one, and is well suited to robot execution.
In some embodiments, step S4 further includes:
s4.1: and (5) verifying the assembly of the robot. Firstly, selecting a proper assembly fixture according to an assembly task by using fixture state data obtained through an offline data processing module, automatically replacing the fixture through a robot tail end quick-change device, then carrying out field assembly by using a shortest-time assembly track obtained through a post-processing module, and evaluating the assembly effect.
An exemplary guide-rail mounting application of the invention is described with reference to figs. 7 and 1. In one example, on the optical motion capture platform 10 shown in fig. 1, a skilled assembler picks up a guide rail from the assembly storage area 26, mounts it on the to-be-assembled piece 24 of the assembly jig table 25, and fastens the screws. The process can be repeated several times so that the system's data acquisition module collects enough movement positions and posture values of the arm, palm, and fingers, from which a manual assembly trajectory is generated.
For the extracted manual assembly trajectory data, the computing device 13 executes step S2 above to perform abnormal-data elimination, trajectory segmentation, trajectory fusion, offline trajectory generation, and assembly fixture analysis. For the manual rail-assembly example, the end of the operation path (such as the position and posture of a finger) is limited to the range of the assembly jig table 25 and the assembly storage area 26, and path data outside this range, or path data that would cause assembly interference, is excluded as abnormal. In addition, the manual assembly trajectory can be segmented into different assembly steps, such as a rail pick-and-place step, a rail-to-workpiece positioning step, and a screw-mounting step, distinguished by the captured speed and motion type of the reflective marker points. For example, when the rail is picked and placed by hand, finger motion is mainly spatial translation and the elbow and shoulder move noticeably, so that group of motion trajectories is classified as a pick-and-place trajectory. When the fingers and arm clamp the rail and hold it still for a given time, the group of motion trajectories is classified as a positioning trajectory; when the collected trajectory data indicate that only the palm and fingers operate a wrench in partial twists, the group is classified as a fastener-installation trajectory. Then, in the trajectory-fusion step, the generated assembly motion model captures the key information of the assembly motion, such as the key points of a reasonable rail transport path, the precise mounting direction of the rail, and the fastener type and its installation position.
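The velocity-based step segmentation described above can be illustrated with a simple sketch. This is a stand-in under stated assumptions, not the patent's method: the actual system also uses density clustering and per-joint motion-type cues, whereas this sketch only splits a single marker's trajectory wherever its speed drops below a pause threshold; the function name and parameters are illustrative.

```python
import numpy as np

def segment_by_motion(positions, dt, pause_speed=0.01, min_len=5):
    """Split a marker trajectory into moving segments separated by pauses.

    positions : (N, 3) array of a reflective marker's positions
    dt        : sampling period in seconds
    Returns a list of (start, end) index pairs for the moving segments.
    """
    # Finite-difference speed of the marker between consecutive samples
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > pause_speed              # True where the hand is moving
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:               # a moving segment begins
            start = i
        elif not m and start is not None:     # a pause ends the segment
            if i - start >= min_len:          # ignore very short blips
                segments.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))  # trailing segment
    return segments
```

In a fuller pipeline, each returned segment would then be classified (pick-and-place, positioning, fastener installation) from which joints move and for how long the marker stays still.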
Then, in the computing device 13, step S3 above is executed: the preprocessed data is matched to a suitable robot (for example, the serial robot 21 shown in fig. 7), the joint angles corresponding to each motion frame are solved by inverse kinematics from the end-effector pose, shortest-time velocity-planning interpolation is performed in joint space, and the shortest-time motion trajectory is output. In addition, the gripper 23 at the robot end is configured according to the assembly step: a pneumatic gripper for the pick-and-place and positioning steps, and an electric screwdriver for the screw-mounting step. Further, by synchronizing with the data of the robot controller and its fixture library, the computing device 13 can, when executing step S4 above, use the shortest-time assembly trajectory obtained from the post-processing module and configure fixture nodes between assembly steps for matching or replacing fixtures. For example, as shown in fig. 7, before the rail is carried from the assembly storage area 26 to the to-be-assembled piece 24 on the assembly jig table 25, a pneumatic gripper for grasping the rail is configured before the robot switches from the previous step to the rail pick-up step. With the gripper configured at this node, a path for the robot end to move to the tool storage area 22 to install or replace the gripper can be introduced through the robot controller and the gripper management device of the tool storage area 22. In this way, the manually demonstrated assembly trajectory is transferred to a robot offline assembly trajectory (including fixtures), realizing practical operation on the offline assembly platform 20.
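The shortest-time velocity-planning interpolation mentioned above can be sketched in its simplest form: assign each joint-space segment the shortest duration allowed by a per-joint velocity limit, then interpolate linearly. This is an assumption-laden simplification (the patent's planner presumably also respects acceleration limits, which are omitted here), and the function name and signature are illustrative.

```python
import numpy as np

def min_time_schedule(waypoints, v_max):
    """Shortest-time timestamps for a joint-space path under velocity limits.

    waypoints : (N, dof) joint-angle frames
    v_max     : scalar or (dof,) per-joint velocity limits [rad/s]
    Returns cumulative timestamps (N,), one per waypoint; linear interpolation
    between waypoints at these times never exceeds v_max on any joint.
    """
    dq = np.abs(np.diff(waypoints, axis=0))   # per-segment joint travel
    seg_t = (dq / v_max).max(axis=1)          # the slowest joint sets each duration
    return np.concatenate([[0.0], np.cumsum(seg_t)])
```

For instance, a segment that moves joint 2 by 2 rad under a 1 rad/s limit takes 2 s regardless of how little the other joints move.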
It should be recognized that the methods described herein may be implemented or carried out by computing device hardware, a combination of hardware and software, or by computing device instructions stored in a non-transitory computing device readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computing device system. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computing device systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computing device programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. The computing device program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, a mini-computing device, a mainframe, a workstation, a networked or distributed computing environment, a separate or integrated computing device platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it is readable by a programmable computing device, which when read by the storage medium or device is operative to configure and operate the computing device to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computing device-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computing device itself when programmed according to the methods and techniques described herein.
The computing device program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The invention is capable of other modifications and variations in its technical solution and/or its implementation, within the scope of protection of the invention.

Claims (6)

1. A robot assembly demonstration trajectory extraction method for offline example learning, the method comprising the steps of:
A. collecting demonstration assembly tracks of arms and hands of an assembly demonstrator through an optical motion capture device;
B. performing off-line preprocessing on the acquired motion tracks of the arm and the hand, eliminating noise data and irrelevant data, fusing multiple demonstration assembly tracks to obtain an assembly motion model modeled by adopting a Gaussian mixture model, and matching a clamp for assembly motion;
C. generating an offline optimized track by a track generation module according to the obtained assembly motion model and the assembly fixture information, and providing the offline optimized track to the robot for simulation assembly verification;
the step A also comprises the following steps:
a1, establishing data connection channels between a plurality of infrared motion capture cameras in the optical motion capture device and the computing equipment;
a2, configuring the field of view of each infrared motion capture camera to be concentrated in the same three-dimensional area, and calibrating each infrared motion capture camera;
a3, collecting images of the light-reflecting mark points on the key positions of the arms and the hands of the assembly demonstrator in the three-dimensional area;
a4, triggering all infrared motion capture cameras to read the position data of each reflective mark point in real time;
the step B comprises the following steps:
limiting the end of the action path to the range of the assembly clamping table and the assembly storage area, and eliminating some path data beyond the range or path data causing assembly interference as abnormal data;
dividing different assembly steps by capturing the speed and the motion type of the reflective marker points so as to segment the manual assembly track, wherein: dividing the group of motion tracks into pick-and-place tracks if the motion of the fingers is mainly spatial translation and the elbows and shoulders have obvious motion; for the situation that the guide rails are clamped by fingers and arms and are kept still for a given time, dividing the group of motion tracks into positioning tracks; when the collected track data indicate that only the palm and the finger operating tool are partially twisted, dividing the group of motion tracks into fastener mounting tracks;
the step B also comprises the following steps:
b1, processing, with an anomaly-detection algorithm based on local outlier factors, the position information of each reflective marker point at each sampling moment of each assembly demonstration acquired by the data acquisition module; calculating the local outlier factor of each reflective marker point at each sampling moment; and treating any point whose local outlier factor exceeds a preset threshold as sampling noise and removing it from the demonstration data set;
b2, performing density clustering with a density-clustering algorithm, taking the speed of each reflective marker point at each sampling moment of each assembly demonstration as the feature of that sampling moment;
b3, modeling assembly tracks of multiple assembly demonstration obtained by the data preprocessing module by adopting a Gaussian mixture model, appointing the number of Gaussian kernels according to a method for maximizing a Bayesian information criterion, learning multiple sections of assembly tracks by adopting an expectation maximization method, and obtaining parameters of each Gaussian kernel, so that an assembly motion model which is modeled by the Gaussian mixture model and can reflect key information of assembly actions is obtained;
the step B1 further includes the following steps:
providing a set of samples {x_i}, i = 1, 2, 3, ..., and defining:
RD_k(x, x′) = max(||x - x^(k)||, ||x - x′||)
as the reachability distance, where x^(k) is the k-th sample in {x_i} closest to x, and k is a manually selected integer parameter;
providing the local reachability density
lrd_k(x) = 1 / ( (1/k) Σ_{x′ ∈ N_k(x)} RD_k(x′, x) ),
where N_k(x) denotes the k samples nearest to x; if, for a sample x, the local outlier factor
LOF_k(x) = (1/k) Σ_{x′ ∈ N_k(x)} lrd_k(x′) / lrd_k(x)
exceeds the preset threshold, the corresponding sample x is eliminated from the demonstration data.
2. The method of claim 1, wherein step B further comprises the steps of:
b4, analyzing the tracks of the light reflection mark points at the joints and the fingertips of the hands to obtain the assembly action of the hands, and then performing data matching on the assembly action of the hands to obtain the type of the clamp most suitable for the assembly action.
3. The method of claim 2, wherein said step B4 further comprises the steps of:
identifying the divided assembly trajectory as one or more off-line assembly steps;
configuring a fixture node between each off-line assembly step for matching or replacing a fixture;
in synchronism with the data from the robot controller and its gripper library, a path is introduced for the robot tip to move to the tool storage area for mounting or replacing a gripper.
4. The method of claim 1, wherein step C further comprises the steps of:
c1, importing the track optimized by the track generation module in an off-line manner into a simulation environment of a robot simulation platform V-REP;
c2, matching the robot system model, importing a three-dimensional model of the assembled part, controlling the robot to simulate the assembly along an off-line assembly track in the simulation environment, and verifying whether the assembled part reaches the expected position and posture.
5. A computing device comprising a memory and a processor, wherein the processor implements the method of any one of claims 1 to 4 when executing a program stored in the memory.
6. A robot offline trajectory extraction system, comprising an optical motion capture platform and the computing device of claim 5, wherein the optical motion capture platform comprises a plurality of optical motion capture cameras, a network device, and a plurality of reflective marker points disposed on the hands of an assembly demonstrator, and wherein
the plurality of optical motion capture cameras are arranged in a symmetrical fashion around the assembly presentation stage,
each optical motion capture camera is connected to a computing device through a network device.
CN201910817859.4A 2019-08-30 2019-08-30 Robot assembly demonstration track extraction method and device for offline example learning Active CN110561431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910817859.4A CN110561431B (en) 2019-08-30 2019-08-30 Robot assembly demonstration track extraction method and device for offline example learning

Publications (2)

Publication Number Publication Date
CN110561431A CN110561431A (en) 2019-12-13
CN110561431B true CN110561431B (en) 2021-08-31

Family

ID=68777097

Country Status (1)

Country Link
CN (1) CN110561431B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017085811A1 (en) * 2015-11-18 2017-05-26 富士機械製造株式会社 Teaching device and control information-generating method
CN109676615A (en) * 2019-01-18 2019-04-26 合肥工业大学 A kind of spray robot teaching method and device using arm electromyography signal and motion capture signal
CN109848964A (en) * 2019-01-24 2019-06-07 浙江工业大学 Teaching robot's data collection system based on optics motion capture

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10777006B2 (en) * 2017-10-23 2020-09-15 Sony Interactive Entertainment Inc. VR body tracking without external sensors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Jin, "Research on Teaching-Learning Methods for Manipulator Motion and Their Applications" (机械臂运动的示教学习方法与应用研究), China Doctoral Dissertations Full-text Database, Information Science and Technology series, 2018-08-15, chapters 2-3 *

Similar Documents

Publication Publication Date Title
CN110561450B (en) Robot assembly offline example learning system and method based on dynamic capture
CN110561430B (en) Robot assembly track optimization method and device for offline example learning
Tang et al. A framework for manipulating deformable linear objects by coherent point drift
US20220009100A1 (en) Software Interface for Authoring Robotic Manufacturing Process
CN110573308B (en) Computer-based method and system for spatial programming of robotic devices
EP3272473B1 (en) Teaching device and method for generating control information
EP3166084B1 (en) Method and system for determining a configuration of a virtual robot in a virtual environment
CN109397285B (en) Assembly method, assembly device and assembly equipment
CN112207835B (en) Method for realizing double-arm cooperative work task based on teaching learning
CN104457566A (en) Spatial positioning method not needing teaching robot system
JP2021167060A (en) Robot teaching by human demonstration
CN109531577B (en) Mechanical arm calibration method, device, system, medium, controller and mechanical arm
CN111459274B (en) 5G + AR-based remote operation method for unstructured environment
JP6150386B2 (en) Robot teaching method
CN111421554A (en) Mechanical arm intelligent control system, method and device based on edge calculation
CN114474106A (en) Method for controlling a robot device and robot control device
CN108153957A (en) Space manipulator kinetics simulation analysis method, system and storage medium
CN110561431B (en) Robot assembly demonstration track extraction method and device for offline example learning
CN106774178B (en) Automatic control system and method and mechanical equipment
JP2022099420A (en) Simulation device and simulation program
JP7376318B2 (en) annotation device
CN109531579B (en) Mechanical arm demonstration method, device, system, medium, controller and mechanical arm
TWI696529B (en) Automatic positioning method and automatic control apparatus
CN212312013U (en) Motion simulation platform
RU2813444C1 (en) Mixed reality human-robot interaction system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Lou Yunjiang

Inventor after: Cao Zhiqi

Inventor after: Hu Haopeng

Inventor after: Zhao Zhilong

Inventor after: Yang Xiansheng

Inventor after: Zhang Jinmin

Inventor before: Lou Yunjiang

Inventor before: Cao Zhiqi

Inventor before: Hu Haopeng

Inventor before: Zhao Zhilong

Inventor before: Yang Xiansheng

GR01 Patent grant