Disclosure of Invention
To address the lack, in the prior art, of a solution for high-precision industrial assembly in narrow spaces, the invention provides a robot assembly demonstration trajectory extraction method and device based on offline example learning.
A first aspect of the invention is a robot assembly demonstration trajectory extraction method for offline example learning, comprising the following steps:
A. collecting the demonstration motion trajectories of the arms and hands of an assembly demonstrator through an optical motion capture device;
B. preprocessing the acquired arm and hand motion trajectories offline, eliminating noise data and irrelevant data, fusing multiple demonstrated assembly trajectories to obtain an assembly motion model represented by a Gaussian mixture model, and matching a fixture to the assembly motion;
C. providing the obtained assembly trajectory and fixture information to the robot for simulated assembly verification.
In some embodiments according to the invention, step A further comprises the steps of:
A1, establishing data connection channels between the computing equipment and a plurality of infrared motion capture cameras in the optical motion capture device;
A2, configuring the fields of view of the infrared motion capture cameras to converge on the same three-dimensional area, and calibrating each camera;
A3, collecting images of reflective marker points placed at key positions on the arms and hands of the assembly demonstrator within the three-dimensional area;
A4, triggering all infrared motion capture cameras to read the position data of each reflective marker point in real time.
In some embodiments according to the invention, step B further comprises the steps of:
B1, processing the data acquired by the data acquisition module with an anomaly detection algorithm based on local outlier factors: obtain the position of each reflective marker point at each sampling instant of each assembly demonstration, calculate each point's outlier factor at each instant, treat points whose outlier factor exceeds a preset threshold as sampling noise, and remove them from the demonstration data set;
B2, performing density clustering with a density clustering algorithm, using the speed of each reflective marker point at each sampling instant of each assembly demonstration as the feature of that instant;
B3, modeling the assembly trajectories of multiple assembly demonstrations obtained by the data preprocessing module with a Gaussian mixture model: choose the number of Gaussian kernels by the Bayesian information criterion, learn the multiple assembly trajectory segments by expectation maximization, and obtain the parameters of each Gaussian kernel, thereby producing an assembly motion model, represented by the Gaussian mixture model, that reflects the key information of the assembly actions.
In some embodiments according to the invention, step B1 further comprises the steps of:
Given a sample set {x_i}, i = 1, 2, 3, …, define:

RD_k(x, x') = max(||x - x^(k)||, ||x - x'||)

as the reachable distance, where x^(k) is the k-th sample in {x_i} closest to x, and k is a manually selected integer parameter.

The local reachability density of x is defined as the reciprocal of the average reachable distance from x to its k nearest neighbors, and the local outlier factor of x is the ratio of the average local reachability density of those neighbors to the local reachability density of x itself.

If the local outlier factor of a sample x exceeds the preset value, the corresponding sample x is eliminated from the demonstration data.
In some embodiments according to the invention, step B further comprises the steps of:
B4, analyzing the trajectories of the reflective marker points at the hand joints and fingertips to obtain the assembly actions of the hands, then matching the hand actions against data to obtain the fixture type best suited to the assembly action.
In some embodiments according to the invention, step B4 further comprises the steps of:
identifying the segmented assembly trajectory as one or more offline assembly steps;
configuring a fixture node between consecutive offline assembly steps for matching or replacing a fixture;
in synchronization with data from the robot controller and its gripper library, introducing a path for the robot end-effector to move to the tool storage area to mount or replace a gripper.
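The fixture-node idea above can be sketched in a few lines of Python. The step names, gripper types, and the `AssemblyStep`/`FixtureNode` classes below are hypothetical illustrations rather than the invention's actual interface: a fixture node is inserted wherever the next assembly step needs a different gripper than the one currently mounted.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class AssemblyStep:
    name: str
    gripper: str  # gripper type matched to the demonstrated hand motion

@dataclass
class FixtureNode:
    """Marker between steps: the robot visits the tool storage area here."""
    mount: str    # gripper to mount (or swap to)

def insert_fixture_nodes(steps: List[AssemblyStep]) -> List[Union[AssemblyStep, FixtureNode]]:
    """Insert a fixture node before every step whose gripper differs
    from the one currently mounted (nothing mounted at the start)."""
    plan: List[Union[AssemblyStep, FixtureNode]] = []
    mounted = None
    for step in steps:
        if step.gripper != mounted:
            plan.append(FixtureNode(mount=step.gripper))
            mounted = step.gripper
        plan.append(step)
    return plan

# hypothetical rail-assembly steps, as in the example application below
steps = [AssemblyStep("pick-and-place rail", "pneumatic claw"),
         AssemblyStep("position rail", "pneumatic claw"),
         AssemblyStep("install screws", "electric screwdriver")]
plan = insert_fixture_nodes(steps)  # two gripper changes -> two fixture nodes
```

At execution time, each `FixtureNode` would be expanded into the path to the tool storage area described above.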
In some embodiments according to the invention, step C further comprises the steps of:
C1, importing the trajectory optimized offline by the trajectory generation module into the simulation environment of the robot simulation platform V-REP;
C2, matching a robot system model, importing a three-dimensional model of the assembled part, controlling the robot in the simulation environment to simulate the assembly along the offline assembly trajectory, and verifying whether the assembled part reaches the expected position and posture.
A second aspect of the invention is a computing apparatus comprising a memory and a processor, the processor implementing the above method when executing a program stored in the memory.
A third aspect of the invention is a robot offline trajectory extraction system. The system comprises an optical motion capture platform and the above computing apparatus. The optical motion capture platform comprises a plurality of optical motion capture cameras, network equipment, computing equipment, and a plurality of reflective marker points placed on the hands of an assembly demonstrator; the optical motion capture cameras are arranged symmetrically around the assembly demonstration workbench, and each camera is connected to the computing equipment through the network equipment.
The beneficial effects of the invention are as follows:
The arm and hand assembly motion trajectories acquired by the optical motion capture equipment have high precision, and the reflective marker points do not hinder the user's assembly motions, making the transfer of manual experience simpler and more flexible. Offline data preprocessing reduces redundant data in the demonstrations while retaining key information. The example learning method makes the assembly process highly transferable: it can be rapidly deployed on different assembly lines, robots, and other equipment, giving it a wide range of applications.
Detailed Description
The conception, specific structure, and technical effects of the invention are described clearly and completely below in conjunction with the embodiments and accompanying drawings, so that the objects, schemes, and effects of the invention can be fully understood.
FIG. 1 is a diagram illustrating an optical motion capture platform according to one embodiment. The optical motion capture platform 10 comprises a set of optical motion capture cameras 11 (e.g., high-precision infrared cameras), network devices 12 (e.g., routers, Ethernet switches), and computing devices 13 (e.g., industrial controllers, industrial personal computers, PCs). As shown, a plurality of (e.g., six) optical motion capture cameras 11 are mounted on support stands above the workspace where the assembly demonstration takes place (e.g., 0.4 m × 0.6 m × 1 m in length, width, and height), photographing the workspace obliquely from multiple angles. A plurality of retro-reflective marker points 14 are affixed to key positions of the assembler (e.g., a skilled assembly worker), such as the arm joints and fingertips. Preferably, the cameras are arranged symmetrically around the workbench as shown in FIG. 1: an optimized spatial layout of the cameras ensures capture precision for fine two-handed motions, and data fusion across the multi-angle cameras avoids mutual occlusion of the hand motions, ensuring consistent motion capture. Within the workspace, all cameras are thus configured to simultaneously capture visual information on the real-time positions and poses reflected by the retro-reflective marker points on the assembler's arms and hands during assembly. Further, each camera 11 may be fitted with a light source.
Referring to fig. 2, in an embodiment, a robot assembly offline example learning system comprises a data acquisition module, a data preprocessing module, an offline robot motion trajectory generation module, and a simulation verification module. These modules may be integrated in the computing device 13 of the optical motion capture platform, or in other computing devices of the robot assembly offline example learning system.
The data acquisition module is configured to: calibrate the optical motion capture cameras 11 using a visual calibration algorithm; after calibration, send an acquisition command to the cameras through the computing equipment and collect the position and posture data of the reflective marker points 14 attached to the arms and hands of the skilled assembly worker; exchange data with the cameras 11 through the switch 12, receive the position and posture data of the user's arm and hand motions, and analyze the received data offline to generate motion trajectory information executable by the robot. In addition, the position change data of the reflective marker points captured by the cameras is sent to an upper computer, which stores the two-handed assembly action data.
The data preprocessing module processes the acquired motion data offline, removing noise and reducing redundant data. In one embodiment, the data preprocessing module is configured to perform anomaly detection, trajectory segmentation, and trajectory fusion.
Anomaly detection processes the data acquired by the data acquisition module with an anomaly detection algorithm based on local outlier factors: the position of each reflective marker point at each sampling instant of each assembly demonstration is obtained, the outlier factor of each point at each instant is calculated, and points whose outlier factor exceeds a given threshold are treated as sampling noise and excluded from the demonstration data set, improving the efficiency of later processing.
Trajectory segmentation applies a density clustering algorithm, using the speed of each reflective marker point at each sampling instant of each assembly demonstration as the feature of that instant. In this way the trajectory can be segmented so that only assembly-related trajectory data is retained, while irrelevant data (such as the motion of the user's arm and hand from an arbitrary position to the part to be assembled, or away from the part after assembly) is eliminated, simplifying the later learning process.
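The velocity-based segmentation can be sketched with a minimal density clustering pass. The patent does not name a specific density clustering algorithm, so the pure-Python DBSCAN-style routine below, operating on scalar speed features, is an illustrative assumption: samples that are dense in speed space group into clusters (e.g., slow fine-assembly motion vs. fast approach/retreat motion), and sparse samples are marked as noise with label -1.

```python
def dbscan_1d(values, eps, min_pts):
    """Minimal DBSCAN on scalar features (the per-sample speeds).
    Returns one cluster label per value; -1 marks noise."""
    n = len(values)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n) if abs(values[j] - values[i]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise (may become border)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:     # noise reachable from a core point: border
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:
                seeds.extend(nb_j)  # j is a core point: keep expanding
    return labels

# slow assembly-relevant samples vs. fast approach/retreat samples
speeds = [0.01, 0.02, 0.015, 0.9, 0.95, 1.0, 0.02, 0.01]
labels = dbscan_1d(speeds, eps=0.1, min_pts=2)
```

Consecutive sampling instants sharing a label would then form one trajectory segment.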
Trajectory fusion proceeds as follows: first, a Gaussian Mixture Model (GMM) models the assembly trajectories of multiple assembly demonstrations obtained by the data preprocessing module, with the number of Gaussian kernels chosen by the Bayesian Information Criterion (BIC); the multiple trajectory segments are then learned by Expectation-Maximization (EM) to obtain the parameters of each Gaussian kernel (mean, covariance, and prior probability); finally, an assembly motion model is obtained that is represented by the Gaussian mixture model and reflects the key information of the assembly motion.
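As an illustration of choosing the kernel count by BIC and fitting by EM, here is a compact one-dimensional GMM sketch in numpy. It is a simplification under stated assumptions: real demonstration trajectories are multi-dimensional, and the quantile-based initialization and variance floor are implementation choices, not part of the invention. Note also that "maximizing the BIC" in the text corresponds, in the sign convention used here, to minimizing p·ln N - 2·ln L.

```python
import numpy as np

def _dens(x, pi, mu, var):
    """Per-kernel Gaussian densities at each sample, weighted by priors pi."""
    return pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm(x, K, iters=100):
    """Fit a 1-D Gaussian mixture by expectation-maximization."""
    pi = np.full(K, 1.0 / K)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))  # spread initial means
    var = np.full(K, np.var(x))
    for _ in range(iters):
        r = _dens(x, pi, mu, var)
        r = r / r.sum(axis=1, keepdims=True)       # E-step: responsibilities
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)                           # M-step: priors
        mu = (r * x[:, None]).sum(axis=0) / nk     # M-step: means
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-4  # floored
    ll = np.log(_dens(x, pi, mu, var).sum(axis=1)).sum()
    return pi, mu, var, ll

def pick_k_by_bic(x, kmax=3):
    """Choose the kernel count K with the best (lowest) BIC."""
    best_k, best_bic = 1, np.inf
    for K in range(1, kmax + 1):
        *_, ll = em_gmm(x, K)
        p = 3 * K - 1                              # free parameters of a 1-D GMM
        bic = p * np.log(len(x)) - 2 * ll
        if bic < best_bic:
            best_k, best_bic = K, bic
    return best_k

# two well-separated clusters of demonstration samples: BIC should prefer K = 2
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 200), rng.normal(5.0, 0.1, 200)])
```

In the invention's setting, `x` would be replaced by the stacked (time, position) samples of the demonstrated trajectory segments.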
The offline robot motion trajectory generation module generates a smooth assembly trajectory. It is configured to output a smooth, efficient robot assembly motion trajectory in task space (Cartesian space) by Gaussian Mixture Regression (GMR), based on the assembly motion model produced by the data preprocessing module. Trajectories can be generated for the specific initial and target positions of the robot assembly, so the robot is not limited to the initial and target part positions used during the manual demonstration. The assembly trajectory is highly transferable: it is not tied to a particular assembly site or equipment, and can be rapidly deployed to robot assembly systems of different models and configurations.
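Given a fitted joint GMM over time t and position x, GMR outputs the conditional expectation E[x | t]: each kernel contributes its conditional mean, weighted by its responsibility at time t. The sketch below uses two hand-picked kernels with scalar covariances; the parameter values are hypothetical illustrations, not values learned from any demonstration.

```python
import numpy as np

# Hypothetical GMM over (t, x): two kernels, no t-x correlation.
pri    = np.array([0.5, 0.5])   # priors pi_k
mu_t   = np.array([0.0, 1.0])   # time means
mu_x   = np.array([0.0, 5.0])   # position means
var_t  = np.array([0.05, 0.05]) # time variances (Sigma_tt)
cov_tx = np.array([0.0, 0.0])   # cross covariances (Sigma_xt)

def gmr(t):
    """Gaussian mixture regression: E[x | t] under the joint GMM."""
    # responsibility h_k(t) of each kernel at time t
    w = pri * np.exp(-0.5 * (t - mu_t) ** 2 / var_t) / np.sqrt(2 * np.pi * var_t)
    w = w / w.sum()
    # per-kernel conditional mean mu_x + Sigma_xt Sigma_tt^{-1} (t - mu_t)
    cond = mu_x + cov_tx / var_t * (t - mu_t)
    return float((w * cond).sum())
```

Sweeping `gmr(t)` over the demonstration's time range yields the smooth regressed trajectory; near t = 0 the output follows the first kernel (x near 0), near t = 1 the second (x near 5), blending in between.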
The simulation verification module migrates the trajectory learned from the offline examples to a simulation environment and controls the robot in the simulation platform to perform the same assembly actions. The offline learned trajectory can be migrated to any robot system that meets the assembly degree-of-freedom and workspace requirements, so the learning result is independent of any specific robot system. In addition, the simulation verification module provides a virtual environment for running the assembly example, with the following steps:
1) importing the trajectory optimized offline by the trajectory generation module into a robot simulation platform (e.g., the simulation environment of V-REP);
2) selecting a robot system model; any robot system meeting the assembly degree-of-freedom and workspace requirements may be selected, including but not limited to serial robots, parallel robots, and single-arm or dual-arm robot systems;
3) through example learning, the robot system can efficiently complete high-precision assembly tasks in the same assembly scene and workspace as the skilled worker.
When the robot's virtual assembly simulation passes verification, the simulation verification module transmits the debugged robot motion instructions and data to the robot controller to control the robot's motion during actual assembly.
FIG. 3 is a flow diagram of a method of the robot assembly offline example learning system in an embodiment. The method comprises the following steps:
S1: collecting the demonstration motion trajectories of the arms and hands of the assembly demonstrator through the optical motion capture device.
S2: preprocessing the acquired arm and hand motion trajectories offline, eliminating noise data and irrelevant data, and fusing multiple demonstrated assembly trajectories to obtain an assembly motion model represented by a Gaussian mixture model.
S3: generating the robot's assembly motion trajectory from the obtained assembly motion model and the actual initial and target poses of the parts in the robot assembly; analyzing the motion information of the hand marker points to obtain the pose and state information of the assembly fixture required by the robot; and applying a post-processing algorithm to rework the generated robot assembly trajectory into a minimum-time motion trajectory suitable for robot assembly.
S4: controlling the robot to perform on-site assembly according to the obtained assembly trajectory and fixture information, completing the whole assembly demonstration learning process.
It can be understood that steps S1 and S2 mainly concern robot assembly demonstration trajectory extraction, and step S3 mainly concerns robot assembly trajectory optimization. Each step is detailed below.
In some embodiments, step S1 further includes:
S1.1: building the motion capture platform.
First, the cameras are fixed around the experimental platform; one end of an Ethernet cable is connected to each camera and the other to the switch. The switch provides power and data communication for the cameras; the data acquisition module, data preprocessing module, offline robot motion trajectory generation module, and simulation verification module form a local area network, and the computing equipment sends control commands.
S1.2: collecting user assembly demonstration data.
Reflective marker points are affixed to key positions (fingertips and finger joints) of the user's arms and hands, and the same assembly action is demonstrated repeatedly in the designated assembly work area. Each camera is a high-precision infrared motion capture camera that captures the position and posture of the reflective marker points attached to the user's arms and hands: the camera's flash unit emits light, the marker points reflect it, and the camera lens collects the reflected light from the scene to form an image focused on the camera sensor plane, from which the position of each reflective marker point is analyzed. This information is transmitted through the switch to the upper computer for storage.
In some embodiments, step S2 further includes:
S2.1: the anomaly detection method.
The anomaly detection method processes the data acquired by the data acquisition module with an anomaly detection algorithm based on local outlier factors: the position of each reflective marker point at each sampling instant of each assembly demonstration is obtained, the outlier factor of each point at each instant is calculated, and points whose outlier factor exceeds a given threshold are treated as sampling noise and excluded from the demonstration data set, improving the efficiency of later processing. The specific implementation is as follows:
given a set of sample sets xi1,2,3, define:
RDk(x,x′)=max(||x-x(k)||,||x-x′||) (1)
as the reachable distance, where x(k)Is the set { xiThe k-th sample closest to x in the lattice, k being a manually selected integer parameter.
This is called local reachability density. The local anomaly factor is defined as two definitions (1) and (2)
As the LOF rises, the likelihood of the sample x becoming an outlier rises and is eliminated from the presentation data.
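The LOF computation just described can be sketched in a few lines of pure Python. The scalar samples and the choice k = 2 below are illustrative assumptions (marker positions are really 3-D points); `lrd` follows the reciprocal-of-average-reachable-distance definition, and each score is the neighbor-to-self density ratio.

```python
def lof_scores(samples, k):
    """Local outlier factor of each scalar sample (larger = more anomalous).
    Uses RD_k(x, x') = max(||x - x^(k)||, ||x - x'||) as in equation (1)."""
    n = len(samples)
    d = [[abs(a - b) for b in samples] for a in samples]
    # k nearest neighbors of each sample (excluding itself)
    nbrs = [sorted((j for j in range(n) if j != i), key=lambda j: d[i][j])[:k]
            for i in range(n)]
    kdist = [d[i][nbrs[i][-1]] for i in range(n)]              # ||x - x^(k)||
    # local reachability density, equation (2)
    lrd = [k / sum(max(kdist[i], d[i][j]) for j in nbrs[i])
           for i in range(n)]
    # LOF: average neighbor density over own density
    return [sum(lrd[j] for j in nbrs[i]) / (k * lrd[i]) for i in range(n)]

# five closely spaced demonstration samples and one spatial outlier
samples = [0.0, 0.1, 0.2, 0.3, 0.4, 5.0]
scores = lof_scores(samples, k=2)
# scores[5] is far above 1, so the sample 5.0 would be removed as noise
```

Inlier scores stay near 1 because their density matches their neighbors'; the threshold from the text would then cut off the outlier.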
S2.2: the trajectory segmentation method.
The trajectory segmentation method performs density clustering with a density clustering algorithm, using the speed of each reflective marker point at each sampling instant of each assembly demonstration as the feature of that instant.
S2.3: the trajectory fusion method.
First, a Gaussian mixture model is used to model the assembly trajectories of multiple assembly demonstrations obtained by the data preprocessing module, with the number of Gaussian kernels chosen by the Bayesian information criterion; the multiple trajectory segments are then learned by expectation maximization to obtain the parameters of each Gaussian kernel (mean, covariance, and prior probability); finally, an assembly motion model is obtained that is represented by the Gaussian mixture model and reflects the key information of the assembly motion.
In some embodiments, step S2 further includes:
S2.4: offline trajectory generation.
The offline robot motion trajectory generation module outputs a smooth, efficient robot assembly motion trajectory in task space (Cartesian space) by Gaussian mixture regression, based on the assembly motion model produced by the data preprocessing module. Trajectories can be generated for the specific initial and target positions of the robot assembly, so the robot is not limited by the initial and target part positions used during the manual demonstration.
S2.5: assembly fixture analysis.
The trajectories of the reflective marker points at the hand joints and fingertips are analyzed to obtain the assembly actions of the hands, and the hand actions are then matched against data to obtain the fixture type best suited to the assembly action.
Regarding step S3:
Because of the structural differences between the human arm and the robot, the assembly trajectory of the human hand is not directly suitable for robot execution. To improve robot assembly efficiency, a trajectory post-processing module is added. The post-processing flow, shown in fig. 4, optimizes the human hand's assembly trajectory, removes noise and irrelevant actions from the hand motion, and converts the hand assembly motion trajectory into a motion trajectory suitable for robot assembly.
Thus, in some embodiments, referring to fig. 4, step S3 further includes:
S3.1: First, the full sequence of robot joint angles over the motion is obtained by solving inverse kinematics on the manual assembly trajectory produced by the offline processing module. The joint angle vector q_i at each time point i is defined as a frame. Let k = 1.
S3.2: A cost function is defined over the joint angle sequence (here taking a 6-degree-of-freedom robot as an example), and the cost C_k of the current trajectory ξ^(0) is calculated.
S3.3: A frame selection policy is defined, from which the probability P_i of each frame being selected under the current trajectory is calculated. A subset of frames of the joint angle sequence is selected according to the policy to form a new sequence, and the cost function of the new sequence is calculated again.
S3.4: Calculate the reward value R_k = C_(k-1) - C_k, where C_k is the cost of the trajectory at iteration k.
S3.5: The probability P_i that each frame of the joint angle sequence is selected is updated according to the reward value. The update strategy raises the probability of frames selected in the current round and lowers the probability of unselected frames, with update rate α (0 < α < 1) and a success indicator S(ξ) determined from each iteration's result: S(ξ) = 1 if the assembly task is completed well, and S(ξ) = 0 otherwise.
S3.6: The whole joint angle trajectory sequence is updated by deleting frames whose selection probability falls below a given threshold; the remaining frames form a new joint angle sequence. Let k = k + 1.
S3.7: If the absolute value of the reward is below a small constant, or the number of iterations exceeds a given threshold, output the currently remaining frame sequence; otherwise, return to step S3.2.
S3.8: Taking the frame sequence output in step S3.7 as a path, perform minimum-time velocity planning and interpolation in joint space, and output the minimum-time motion trajectory.
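A compact sketch of the frame-elimination loop of steps S3.1 to S3.7. Several pieces are illustrative assumptions: the cost function here is simply joint-space path length, the task-success indicator S(ξ) is approximated by whether the cost did not increase, and the initial probability, update rate, and stopping rule are placeholder values rather than the invention's exact formulas (which are not given in full).

```python
import random

def path_cost(frames):
    """Hypothetical cost: total joint-space path length (L1) of the sequence."""
    return sum(sum(abs(a - b) for a, b in zip(q0, q1))
               for q0, q1 in zip(frames, frames[1:]))

def prune_frames(frames, alpha=0.3, rounds=40, seed=0):
    """Probabilistic frame elimination: sample a subsequence of frames,
    compute the reward R_k = C_(k-1) - C_k, update each frame's selection
    probability toward S(xi) if selected (1 - S(xi) otherwise), and
    finally keep the high-probability frames. Endpoints are always kept."""
    rng = random.Random(seed)
    n = len(frames)
    p = [0.9] * n                              # selection probability per frame
    c_prev = path_cost(frames)
    for _ in range(rounds):
        keep = [i for i in range(n) if i in (0, n - 1) or rng.random() < p[i]]
        c = path_cost([frames[i] for i in keep])
        s = 1.0 if c_prev - c >= 0 else 0.0    # S(xi): did this round succeed?
        for i in range(n):
            target = s if i in keep else 1.0 - s
            p[i] = (1 - alpha) * p[i] + alpha * target
        c_prev = c
    return [frames[i] for i in range(n) if i in (0, n - 1) or p[i] >= 0.5]

# a zig-zag single-joint trajectory: pruning can only shorten the L1 path
frames = [[0.0], [1.0], [0.5], [2.0], [1.5], [3.0]]
pruned = prune_frames(frames)
```

Because any subsequence of a path has an L1 length no greater than the original, the pruned trajectory never costs more; the surviving frames would then be fed to the minimum-time interpolation of step S3.8.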
As can be seen from figures 5 and 6, the assembly trajectory obtained by the post-processing module is simpler and more efficient than the manually demonstrated one, and is suitable for execution by a robot.
In some embodiments, step S4 further includes:
S4.1: robot assembly verification. First, using the fixture state data obtained through the offline data processing module, a suitable assembly fixture is selected for the assembly task and changed automatically by the quick-change device at the robot's end. On-site assembly is then carried out using the minimum-time assembly trajectory obtained through the post-processing module, and the assembly result is evaluated.
An exemplary rail mounting application of the invention is described in conjunction with figures 7 and 1. In one example, in the optical motion capture platform 10 shown in fig. 1, a skilled assembly worker picks up a guide rail from the assembly storage area 26, mounts it onto the to-be-assembled part 24 on the assembly jig table 25, and screws it in place. The process can be repeated several times so that the system's data acquisition module collects enough motion positions and posture values of the arms, palms, and fingers, from which a manual assembly trajectory is generated.
For the extracted manual assembly trajectory data, step S2 above is executed on the computing device 13 to perform abnormal data elimination, trajectory segmentation, trajectory fusion, offline trajectory generation, and assembly fixture analysis. In this manual rail assembly example, the end of the operation path (e.g., the position and posture of the fingers) is limited to the range of the assembly jig table 25 and the assembly storage area 26; path data outside this range, or path data that would cause assembly interference, can be excluded as abnormal data. In addition, the manual assembly trajectory can be segmented into different assembly steps, such as rail pick-and-place, positioning of the rail against the to-be-assembled part, and screw mounting, distinguished by the captured speed and motion type of the reflective marker points. For example, when the rail is picked and placed by hand, the finger motion is mainly spatial translation with obvious elbow and shoulder movement, so that group of motion trajectories is classified as a pick-and-place trajectory. When the fingers and arms clamp the rail and hold it still for a given time, the group is classified as a positioning trajectory; and when the collected data show only the palm and fingers operating a wrench tool to apply torque, the group is classified as a fastener installation trajectory. In the trajectory fusion step, the generated assembly motion model then reflects key assembly information, such as the key points of a reasonable rail transport path, the precise mounting direction of the rail, and the fastener type and its installation position.
Then, on the computing device 13, step S3 above is executed: the preprocessed data is matched to a suitable robot (for example, the serial robot 21 shown in fig. 7), the joint angles corresponding to each motion frame are solved by inverse kinematics from the end-effector positions, minimum-time velocity planning and interpolation are performed in joint space, and the minimum-time motion trajectory is output. In addition, the gripper 23 at the robot's end is configured per assembly step: a pneumatic claw for the pick-and-place and positioning steps, and an electric screwdriver for the screw mounting step. Further, by synchronizing with the data of the robot controller and its fixture library, the minimum-time assembly trajectory obtained through the post-processing module can be used when step S4 is executed on the computing device 13, with fixture nodes configured between assembly steps for matching or replacing fixtures. For example, as shown in fig. 7, before the rail is carried from the assembly storage area 26 to the to-be-assembled part 24 on the jig table 25, a pneumatic claw gripper for gripping the rail is configured at the node where the robot switches from the previous step to the rail pick-up step. With the gripper deployed at this node, the robot controller and the gripper management device of the tool storage area 22 introduce a path for the robot end to move to the tool storage area 22 to install or replace the gripper. In this way, the manually demonstrated assembly trajectory transitions to a robot offline assembly trajectory (including fixtures), realizing practical operation of the offline assembly platform 20.
It should be recognized that the methods described herein may be implemented or carried out by computing device hardware, a combination of hardware and software, or by computing device instructions stored in a non-transitory computing device readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computing device system. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computing device systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computing device programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. The computing device program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, a mini-computing device, a mainframe, a workstation, a networked or distributed computing environment, a separate or integrated computing device platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it is readable by a programmable computing device, which when read by the storage medium or device is operative to configure and operate the computing device to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computing device-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computing device itself when programmed according to the methods and techniques described herein.
The computing device program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The invention is capable of other modifications and variations in its technical solution and/or its implementation, within the scope of protection of the invention.