CN109108942B - Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS - Google Patents


Info

Publication number
CN109108942B
CN109108942B (application CN201811057825.1A)
Authority
CN
China
Prior art keywords
mechanical arm
motion
teaching
information
demonstrator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811057825.1A
Other languages
Chinese (zh)
Other versions
CN109108942A (en)
Inventor
吴怀宇
张思伦
陈洋
吴杰
梅壮
代雅婷
Current Assignee
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN201811057825.1A priority Critical patent/CN109108942B/en
Publication of CN109108942A publication Critical patent/CN109108942A/en
Application granted granted Critical
Publication of CN109108942B publication Critical patent/CN109108942B/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/0081: Programme-controlled manipulators with master teach-in means
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1605: Simulation of manipulator lay-out, design, modelling of manipulator

Abstract

The invention relates to a mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS. First, a teaching object is set and controlled to perform a demonstration motion. A Kinect camera acquires a depth map, and the three-dimensional pose of the teaching object is positioned and tracked in combination with a PnP algorithm. A spatial mapping system is established to map the pose of the teaching object to the end of the mechanical arm; the control information of each joint is resolved by inverse kinematics and sent in real time to indirectly control the arm's motion; finally, the teaching motion information is recorded online and locally linearly optimized and learned with an adaptive DMPS algorithm. The invention removes the constraints of the arm's hardware structure and the dependence on complex sensors found in traditional teaching modes, reducing hardware cost and teaching difficulty; its non-contact character enhances the safety of the teaching process while keeping wide applicability, and the proposed adaptive DMPS method gives the whole system good anti-interference performance.

Description

Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS
Technical Field
The invention belongs to the field of mechanical arm motion planning, and particularly relates to a motion control method and system for mechanical arm online visual teaching learning based on a QR code (a two-dimensional bar code).
Background
Motion planning here concerns the mechanical arm module of a robot, whose motion space is high-dimensional, unlike planar path planning. It divides into two categories, joint-space trajectory planning and Cartesian-space trajectory planning; the former traditionally uses spline interpolation, the latter space-line or space-arc planning. Because of the arm's multi-degree-of-freedom spatial character, these methods require complex planning computation in practice: every posture along the motion must be recalculated for each new target, and when an obstacle lies between the arm and the target point the motion is difficult to plan with traditional methods, which therefore suffer from complex calculation, low intelligence and poor adaptability. The Open Motion Planning Library (OMPL), developed by Kavraki et al. at Rice University in the USA, has become the mainstream mechanical arm motion planning platform. The OMPL methods suited to high-dimensional motion planning mainly include the Probabilistic Roadmap (PRM), the Rapidly-exploring Random Tree (RRT) and artificial potential field methods; these algorithms explore the space with different sampling schemes and offer fast planning, probabilistic completeness and other advantages.
In recent years machine learning has developed rapidly, and researchers have worked on teaching-learning modes of mechanical arm movement to simplify the use of robot arms. How to teach a mechanical arm online is a major research problem. Some research institutions adopt customized teachable arms, such as the KUKA arm, which can be strapped to the user's arm so that both move together for online teaching; this requires professional hardware and carries a certain danger during teaching. Other institutions mount large numbers of sensors on the user's moving parts; this relies on many expensive sensors, is prone to interference that degrades the learning effect, loses accuracy when the sensor count is reduced, and requires many manually configured auxiliary devices to regain accuracy. How to realize a teaching method that is both safe and widely applicable is therefore a goal the industry has long pursued.
Disclosure of Invention
Aiming at the teaching difficulty and the poor anti-interference performance of learning in the teaching-learning mode of mechanical arm motion planning, the invention provides a mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS. The method greatly reduces the number of sensors while preserving precision, removes the constraints of the arm's hardware structure and the dependence on complex sensors found in traditional teaching modes, reduces teaching hardware cost and teaching difficulty, enhances user safety through its non-contact character, and has wide applicability.
In order to solve the technical problems, the invention adopts the following technical scheme:
a mechanical arm motion control method based on visual real-time teaching and adaptive DMPS is characterized in that three-dimensional pose recognition, positioning and tracking are carried out on a teaching object with set QR code characteristics through computer vision, the center of a QR code is a two-dimensional code teaching recognition part, and the periphery of the two-dimensional code teaching recognition part is a rectangular frame formed by connecting a white rectangular area and small black rectangles at four corners; meanwhile, a space mapping system is established to map the space information of the teaching object to the tail end of the mechanical arm, and a user indirectly controls the mechanical arm to move in real time by controlling the teaching object during teaching.
Further, the method comprises the following steps: first set a teaching object and control it to perform a demonstration motion; acquire a depth map with a Kinect camera, and recognize, position and track the three-dimensional pose of the teaching object in combination with the PnP algorithm; establish a spatial mapping system to map the demonstrator's spatial pose to the arm-end pose; resolve the control information of each joint angle in real time according to inverse kinematics and send it to the mechanical arm to indirectly control its motion in real time; acquire and record the demonstration motion information online, and perform local linear optimization and learning on the motion information with the adaptive DMPS (Dynamic Movement Primitives) algorithm. The depth information of the teaching object is extracted mainly from the depth map produced by the Kinect depth camera.
Further, the process of recording teaching motion information on line and performing local linear optimization and learning by using the adaptive DMPS algorithm is as follows:
the spatial motion characteristics of the teaching object during training are recorded online; the recorded motion characteristics are decomposed into three degrees of freedom, and DMPS is applied on each degree of freedom to learn an optimal nonlinear-term weight sequence; new target spatial information is set, and the motion characteristics for the new target are generalized on each degree of freedom. A generalization precision threshold is set, and the quality of the learned generalization is judged by whether the generalization result meets the threshold. When interference in the teaching motion causes the generalization result to exceed the threshold, the sample on the corresponding degree of freedom is optimized by local least-squares high-order polynomial fitting, and DMPS learning and generalization are applied again to the optimized sample motion characteristics until the result generalizes accurately to the new target point. The three-degree-of-freedom generalization results that finally meet the threshold are fitted under the same canonical system into the spatial motion characteristic of the arm end, and the motion information of each joint of the mechanical arm is then solved by inverse kinematics to control the arm's motion.
In this technical scheme, the teaching motion information and the mechanical arm command information interact in real time under ROS through the following steps:
step 1: building an ROS environment, building a Kinect camera and a mechanical arm information interaction node, correcting parameters of the Kinect camera and setting the image transmission size and frequency; initializing a control module of the mechanical arm, and setting mechanical arm command receiving and issuing frequency;
step 2: setting an object bearing the set QR code as the demonstrator; identifying the set QR code with the Kinect camera; extracting the demonstrator's depth information by combining the identification result with the corresponding depth map; solving the demonstrator's spatial pose from the two-dimensional information of the set QR code image and the depth information through the PnP algorithm; and establishing a three-dimensional pose coordinate system at the demonstrator's QR code. The 4 known points used to solve the PnP algorithm are the center points of the four small black rectangles on the periphery of the set QR code;
step 3: establishing a D-H model of the mechanical arm, designing the transformation of the arm end relative to the base according to forward kinematics, and then establishing the spatial mapping system by combining the spatial pose of the demonstrator relative to the Kinect;
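The D-H model and forward-kinematics transformation of this step can be sketched as follows. This is a minimal illustration assuming the standard Denavit-Hartenberg convention; the actual link parameters of the arm used by the invention are not given in the text, so the two-link planar example in the usage note is purely hypothetical.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link in the standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Compose link transforms: pose of the arm end relative to the base."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T
```

For a hypothetical two-link planar arm with lengths l1 and l2, `forward_kinematics([(q1, 0, l1, 0), (q2, 0, l2, 0)])` reproduces the familiar end position (l1·cos q1 + l2·cos(q1+q2), l1·sin q1 + l2·sin(q1+q2)).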
step 4: controlling the demonstrator to perform a demonstration motion in front of the Kinect camera, and recording the three-dimensional motion information of the demonstrator in the demonstration motion online;
step 5: mapping the motion of the demonstrator to the motion of the arm end according to the spatial mapping system established in step 3, solving the motion information of each joint of the mechanical arm by inverse kinematics, and sending joint motion commands through the node to the motion control card of the lower computer to drive the arm in real time, realizing visual real-time teaching of the mechanical arm;
step 6: decomposing the training motion information recorded in step 4 into three one-dimensional motion signals on the x, y and z degrees of freedom, wherein each one-dimensional signal is a continuous time series of displacement, velocity and acceleration {x_demo(t), ẋ_demo(t), ẍ_demo(t)}; the motion information is discretized with a step length Δt, where t ∈ {Δt, 2Δt, …, nΔt}, the motion start point is x0 = x_demo(0), the end point is g = x_demo(nΔt), and the motion time constant is τ = nΔt; the single-degree-of-freedom motion characteristics are learned on each of the three degrees of freedom with the DMPS algorithm, and a learning weight sequence w_i is calculated;
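The discretization of one degree of freedom can be sketched numerically. The (displacement, velocity, acceleration) triple, the step Δt and τ = nΔt come from the text; the function name and the use of central finite differences (`np.gradient`) are our assumptions.

```python
import numpy as np

def discretize_motion(x_demo, dt):
    """Turn a sampled 1-D displacement series into the (displacement,
    velocity, acceleration) triple used by DMPS, together with the
    start point x0, end point g, and time constant tau = n * dt."""
    x = np.asarray(x_demo, dtype=float)
    v = np.gradient(x, dt)       # central differences in the interior
    a = np.gradient(v, dt)
    x0, g = x[0], x[-1]
    tau = (len(x) - 1) * dt
    return x, v, a, x0, g, tau
```

For example, sampling x(t) = t² at Δt = 0.05 recovers velocity ≈ 2t and acceleration ≈ 2 away from the sequence boundaries, where one-sided differences are less accurate.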
step 7: setting a new motion target and a generalization precision threshold, and generalizing the new target's motion characteristics from the learned weight sequence and the new target information; if the generalization result on some degree of freedom exceeds the precision threshold, optimizing the training motion characteristic of the corresponding degree of freedom by local least-squares high-order polynomial fitting, and returning the optimized training motion to step 6 for relearning;
step 8: fitting the new target's x, y and z one-dimensional motion characteristics obtained in step 7, which meet the precision requirement, under the same canonical system to obtain the three-dimensional spatial motion information of the mechanical arm for the new target, decomposing it with inverse kinematics, and calculating the motion information of each joint of the mechanical arm;
step 9: sending the motion information of each joint of the mechanical arm obtained in step 8 to the motion control card of the lower computer, so that the mechanical arm autonomously and accurately moves to the new target point according to the motion information.
In the above technical scheme, in step 1 the image size is set to 640 × 480, the image acquisition frequency to 30 frames per second, and the mechanical arm node command issuing and receiving frequency to 30 Hz.
In this technical scheme, in step 2 the demonstrator positioning algorithm is the PnP algorithm, and the demonstrator depth information is the depth of the set QR code's center in the Kinect depth map; the positioning accuracy is 1 mm.
In the above technical scheme, in step 3 the linked frames of the spatial transformation system are the Kinect and the mechanical arm base: the pose of the demonstrator relative to the Kinect is taken directly as the pose of the arm end relative to the arm base.
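This direct reuse of the tag pose can be sketched with homogeneous transforms. The function name is ours; the optional `scale` factor on the translation (for matching the demonstration workspace to the arm workspace) is our assumption and not part of the scheme as stated.

```python
import numpy as np

def map_teach_to_effector(T_tag_in_kinect, scale=1.0):
    """Reuse the demonstrator's 4x4 pose w.r.t. the Kinect as the desired
    end-effector pose w.r.t. the arm base, per the space mapping above.
    `scale` rescales only the translation part (our assumption)."""
    T = np.array(T_tag_in_kinect, dtype=float, copy=True)
    T[:3, 3] *= scale
    return T
```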
In this technical scheme, in step 7 the generalization precision threshold is set to 5%, the local optimization range of the training motion to 5%–95% of the whole motion, and the polynomial order in the least-squares high-order polynomial fitting to 8–15.
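The local least-squares polynomial optimization with these parameters can be sketched as follows. The 5%–95% span and default order 8 come from the text; fitting on a normalized domain is our numerical-conditioning choice.

```python
import numpy as np

def local_polyfit_optimize(x, lo=0.05, hi=0.95, order=8):
    """Replace the central [lo, hi] fraction of a 1-D motion sample with
    its least-squares polynomial fit, leaving the end samples untouched."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i0, i1 = int(n * lo), int(n * hi)
    u = np.linspace(-1.0, 1.0, i1 - i0)   # normalized domain for conditioning
    coeffs = np.polyfit(u, x[i0:i1], order)
    out = x.copy()
    out[i0:i1] = np.polyval(coeffs, u)
    return out
```

On a noisy smooth trajectory this suppresses the interference on the fitted span while keeping the boundary samples, so the relearned DMPS sees a cleaner teaching signal.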
In the above technical scheme, in step 9 the command sending frequency of the upper computer is set to 30 Hz, and the precision of the mechanical arm joint command information to 0.01°.
A system employing the above method, comprising:
a movable teaching object bearing the set QR code feature on its surface, moved by a human demonstrator or moving autonomously;
a Kinect camera for identifying the QR code on the teaching object;
an upper computer communicating with the Kinect camera to acquire the motion information of the teaching object;
and a mechanical arm fitted with an end effector, in remote or short-range communication with the upper computer so as to follow the movement of the teaching object.
The invention realizes a mechanical arm motion control method based on visual real-time teaching and adaptive DMPS: an object bearing the set QR code is recognized, three-dimensionally positioned and tracked through computer vision; through a spatial transformation system, controlling the teaching object controls the motion of the arm end in real time; and the motion characteristics of the teaching object are acquired online for learning.
Compared with the prior art, the invention has the following advantages:
the teaching process provided by the invention does not need to use special mechanical arm equipment and a complex sensor, so that the hardware cost required by teaching is reduced;
the teaching process provided by the invention is not restricted by a hardware structure of the mechanical arm, and has wider applicability;
in the visual real-time teaching provided by the invention, the set QR code object is adopted for teaching, and the teaching object is not in direct contact with the mechanical arm, so that the safety of the teaching process is greatly enhanced;
the self-adaptive DMPS solves the problem of poor learning generalization precision caused by interference in teaching motion, and the whole system has certain anti-interference capability.
The whole visual real-time teaching and online learning system provided by the invention lets non-professionals without robotics knowledge use the mechanical arm simply and safely, and has the advantages of ease of use and practicality.
drawings
FIG. 1 is a diagram of a robot vision real-time teaching system according to the present invention.
Fig. 2 shows an embodiment of the set QR code used for recognition and three-dimensional positioning according to the present invention.
Fig. 3 is a flowchart of a robot arm motion control method based on visual real-time teaching and adaptive DMPS according to the present invention.
FIG. 4 is a diagram of a robot vision real-time teaching process; wherein (a) - (d) are different position diagrams of the mechanical arm moving along with the teaching object in real time in sequence.
FIG. 5 is an orthogonal exploded view of three degrees of freedom for teaching motion; wherein (a) - (c) are exploded views of three axes xyz in sequence.
FIG. 6 is a generalized diagram of three degrees of freedom motion characteristics of a new target; wherein (a) - (c) are generalized diagrams of xyz triaxial in sequence.
FIG. 7 is a comparison graph before and after x-degree-of-freedom teaching motion optimization.
FIG. 8 is a comparison graph of generalization results before and after x-degree-of-freedom teaching motion optimization.
Detailed Description
In order to further explain the technical scheme of the present invention, the following describes in detail a mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS according to the present invention with reference to fig. 1 to 8.
Fig. 1 is a structural diagram of a robot vision real-time teaching system used in the method of the present invention. The camera 1 is used for reading a teaching object with a set QR teaching code 2, the teaching object can be in a static or moving state, the mechanical arm 3 provided with the end effector is in remote or near-distance communication with an upper computer, and the upper computer acquires the depth and position information of the teaching object represented by the QR teaching code 2 through the camera 1 and further controls the mechanical arm 3 to move.
As shown in fig. 2, the set QR code 2 on the teaching object is a rectangular two-dimensional code comprising rectangular color blocks 5 at the four corners of a rectangle and a two-dimensional code area 4 located between the four rectangular color blocks 5. The rectangular color blocks 5 are the PnP-algorithm positioning part, and the two-dimensional code area 4 is the teaching recognition part.
The mechanical arm motion control method based on visual real-time teaching and adaptive DMPS of the invention is shown in figures 3-8. Kinect and the PnP (Perspective-n-Point) algorithm perform spatial three-dimensional positioning and tracking of the visual real-time teaching object; a spatial mapping relation is established between the teaching object and the mechanical arm actuator so that a user can indirectly control the arm's motion in real time by controlling the teaching object; and the motion information can be learned and trained through the adaptive Dynamic Movement Primitives algorithm (DMPS for short), realizing motion learning and autonomous, accurate generalization by the mechanical arm.
According to the method, a depth map is obtained with a Kinect, and three-dimensional pose positioning and tracking of the demonstrator bearing the set QR code feature is performed in combination with the PnP algorithm. A spatial mapping system is built to map the demonstrator's spatial information to the arm end, so that during teaching the user indirectly controls the arm's motion in real time by controlling the demonstrator. A generalization precision threshold is set and the teaching motion information is fed into the adaptive DMPS for learning, yielding a well-converged nonlinear result; accurate spatial motion information is generalized in combination with the task information; finally, the motion information of each joint is resolved by inverse kinematics and sent to the motion control card of the lower computer to realize autonomous motion of the mechanical arm.
First, an object bearing the set QR code is set as the demonstrator and controlled to perform a demonstration motion; the demonstrator is identified by the Kinect and the PnP (Perspective-n-Point) algorithm and then three-dimensionally positioned and tracked in combination with the depth information. A spatial transformation system is established to map the teaching object's spatial pose to the arm-end pose, and the motion information of each joint is solved according to inverse kinematics to control the arm's motion in real time. The spatial motion characteristics of the teaching object during training are then recorded online and decomposed into three degrees of freedom; DMPS is applied on each degree of freedom to learn an optimal nonlinear-term weight sequence, and the learning effect is checked by the generalization error on a new target. New target spatial information is set and the motion characteristics for the new target are generalized on each degree of freedom against a set generalization precision threshold; when interference in the teaching motion makes the generalization result exceed the threshold, the sample on the offending degree of freedom is optimized by local least-squares polynomial fitting, and DMPS is reapplied to the optimized sample motion characteristics so that they generalize precisely to the new target point. The three-degree-of-freedom generalization results that finally meet the threshold are fitted under the same canonical system into the spatial motion characteristic of the arm end, and the motion information of each joint is then resolved by inverse kinematics to control the arm's motion.
The whole process flow chart of the technical scheme is shown in figure 3, and the specific implementation steps are as follows:
step 1: building an ROS environment, building Kinect and mechanical arm information interaction nodes, correcting parameters of a Kinect camera and setting image transmission size and frequency; and initializing a control module of the mechanical arm, and setting command receiving and issuing frequency of the control module. The detailed steps are as follows:
step 1-1: establishing a Kinect two-dimensional graph node, a Kinect depth graph node, a demonstrator three-dimensional positioning node, a space mapping system node and a mechanical arm command node in an ROS environment;
step 1-2: setting the Kinect image acquisition frequency to 30 frames per second, the acquired image size to 640 × 480, and the mechanical arm node command issuing and receiving frequency to 30 Hz;
step 1-3: setting the serial-port baud rate of the lower-computer motion control card to 9600 (the mechanical arm adopted by the invention is a six-degree-of-freedom arm, model JS-R, with PWM-controlled motors), and initializing the postures of all joints of the mechanical arm;
step 2: as shown in figure 2, the QR code designed and set by the invention is used as the teaching object feature; Kinect acquires the two-dimensional map and depth map information of the teaching object; for each acquired frame, the spatial poses of the four known points are extracted and identified, and the PnP algorithm positions the three-dimensional pose of the teaching object's QR code feature. The detailed steps are as follows:
step 2-1: using Kinect to obtain a two-dimensional map and a depth map of the teaching object, and identifying the set QR code on the teaching object in each acquired frame;
step 2-2: taking the center points of the four black rectangles on the periphery of the identified QR code as known points, setting the world coordinate system of the vision system to the Kinect coordinate system, and relating a pixel position (u, v) in the two-dimensional image to the corresponding three-dimensional coordinate (X_c, Y_c, Z_c) of a vertex in the vision system by the pinhole model:

Z_c · [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] · [X_c, Y_c, Z_c]^T

In the invention the Kinect intrinsic parameters are f = (f_x, f_y, u_0, v_0) = (525, 525, 319.5, 239.5), and each vertex depth Z_c is obtained from the Kinect depth map.
step 2-3: from the four known points (X_wi, Y_wi, Z_wi), i = 1, 2, 3, 4, obtaining the rotation matrix R_vision and translation vector T_vision of the teaching object in the vision system, which give the pose of the teaching object (the detailed solving equations appear only as images in the source and are not reproduced here). Here T_z is the average depth of the demonstrator; since the set QR code is a symmetric rectangle, T_z is taken at the rectangle center as the QR code center depth;
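The pinhole relation of step 2-2 can be sketched directly with the stated intrinsics. The function names are ours; `backproject` recovers a camera-frame point from a pixel and its depth-map value, and `project` is its inverse.

```python
import numpy as np

# Kinect intrinsics as given in the text
FX, FY, U0, V0 = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z_c):
    """Camera-frame point from pixel (u, v) and depth Z_c, using
    Z_c * [u, v, 1]^T = K [X_c, Y_c, Z_c]^T."""
    x_c = (u - U0) * z_c / FX
    y_c = (v - V0) * z_c / FY
    return np.array([x_c, y_c, z_c])

def project(p_c):
    """Pixel coordinates of a camera-frame point (the inverse mapping)."""
    x, y, z = p_c
    return FX * x / z + U0, FY * y / z + V0
```

Back-projecting the four detected corner centers this way yields the 3D-2D correspondences that the PnP solver consumes.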
step 3: establishing the D-H model of the mechanical arm, designing the transformation of the arm end relative to the base according to forward kinematics, and establishing the spatial mapping system in combination with the spatial pose of the teaching object in the vision system. In the invention the world coordinate system of the vision system is set to the Kinect coordinate system and the world coordinate system of the mechanical arm system to the arm base coordinate system; after optimization, the arm-end pose under the spatial mapping system is set as follows:
(equation shown only as an image in the source; not reproduced here)
step 4: controlling the teaching object to perform spatial motion in front of the Kinect, positioning the teaching object's motion position and posture in real time according to step 2, and simultaneously recording the three-dimensional motion information of the teaching object during the training motion online;
step 5: as shown in figure 3, mapping the teaching object's motion to the arm-end motion according to the spatial mapping system established in step 3, calculating the motion information of each joint by inverse kinematics, and, as shown in figure 4, sending joint motion commands through the node to the lower-computer motion control card to drive the arm to follow the teaching object in real time, realizing visual real-time teaching of the mechanical arm; figures 4(a)-(d) show, in sequence, different positions of the arm following the teaching object in real time;
step 6: decomposing the training motion information recorded in step 4 into three one-dimensional motion signals on the x, y and z degrees of freedom, wherein each one-dimensional signal is a continuous series of displacement, velocity and acceleration {x_demo(t), ẋ_demo(t), ẍ_demo(t)}; the motion is discretized with a step length Δt, where t ∈ {Δt, 2Δt, …, nΔt}, the start point is x0 = x_demo(0), the end point is g = x_demo(nΔt), and the motion time constant is τ = nΔt; the single-degree-of-freedom motion characteristics are learned on each of the three degrees of freedom with the DMPS algorithm, and the learning weight sequence w_i is calculated. The detailed steps are as follows:
step 6-1: as shown in fig. 5, orthogonally decomposing the visually recorded, human-controlled teaching object motion from step 4 on the x, y and z degrees of freedom to obtain three single-dimensional motion signals; the single-dimensional motion is discretized with step length Δt = 0.05 and time constant τ = 20;
Step 6-2: respectively learning the single-dimensional motion characteristics acquired in the step 6-1 by utilizing a DMPS algorithm in three degrees of freedom, and learning the nonlinear function f of a targettarget(s) approximating a non-linear function f of the training motiondemo(s) fitting error J by least squaresiLearning to obtain a converged weight sequence wiSetting the weight number of the weight sequence as 10;
step 7: setting a new motion target and a generalization precision threshold, generalizing the new target motion characteristic from the learned weight sequence and the new target information, and, if the generalization result of some degree of freedom exceeds the precision threshold, performing local least-squares high-order polynomial fitting on the training motion characteristic of that degree of freedom to remove the disturbance and returning the optimized training motion to step 6 for relearning. The detailed steps are as follows:
step 7-1: setting a new motion target and a generalization precision threshold; the three-degree-of-freedom position of the new target differs from the original training target, and the generalization precision threshold on each degree of freedom is 5%:
(x_new, y_new, z_new) = (x_training + 0.1 m, y_training + 0.1 m, z_training + 0.1 m)
Step 7-2: as shown in fig. 5, the motion characteristics of the new target on the three degrees of freedom are generalized from the weight sequence learned in step 6-2 and the new target information of step 7-1, and the precision of the generalization result on each degree of freedom is computed as:

generalization precision = |(generalization result − generalization target) / (new target − training target)|
That is, the generalization precision is the absolute value of the absolute error of the generalization result divided by the change of the target coordinate on the corresponding degree of freedom. In the invention the generalization precisions on the three degrees of freedom are 10.87%, 5.64% and 7.66%, respectively;
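The precision definition can be written out directly (the function name and the numbers in the example are illustrative):

```python
# Sketch of the generalization-precision definition above.
def generalization_precision(generalized, new_target, training_target):
    """|(generalized result - new target) / (new target - training target)|"""
    return abs((generalized - new_target) / (new_target - training_target))

# e.g. training target 0.5 m, new target 0.6 m, generalized endpoint 0.589 m
p = generalization_precision(0.589, 0.6, 0.5)
print(f"{p:.2%}")  # about 11%, which would exceed a 5% threshold
```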
Step 7-3: the generalization precisions obtained in step 7-2 are compared with the set 5% threshold and exceed it. Taking the x degree of freedom as an example, as shown in FIG. 7, least-squares high-order polynomial fitting optimization is performed on the middle 5%–95% of the continuous teaching motion sequence {x_demo(t), ẋ_demo(t), ẍ_demo(t)} on the x degree of freedom, yielding an optimized x-degree-of-freedom motion information sequence; an 8th-order polynomial is used for the motion on the x degree of freedom;
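The local polynomial smoothing of step 7-3 can be sketched like this (an illustrative snippet, not the patent's code; the noise model and trajectory are assumptions of the example):

```python
import numpy as np

# Illustrative sketch: smooth the middle 5%-95% of a noisy one-dimensional
# teaching sequence with an 8th-order least-squares polynomial fit,
# leaving the endpoints untouched, as in step 7-3.
def local_poly_smooth(t, pos, order=8, lo=0.05, hi=0.95):
    n = len(pos)
    sl = slice(int(n * lo), int(n * hi))
    coeffs = np.polyfit(t[sl], pos[sl], order)   # least-squares fit
    smoothed = pos.copy()
    smoothed[sl] = np.polyval(coeffs, t[sl])
    return smoothed

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
truth = np.sin(t * np.pi / 2)                    # disturbance-free motion
noisy = truth + 0.01 * rng.standard_normal(400)  # teaching disturbance
clean = local_poly_smooth(t, noisy)
```

Fitting against normalized time (t in [0, 1]) keeps the 8th-order Vandermonde system well conditioned.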
Step 7-4: returning to step 6, the motion information on the x degree of freedom optimized in step 7-3 is learned again and then generalized as in step 7. The generalization result after optimization is shown in FIG. 8: the precision on the x degree of freedom is 3%, below the 5% generalization threshold, which meets the requirement;
Step 7-5: following the same optimization steps as for the x degree of freedom, the motions on the y and z degrees of freedom that do not meet the precision requirement are optimized until all motions satisfy the generalization precision;
Step 8: the three new-target one-dimensional motion characteristics on x, y and z obtained by generalization in step 7 are fitted under the same regular system to obtain the three-dimensional spatial motion information of the new target, which is then decomposed by inverse kinematics to obtain the motion information of each joint of the mechanical arm;
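The inverse-kinematics decomposition in step 8 depends on the arm's D-H model; as a stand-in, a closed-form two-link planar example shows the idea (the link lengths and elbow-down branch are illustrative assumptions, not the patent's solver):

```python
import math

# Stand-in example, not the patent's D-H based solver: closed-form inverse
# kinematics for a planar two-link arm, showing how an end-effector target
# decomposes into joint angles.
def ik_2link(x, y, l1=0.3, l2=0.25):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)  # elbow-down branch
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

# Forward-kinematics check: the joint angles reproduce the target point.
q1, q2 = ik_2link(0.4, 0.2)
fx = 0.3 * math.cos(q1) + 0.25 * math.cos(q1 + q2)
fy = 0.3 * math.sin(q1) + 0.25 * math.sin(q1 + q2)
```

A full 6-DOF arm would replace this with the D-H based inverse solution of step 3, applied sample by sample along the generalized trajectory.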
Step 9: the motion information of each joint of the mechanical arm obtained in step 8 is sent to the motion control card of the lower computer, so that the mechanical arm moves autonomously and accurately to the new target point according to the motion information.
In the above technical solution, the teaching object carries a preset QR (Quick Response, hereinafter abbreviated as QR) code, and the teaching object is identified by recognizing this preset QR code;
In the above technical solution, the depth information of the teaching object is extracted mainly from the depth map obtained by the Kinect depth camera;
In the above technical solution, the teaching object motion information and the mechanical arm command information interact in real time under ROS.
In the above technical solution, in step 1, the image size is set to 640 × 480, the image acquisition frequency is 30 frames per second, and the command issuing and receiving frequency of the mechanical arm node is set to 30 Hz;
In the above technical solution, in step 2, the teaching object is characterized as follows: the teaching object carries a preset QR code; centered on the QR code, its periphery is a white area with small black rectangles at the four corners; the overall pattern is 8 cm × 8 cm, the central QR code is a 5 cm × 5 cm rectangle, and each small black rectangle is 1.5 cm × 1.5 cm. The teaching object positioning algorithm is the PnP (Perspective-n-Point) algorithm with a positioning precision of 1 mm, and the teaching object depth information is the depth at the QR code center in the Kinect depth map;
in the above technical scheme, in step 2, the teaching object three-dimensional pose visual positioning specifically comprises the following steps:
The pose of the teaching object consists of a rotation matrix R and a translation vector T:

R = [R_1; R_2; R_3],  T = (T_x, T_y, T_z)ᵀ

where R_1, R_2, R_3 are the row vectors of R.
the teaching materials have internal parameters of f ═ f (f)x,fy,u0,v0) Perspective projective transformation under Kinect is as follows:
Figure BDA0001796269750000102
The image-plane coordinates corresponding to the pixel position (u, v) of the teaching object in the two-dimensional image are:

x = (u − u_0)/f_x,  y = (v − v_0)/f_y
For the 4 known spatial points (X_wi, Y_wi, Z_wi), i = 1, 2, 3, 4, in the world coordinate system:

(u_i − u_0)·T_z / f_x = R_1·(X_wi, Y_wi, Z_wi)ᵀ + T_x
(v_i − v_0)·T_z / f_y = R_2·(X_wi, Y_wi, Z_wi)ᵀ + T_y
Here T_z is the mean depth of the teaching object. The above 8 independent equations solve for R_1, R_2 and T; since R is an orthogonal matrix, R_3 is solved from R_1 and R_2 (R_3 = R_1 × R_2), giving the complete pose of the teaching object. The invention sets the world coordinate system of the vision system as the Kinect coordinate system, and the world coordinate system of the mapped mechanical arm system as the mechanical arm base coordinate system. The 4 known points used to solve the PnP algorithm are the center points of the four small black rectangles obtained by recognizing the preset QR code.
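A rough numpy sketch of the 8-equation linear solve above, under the stated weak-perspective assumption that all point depths are near T_z. Note the four sample points here are deliberately non-coplanar so the linear system is invertible; the patent's coplanar QR corners would call for a least-squares variant or a library PnP solver.

```python
import numpy as np

# Sketch (not the patent's code): solve R1, R2, Tx, Ty from 4 points and
# their pixels, given mean depth Tz; recover R3 by orthogonality.
def solve_pose_4pts(world_pts, pixels, fx, fy, u0, v0, Tz):
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, (P, (u, v)) in enumerate(zip(world_pts, pixels)):
        A[2 * i, 0:3] = P
        A[2 * i, 6] = 1.0                  # T_x column
        b[2 * i] = (u - u0) * Tz / fx
        A[2 * i + 1, 3:6] = P
        A[2 * i + 1, 7] = 1.0              # T_y column
        b[2 * i + 1] = (v - v0) * Tz / fy
    sol = np.linalg.solve(A, b)
    R1, R2 = sol[0:3], sol[3:6]
    R3 = np.cross(R1, R2)                   # orthogonality gives the 3rd row
    return np.vstack([R1, R2, R3]), np.array([sol[6], sol[7], Tz])

# Synthetic check with identity rotation and T = (0.02, 0.01, 1.0) m;
# intrinsics are typical illustrative Kinect values.
fx = fy = 525.0
u0, v0, Tz = 319.5, 239.5, 1.0
world = np.array([[0, 0, 0], [0.015, 0, 0], [0, 0.015, 0], [0, 0, 0.015]])
T_true = np.array([0.02, 0.01, Tz])
pix = [(u0 + fx * (p[0] + T_true[0]) / Tz, v0 + fy * (p[1] + T_true[1]) / Tz)
       for p in world]
R, T = solve_pose_4pts(world, pix, fx, fy, u0, v0, Tz)
```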
In the technical scheme, in the step 3, nodes in the conversion system are set as a Kinect and a mechanical arm base, namely the pose of the demonstrator relative to the Kinect is the pose of the mechanical arm tail end relative to the mechanical arm base;
in the above technical solution, in step 6, the learning process is specifically as follows:
the core of the dynamic motion primitive theory is to describe motion by a series of nonlinear differential equations with target points. For a single one-dimensional motion it is expressed as:
τ·v̇ = K(g − x) − D·v + (g − x_0)·f(s)

τ·ẋ = v

f(s) = (Σ_{i=1..N} ψ_i(s)·w_i·s) / (Σ_{i=1..N} ψ_i(s))

ψ_i(s) = exp(−h_i(s − c_i)²)
where τ is a time scaling factor, x_0 and g are the start and end states of the system, x and v are the current state and velocity of the system, K and D are system constants, f is the linear weighted sum of radial basis functions, ψ_i(s) is a radial basis kernel function, h_i and c_i are the bandwidth and center of the kernel function, N is the number of radial basis kernel functions, w_i are the weights of the radial basis kernel functions in the linear weighting, and s is a function of time t whose dynamics are given by:
τ·ṡ = −α·s

This system is the regular (canonical) system, where α is a preset constant (α > 0) and the initial state is s(0) = 1; s decays monotonically from the initial state toward 0.
Substituting the teaching sample into the DMPS equations yields f_demo(s):

f_demo(s) = (τ²·ẍ_demo + D·τ·ẋ_demo − K(g − x_demo)) / (g − x_0)
The learning of the motion is the process by which the nonlinear function f_target(s) of the model approximates the nonlinear function f_demo(s) of the real sample, where s is determined by the regular system; convergence of the w_i values completes the approximation f_target(s) → f_demo(s). The fitting error J is minimized by the least-squares learning method:
J = Σ_s (f_demo(s) − f_target(s))²
When J reaches J_min, the estimated weights w_i are the optimal weights of the system, estimated as:

f = T·w

w = (TᵀT)⁻¹·Tᵀ·f

w = [w_1 … w_N]ᵀ

where T is the matrix of basis-function activations over the samples.
When a new motion target point g_new is given, a new discrete motion-characteristic sequence {x_new(t), ẋ_new(t), ẍ_new(t)} can be fitted back from the optimal weights w_i; this discrete sequence shares the motion characteristics of the original sample, completing the generalization of the motion.
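The learning-then-generalization pipeline above can be sketched for a single degree of freedom as follows (a minimal illustration: K, D, α, the number of basis functions and the demonstration profile are assumptions of this example, not the patent's values):

```python
import numpy as np

# Illustrative constants, assumed for this sketch
K, D, alpha, N = 150.0, 25.0, 4.0, 10
tau, dt = 1.0, 0.01
t = np.arange(0.0, tau + dt, dt)
x0, g = 0.0, 0.5

# Demonstration: minimum-jerk profile from x0 to g
r = t / tau
x_demo = x0 + (g - x0) * (10 * r**3 - 15 * r**4 + 6 * r**5)
xd = np.gradient(x_demo, dt)
xdd = np.gradient(xd, dt)

# Regular (canonical) system s(t) = exp(-alpha * t / tau)
s = np.exp(-alpha * t / tau)

# Invert the transformation system to obtain f_demo(s)
f_demo = (tau**2 * xdd + D * tau * xd - K * (g - x_demo)) / (g - x0)

# Radial basis functions with centers spread along s
c = np.exp(-alpha * np.linspace(0.0, 1.0, N))
h = N / c**2
psi = np.exp(-h * (s[:, None] - c[None, :])**2)          # (n_samples, N)
design = s[:, None] * psi / psi.sum(axis=1, keepdims=True)
w, *_ = np.linalg.lstsq(design, f_demo, rcond=None)      # least-squares weights

# Generalize to a new goal g_new by Euler integration of the DMP
g_new = 0.6
x, v, sv = x0, 0.0, 1.0
for _ in range(int(2 * tau / dt)):                        # run past tau to settle
    p = np.exp(-h * (sv - c)**2)
    f = sv * (p @ w) / p.sum()
    v += dt * (K * (g_new - x) - D * v + (g_new - x0) * f) / tau
    x += dt * v / tau
    sv += dt * (-alpha * sv / tau)
print(round(x, 3))  # settles near g_new = 0.6
```

The endpoint sits close to g_new because the spring term K(g − x) guarantees convergence once the forcing term, gated by the decaying s, has died away.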
In the above technical solution, in step 7, the generalization threshold precision is set to 5%, the local optimization range of the training motion is set to 5%–95% of the whole motion, and the polynomial order in the least-squares high-order polynomial fitting is set to 8–15;
In the above technical solution, in step 9, the command sending frequency of the upper computer is set to 30 Hz, and the precision of the mechanical arm joint command information is set to 0.01°;
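For illustration, the command quantization and pacing implied by these settings might look like this (the constants mirror the 30 Hz and 0.01° figures above; the helper name is hypothetical):

```python
# Hypothetical helper mirroring the settings above: joint commands are
# quantized to 0.01-degree precision and sent every 1/30 s.
SEND_RATE_HZ = 30.0
ANGLE_STEP_DEG = 0.01

def quantize_command(angles_deg, step=ANGLE_STEP_DEG):
    """Round each joint angle to the command precision."""
    return [round(a / step) * step for a in angles_deg]

cmd = quantize_command([12.3456, -7.9012, 45.0])
period_s = 1.0 / SEND_RATE_HZ  # pause this long between successive commands
```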
In summary, the mechanical arm motion learning method based on visual real-time teaching and adaptive DMPS provided by the invention realizes real-time visual teaching and accurate generalization for a mechanical arm. The visual real-time teaching function has wide applicability; it not only reduces the hardware cost required for teaching but also greatly enhances the safety of the teaching process. The invention also solves the problem of poor accuracy of the final generalization result caused by disturbances in the teaching motion, can accurately plan the motion of the mechanical arm to a target point, and enhances the anti-interference capability of the whole system.

Claims (7)

1. A mechanical arm motion control method based on visual real-time teaching and adaptive DMPS, characterized in that three-dimensional pose recognition, positioning and tracking are performed by computer vision on a teaching object bearing a preset QR code feature, where the center of the QR code is the two-dimensional-code recognition part and its periphery is a rectangular frame formed by a white rectangular area with small black rectangles at the four corners; meanwhile, a space mapping system is established to map the spatial information of the teaching object to the end of the mechanical arm, so that during teaching the user indirectly controls the motion of the mechanical arm in real time by controlling the teaching object; finally, the teaching motion information is collected and recorded online, and the motion information is locally optimized and learned with the adaptive DMPS algorithm; the teaching motion information and the mechanical arm command information interact in real time under the ROS environment; the method comprises the following steps:
Step 1: building the ROS environment, creating the Kinect camera and mechanical arm information interaction nodes, correcting the parameters of the Kinect camera and setting the image transmission size and frequency; initializing the control module of the mechanical arm, and setting the mechanical arm command receiving and issuing frequency;
Step 2: setting an object bearing a preset QR code as the teaching object, identifying the preset QR code with the Kinect camera, extracting the depth information of the teaching object by combining the identification result with the corresponding depth information map, solving the spatial pose of the teaching object from the two-dimensional information and depth information of the preset QR code image with the PnP algorithm, and establishing a three-dimensional pose coordinate system at the QR code of the teaching object; the 4 known points for solving the PnP algorithm are the center points of the four small black rectangles on the periphery of the preset QR code;
Step 3: establishing the D-H model of the mechanical arm, designing the transformation system of the mechanical arm end relative to the base according to forward kinematics, and then establishing the space mapping system in combination with the spatial pose of the teaching object relative to the Kinect;
Step 4: controlling the teaching object to perform teaching motion in front of the Kinect camera, and recording online the three-dimensional motion information of the teaching object during the teaching motion;
Step 5: mapping the motion of the teaching object to the motion of the mechanical arm end according to the space mapping system established in step 3, solving the motion information of each joint of the mechanical arm by inverse kinematics, and sending the joint motion commands through the nodes to the motion control card of the lower computer to drive the mechanical arm in real time, realizing visual real-time teaching of the mechanical arm;
Step 6: decomposing the training motion information recorded in step 4 into three pieces of one-dimensional motion information on the three degrees of freedom x, y and z, wherein a single piece of one-dimensional motion information is a continuous time sequence of displacement, velocity and acceleration {x_demo(t), ẋ_demo(t), ẍ_demo(t)}; the motion information is discretized with step length Δt, where t ∈ {Δt, 2Δt, …, nΔt}, the motion start point is x_0 = x_demo(0), the end point is g = x_demo(nΔt), and the motion time constant is τ = nΔt; the single-degree-of-freedom motion characteristics are learned with the DMPS algorithm on each of the three degrees of freedom, and the learned weight sequence w_i is computed;
Step 7: setting a new motion target and a generalization precision threshold, generalizing the new target motion characteristics from the weight sequence obtained by learning and the new target information; if the generalization result of a degree of freedom exceeds the precision threshold, optimizing the training motion characteristics of that degree of freedom by local least-squares high-order polynomial fitting, and returning the optimized training motion to step 6 for relearning;
Step 8: fitting the x, y and z one-dimensional motion characteristics of the new target that meet the precision requirement, obtained in step 7, under the same regular system to obtain the three-dimensional spatial motion information of the mechanical arm for the new target, decomposing the three-dimensional spatial motion information by inverse kinematics, and calculating the motion information of each joint of the mechanical arm;
Step 9: sending the motion information of each joint of the mechanical arm obtained in step 8 to the motion control card of the lower computer, so that the mechanical arm moves autonomously and accurately to the new target point according to the motion information.
2. The mechanical arm motion control method based on visual real-time teaching and adaptive DMPS according to claim 1, wherein in step 1 the image size is set to 640 × 480, the image acquisition frequency is 30 frames per second, and the mechanical arm node command issuing and receiving frequency is set to 30 Hz.
3. The mechanical arm motion control method based on visual real-time teaching and adaptive DMPS according to claim 1, wherein in step 2 the teaching object positioning algorithm is the PnP algorithm, and the teaching object depth information is the depth of the preset QR code center in the Kinect depth map; the positioning precision is 1 mm.
4. The mechanical arm motion control method based on visual real-time teaching and adaptive DMPS according to claim 1, wherein in step 3 the connection points in the space transformation system are set to be the Kinect and the mechanical arm base, that is, the pose of the teaching object relative to the Kinect is the pose of the mechanical arm end relative to the mechanical arm base.
5. The mechanical arm motion control method based on visual real-time teaching and adaptive DMPS according to claim 1, wherein in step 7 the generalization threshold precision is set to 5%, the local optimization range of the training motion is set to 5%–95% of the whole motion, and the polynomial order in the least-squares high-order polynomial fitting is set to 8–15.
6. The mechanical arm motion control method based on visual real-time teaching and adaptive DMPS according to claim 1, wherein in step 9 the command sending frequency of the upper computer is set to 30 Hz, and the mechanical arm joint command information precision is set to 0.01°.
7. A system for using the method of any of claims 1-6, comprising:
a movable teaching object whose surface bears the preset QR code feature, which is moved by a human teacher or moves autonomously;
the Kinect camera, used to identify the QR code on the teaching object;
the upper computer, which communicates with the Kinect camera to acquire the motion information of the teaching object;
and the mechanical arm, which is provided with an end effector and communicates with the upper computer remotely or at close range so as to move following the movement of the teaching object.
CN201811057825.1A 2018-09-11 2018-09-11 Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS Active CN109108942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811057825.1A CN109108942B (en) 2018-09-11 2018-09-11 Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811057825.1A CN109108942B (en) 2018-09-11 2018-09-11 Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS

Publications (2)

Publication Number Publication Date
CN109108942A CN109108942A (en) 2019-01-01
CN109108942B true CN109108942B (en) 2021-03-02

Family

ID=64859196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811057825.1A Active CN109108942B (en) 2018-09-11 2018-09-11 Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS

Country Status (1)

Country Link
CN (1) CN109108942B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110116116A (en) * 2019-05-14 2019-08-13 中国地质大学(武汉) Robotic laser cleaning path planning system based on computer vision and method
CN110298854B (en) * 2019-05-17 2021-05-11 同济大学 Flight snake-shaped arm cooperative positioning method based on online self-adaption and monocular vision
CN110026987B (en) * 2019-05-28 2022-04-19 广东工业大学 Method, device and equipment for generating grabbing track of mechanical arm and storage medium
JP2020196060A (en) * 2019-05-31 2020-12-10 セイコーエプソン株式会社 Teaching method
CN110471281B (en) * 2019-07-30 2021-09-24 南京航空航天大学 Variable-discourse-domain fuzzy control system and control method for trajectory tracking control
CN110481029B (en) * 2019-09-05 2021-08-20 南京信息职业技术学院 Position-follow-up 3D printing warping-prevention temperature compensation system and compensation method
CN111002289B (en) * 2019-11-25 2021-08-17 华中科技大学 Robot online teaching method and device, terminal device and storage medium
CN111216124B (en) * 2019-12-02 2020-11-06 广东技术师范大学 Robot vision guiding method and device based on integration of global vision and local vision
CN111823215A (en) * 2020-06-08 2020-10-27 深圳市越疆科技有限公司 Synchronous control method and device for industrial robot
CN111890353B (en) * 2020-06-24 2022-01-11 深圳市越疆科技有限公司 Robot teaching track reproduction method and device and computer readable storage medium
CN112207835B (en) * 2020-09-18 2021-11-16 浙江大学 Method for realizing double-arm cooperative work task based on teaching learning
CN112476489B (en) * 2020-11-13 2021-10-22 哈尔滨工业大学(深圳) Flexible mechanical arm synchronous measurement method and system based on natural characteristics
CN112530267B (en) * 2020-12-17 2022-11-08 河北工业大学 Intelligent mechanical arm teaching method based on computer vision and application
CN112975975A (en) * 2021-03-02 2021-06-18 路邦康建有限公司 Robot control interface correction method and hospital clinical auxiliary robot thereof
CN113977580B (en) * 2021-10-29 2023-06-27 浙江工业大学 Mechanical arm imitation learning method based on dynamic motion primitive and self-adaptive control

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100483790B1 (en) * 2002-03-22 2005-04-20 한국과학기술연구원 Multi-degree of freedom telerobotic system for micro assembly
JP5233709B2 (en) * 2009-02-05 2013-07-10 株式会社デンソーウェーブ Robot simulation image display system
EP2366502B1 (en) * 2010-02-26 2011-11-02 Honda Research Institute Europe GmbH Robot with hand-object movement correlations for online temporal segmentation of movement tasks
CN106444738B (en) * 2016-05-24 2019-04-09 武汉科技大学 Method for planning path for mobile robot based on dynamic motion primitive learning model
CN106142092A (en) * 2016-07-26 2016-11-23 张扬 A kind of method robot being carried out teaching based on stereovision technique
CN107160364B (en) * 2017-06-07 2021-02-19 华南理工大学 Industrial robot teaching system and method based on machine vision

Also Published As

Publication number Publication date
CN109108942A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN109108942B (en) Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS
US11580724B2 (en) Virtual teach and repeat mobile manipulation system
Hudson et al. End-to-end dexterous manipulation with deliberate interactive estimation
Omrčen et al. Autonomous acquisition of pushing actions to support object grasping with a humanoid robot
Kragic et al. A framework for visual servoing
Xue et al. Gesture-and vision-based automatic grasping and flexible placement in teleoperation
Gonçalves et al. Grasp planning with incomplete knowledge about the object to be grasped
Lin et al. Cloud robotic grasping of Gaussian mixture model based on point cloud projection under occlusion
Schnaubelt et al. Autonomous assistance for versatile grasping with rescue robots
Sanchez-Lopez et al. A real-time 3D pose based visual servoing implementation for an autonomous mobile robot manipulator
Kar et al. Visual motor control of a 7 DOF robot manipulator using a fuzzy SOM network
Wang et al. Robot arm perceptive exploration based significant slam in search and rescue environment
Jafari et al. Robotic hand-eye coordination: from observation to manipulation
Huh et al. Self-supervised Wide Baseline Visual Servoing via 3D Equivariance
Fan Dexterity in robotic grasping, manipulation and assembly
Omrcen et al. Sensorimotor processes for learning object representations
Al-Shanoon Developing a mobile manipulation system to handle unknown and unstructured objects
Kothiyal Perception Based UAV Path Planning for Fruit Harvesting
Sayour et al. Research Article Autonomous Robotic Manipulation: Real-Time, Deep-Learning Approach for Grasping of Unknown Objects
Webb Belief driven autonomous manipulator pose selection for less controlled environments
Guo Collision Avoidance System for Human-Robot Collaboration
Singh et al. Grasping real objects using virtual images
Qi et al. Revolutionizing Packaging: A Robotic Bagging Pipeline with Constraint-aware Structure-of-Interest Planning
Khurana Human-Robot Collaborative Control for Inspection and Material Handling using Computer Vision and Joystick
Zhang et al. Reinforcement Learning Based End-to-end Autonomous Obstacle Avoidance for Manipulators

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant