CN112099442A - Parallel robot vision servo system and control method - Google Patents


Info

Publication number
CN112099442A
Authority
CN
China
Prior art keywords
target object
image
parallel robot
camera
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010951451.9A
Other languages
Chinese (zh)
Inventor
李冰 (Li Bing)
刘程 (Liu Cheng)
李佳帅 (Li Jiashuai)
陈坤杰 (Chen Kunjie)
师兆辰 (Shi Zhaochen)
马恒涛 (Ma Hengtao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin North Rice Technology Co ltd
Harbin Engineering University
Original Assignee
Harbin North Rice Technology Co ltd
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin North Rice Technology Co ltd, Harbin Engineering University
Priority to CN202010951451.9A
Publication of CN112099442A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/18 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/414 - Structure of the control system, e.g. common controller or multiprocessor systems, interface to servo, programmable interface controller
    • G05B19/4142 - Structure of the control system, characterised by the use of a microprocessor
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/30 - Nc systems
    • G05B2219/34 - Director, elements to supervisory
    • G05B2219/34013 - Servocontroller

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a parallel robot vision servo system and control method. The upper end of a parallel robot (2) is mounted on a frame above a conveyor (1); a fixed camera (3) is mounted on the frame, and a moving camera (4) is mounted at the lower end of the parallel robot (2). A target object is carried into the field of view of the fixed camera (3) by the conveyor (1); the fixed camera (3) sends the captured images of the moving camera (4) and of the target object to be grasped to a controller (5), and the controller (5) performs visual servo control on the collected images. The invention guarantees a sufficient camera field of view, prevents visual servo failure caused by occlusion, and, because the controller switches in real time, avoids visual servo failure caused by the time lag of switching.

Description

Parallel robot vision servo system and control method
Technical Field
The invention relates to a parallel robot vision servo system and a control method, belonging to the technical field of machine vision.
Background
With the rapid development of China's manufacturing industry, industries such as food, medicine, electronics and light industry have an ever-growing demand for parallel robots capable of rapid sorting, packaging and inspection; replacing manual operation with such robots greatly improves production efficiency.
In traditional machine-vision positioning, the camera sends the parallel robot a coordinate at only a single instant, and positioning is completed by calculation with a conveyor-belt encoder. The camera and the parallel robot work independently in an open-loop system, so external disturbances easily introduce errors and cause positioning failure. Visual servoing solves these problems: the target position is obtained in real time as feedback, so failure due to external disturbance can be avoided; it is therefore introduced for parallel-robot control. However, a camera mounted at the end of the parallel robot obtains accurate target information but only a small imaging space, which severely limits the robot's working range and lowers its efficiency; conversely, fixed-camera visual servoing guarantees the working range, but the target information is less accurate, and positioning fails when the moving robot occludes the target.
Disclosure of Invention
In view of the above prior art, the technical problem to be solved by the present invention is to provide a parallel robot vision servo system and control method that guarantee a sufficient camera field of view and prevent visual servo failure caused by occlusion.
To solve this problem, in the parallel robot vision servo system of the invention, the upper end of a parallel robot 2 is mounted on a frame above a conveyor 1; a fixed camera 3 is arranged on the frame, and a moving camera 4 is mounted at the lower end of the parallel robot 2. A target object is carried into the field of view of the fixed camera 3 by the conveyor 1; the fixed camera 3 sends the captured images of the moving camera 4 and of the target object to be grasped to a controller 5, and the controller 5 performs visual servo control on the collected images.
The invention also provides a control method using the parallel robot vision servo system, comprising the following steps:
Step 1: the target object to be grasped is carried into the field of view of the fixed camera by the conveyor, and the fixed camera sends the captured images of the moving camera and of the target object to the controller;
Step 2: the controller segments the received image based on the fusion of colour and edge information to obtain the contour moments of the moving camera and of the target object, and from the contour moments obtains the centroid positions of the moving camera and of the target object to be grasped;
Step 3: a visual servo algorithm is derived from the Gauss-Newton method and the Levenberg-Marquardt algorithm; the operating joint angle of the parallel robot is computed with this algorithm, and the controller outputs a control signal to the parallel robot according to the obtained joint angle, moving the robot so that the centroid of the moving camera approaches the centroid of the target object to be grasped;
Step 4: the controller reads the image of the target object captured by the moving camera and, using a hybrid visual servo method, computes the homography matrix between the current target-object image and the desired image (the desired image is captured beforehand by placing the target object directly under the moving camera); decomposing the homography matrix yields the rotation matrix and translation matrix corresponding to the rotational and translational motion of the lower end of the parallel robot, and the controller then outputs rotation and translation control signals to the robot so that its centroid keeps approaching the centroid of the target object until the two are concentric;
Step 5: the height Z of the target object is computed by imaging geometry from the pictures taken by the fixed camera, and the controller outputs the obtained height Z signal to the parallel robot;
Step 6: before the parallel robot drives its end-effector to grasp the target according to the height Z signal, the controller reads the target-object image output by the moving camera for judgment: if the target object and the moving camera are concentric, the object is grasped; otherwise steps 4 to 6 are repeated. If the target leaves the field of view of the moving camera, steps 3 to 6 are repeated.
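The grasp-and-servo loop of steps 1 to 6 can be sketched in a few lines of Python. This is an illustrative stand-in, not the patent's implementation: the damped proportional update, the tolerance values and all function names are assumptions, and image acquisition and robot kinematics are abstracted away.

```python
import math

def centroid_distance(a, b):
    """Pixel distance between two (x, y) centroids."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def servo_to_target(cam, target, gain=0.5, tol=1e-3, max_iter=200):
    """Drive the moving-camera centroid toward the target centroid.

    A stand-in for one visual-servo correction per camera frame: each
    cycle moves a fraction `gain` of the remaining image-space error.
    """
    cam = list(cam)
    for _ in range(max_iter):
        if centroid_distance(cam, target) < tol:
            return cam, True       # concentric: ready to grasp (step 6)
        cam[0] += gain * (target[0] - cam[0])
        cam[1] += gain * (target[1] - cam[1])
    return cam, False              # did not converge / target escaped
```

When the loop returns `True`, the method proceeds to the height computation and grasp; otherwise the earlier servo stage is re-entered, mirroring the retry logic of step 6.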
The invention also includes:
1. The visual servo algorithm of step 3, derived from the Gauss-Newton method and the Levenberg-Marquardt algorithm, computes the joint angle of the parallel robot as follows:
First step: on the imaging plane of the fixed camera, let e(t) denote the position of the target object as a function of time t, and e(q) denote the position of the end of the parallel robot as a function of the robot joint angle q; the error function between the two is defined as: f(q, t) = e(q) - e(t);
Second step: an uncalibrated visual servo strategy for the eye-to-hand system formed by the parallel robot and the moving camera is derived from the principle of nonlinear variance minimization. The variance-minimization function F(q, t) of the error function is defined as:
F(q, t) = (1/2) f(q, t)^T f(q, t)
F(q, t) is then discretized into a sequence of points (q, t). Denoting a sampling instant by k (k = 1, 2, ...), the point at time k is (q_k, t_k), and a Taylor-series expansion about (q_k, t_k) gives:
F(q_{k+1}, t_{k+1}) ≈ F(q_k, t_k) + (∂F/∂q)^T Δq + (∂F/∂t) Δt + (1/2) Δq^T (∂²F/∂q²) Δq
Third step: setting the first derivative of F(q_{k+1}, t_{k+1}) with respect to q at q_k to 0, neglecting the higher-order derivatives, and modifying the Taylor expansion with the Levenberg-Marquardt algorithm yields the joint-angle expression of the parallel robot at time k+1:
q_{k+1} = q_k - α_k (J_k^T J_k + v_k I)^{-1} J_k^T (f_k + (∂f_k/∂t) Δt)
where:
q_k ∈ R^n, R being the real numbers and n the number of robot joint angles;
α_k: scale factor, usually taken from the confidence interval of the current system;
J_k: image Jacobian matrix, obtained from the image, relating the target-object position (time t) to the robot joint angle q;
v_k: scale (damping) factor, v_k > 0;
f_k: deviation input, f_k = f(q_k, t_k);
Δt: the sampling period, i.e. the interval between times k and k+1.
Fourth step: the image Jacobian matrix J_k in the joint-angle expression at time k+1 is estimated by the dynamic Broyden method. Defining the first-order Taylor-series affine model of the error function f(q, t) as m(q, t), neglecting the higher-order derivative terms and applying recursive least squares (RLS) to improve the stability of the control system, the estimated image Jacobian matrix is finally obtained as:
Ĵ_k = Ĵ_{k-1} + (Δf - Ĵ_{k-1} Δq) Δq^T P_{k-1} / (λ + Δq^T P_{k-1} Δq)
where
P_k = (1/λ) [P_{k-1} - P_{k-1} Δq Δq^T P_{k-1} / (λ + Δq^T P_{k-1} Δq)]
Here q is the joint angle of the parallel robot; the initial value P_0 is chosen according to P_0 = (D^T D)^{-1}, and P_1, P_2, ..., P_k are then computed iteratively;
Δf = f_k - f_{k-1};
Δq = q_k - q_{k-1};
Δt = t_k - t_{k-1};
λ: forgetting factor, 0 < λ ≤ 1;
J_0 = 0_{m×n} is the zero matrix, m being the dimension of the end-effector position coordinates of the parallel robot and n the number of robot joint angles.
Fifth step: substituting the image Jacobian matrix estimated in the fourth step for J_k in the third step yields the joint angle q_{k+1} of the parallel robot.
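The third-step update can be sketched numerically, assuming the standard Levenberg-Marquardt form q_{k+1} = q_k - α_k (J_k^T J_k + v_k I)^{-1} J_k^T f_k. The target-motion feedforward term (∂f/∂t)Δt is omitted here for brevity, and the function name and default values are illustrative, not from the patent.

```python
import numpy as np

def lm_joint_step(q_k, J_k, f_k, alpha_k=1.0, v_k=0.1):
    """One damped Gauss-Newton (Levenberg-Marquardt) joint-angle update.

    q_k: current joint angles (n,), J_k: image Jacobian (m, n),
    f_k: image-space error (m,), v_k > 0 damps the step.
    """
    n = q_k.shape[0]
    A = J_k.T @ J_k + v_k * np.eye(n)          # damped normal matrix
    return q_k - alpha_k * np.linalg.solve(A, J_k.T @ f_k)
```

As v_k tends to 0 this reduces to the pure Gauss-Newton step; a larger v_k shortens the step toward gradient descent, which is the usual Levenberg-Marquardt trade-off.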
2. The hybrid visual servo computation of step 4, namely the homography matrix between the current target-object image and the desired image and its decomposition into the rotation matrix and translation matrix corresponding to the rotational and translational motion of the lower end of the parallel robot, proceeds as follows:
First, feature points are extracted from the whole image containing the current target object using Visual Studio software and the FAST algorithm;
Second, the motion (i.e. the optical flow) of each feature point extracted in the previous step, from its pixel position in the current frame to its pixel position in the next frame, is computed by the LK sparse optical-flow method;
Third, the correct positions of the feature points in the next frame are screened by checking the brightness consistency of the optical flow, completing feature-point tracking between two adjacent frames and giving the pixel coordinates of each feature-point pair, i.e. two mutually corresponding feature points in the current image and the next image;
Fourth, at least 4 feature-point pairs are selected to compute the homography matrix between two frames; the homography matrix between the current target-object image and the desired image is then computed from the transfer property of homographies, by frame-by-frame multiplication;
Fifth, the homography matrix H is decomposed by singular value decomposition:
H = d* R + p n*^T
where d* is the distance from the moving camera to the conveyor plane, R is the rotation matrix between the current image and the desired image of the target object, p is the translation vector between them, and n* is the unit normal vector of the target plane;
Sixth, the parallel robot is controlled to rotate according to the rotation matrix R and to translate according to the translation matrix p, decoupling rotation control from translation control, until the centroid of the target object is concentric with that of the moving camera.
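The relation H = d*R + p n*^T can be checked numerically. The sketch below composes H from a known rotation about the optical axis, a translation p and a plane normal n*, then recovers R when d*, p and n* are known, illustrating why rotation and translation control decouple once H is decomposed. It is a consistency check of the formula under assumed values, not the full SVD-based decomposition.

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta about the camera's optical axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def compose_homography(d, R, p, n):
    """H = d*R + p n^T  (d: camera-to-plane distance, n: unit plane normal)."""
    return d * R + np.outer(p, n)
```

Given d*, p and n*, the rotation follows by simple algebra as R = (H - p n*^T) / d*; recovering all factors from H alone requires the SVD-based procedure cited in the description.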
Compared with the prior art, the invention has the following beneficial effects:
Good accuracy: visual servo control is performed throughout the whole process, so failure caused by the target leaving the field of view or being occluded is avoided.
High stability: the controller switches in real time, avoiding visual servo failure caused by the time lag of switching.
Wide applicability: the method acquires three-dimensional information of the target, is not limited to a single type of target, can handle various targets, and therefore has a wider application range.
Drawings
FIG. 1 is a schematic diagram of a parallel robot vision servo control device according to the present invention.
Detailed Description
The embodiments of the present invention are further described below with reference to the drawings.
The invention comprises a parallel robot, a fixed camera, a moving camera and a controller. The parallel robot is mounted above a conveyor on a frame; the fixed camera is mounted on the frame; the moving camera is mounted at the end of the parallel robot; and the controller is installed outside the parallel robot. Based on this system, the fixed camera first controls the parallel robot through image-based visual servoing so that the target object enters the field of view of the moving camera; the moving camera then completes the control through hybrid visual servoing so that the target object and the moving camera become concentric; finally, the two cameras form a binocular vision pair to obtain the height of the target object, and the parallel robot is controlled to complete the grasp. The method has good accuracy and high stability.
The invention solves the technical problems by the following technical scheme:
a parallel robot visual servo control method comprises the following steps:
Step one: the parallel robot is mounted above the conveyor on a frame, the fixed camera is mounted on the frame, the moving camera is mounted at the end of the parallel robot, and the controller is installed outside the parallel robot;
Step two: the target object to be grasped is carried into the field of view of the fixed camera by the conveyor; the fixed camera sends the captured images of the moving camera and of the target object to the controller, and the controller runs the image-based visual servo control process, specifically:
(1) the controller segments the received image based on the fusion of colour and edge information to obtain the contour moments of the moving camera and of the target object, and from the contour moments obtains the centroid positions of the moving camera and of the target object to be grasped;
(2) a visual servo algorithm is derived from the Gauss-Newton method and the Levenberg-Marquardt algorithm; the operating joint angle of the parallel robot is computed with this algorithm, and the controller outputs a control signal to the controller of the parallel robot according to the obtained joint angle, moving the robot so that the centroid of the moving camera approaches the centroid of the target object to be grasped;
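Sub-step (1) above reduces to the standard moment formulas: the centroid of a segmented region is (m10/m00, m01/m00). A pure-Python sketch on a binary mask follows; in practice a library routine such as OpenCV's moments would be used, and the mask here stands in for the colour/edge segmentation result.

```python
def raw_moments(mask):
    """Raw moments m00, m10, m01 of a binary mask given as rows of 0/1."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            m00 += v          # area
            m10 += v * x      # first moment in x
            m01 += v * y      # first moment in y
    return m00, m10, m01

def centroid(mask):
    """Centroid (x, y) of the region: (m10/m00, m01/m00)."""
    m00, m10, m01 = raw_moments(mask)
    return m10 / m00, m01 / m00
```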
Step three: the controller reads the image of the target object captured by the moving camera, then computes, by the hybrid visual servo method, the homography matrix between the current target-object image and the desired image captured beforehand by placing the target object directly under the moving camera; decomposing this homography matrix yields the rotation and translation matrices corresponding to the rotational and translational motion of the end of the parallel robot, and rotation and translation control signals are then output to the controller of the parallel robot so that its centroid keeps approaching the centroid of the target object until the two are concentric;
Step four: the height Z of the target object is computed from the picture taken by the fixed camera based on imaging geometry, and the controller outputs the obtained height Z signal to the controller of the parallel robot;
Step five: before the controller of the parallel robot drives the end-effector to grasp the target according to the height Z signal, the controller reads the target-object image output by the moving camera: if the target object and the moving camera are concentric, the object is grasped; otherwise steps three to five are repeated. If the target leaves the field of view of the moving camera, steps two to five are repeated.
Referring to FIG. 1, in the parallel robot vision servo system of the present invention, the upper end of the parallel robot 2 is mounted on a frame above the conveyor 1; the fixed camera 3 is arranged on the frame; the moving camera 4 is mounted at the lower end of the parallel robot 2; the target object is carried into the field of view of the fixed camera 3 by the conveyor 1; the fixed camera 3 sends the images of the moving camera 4 and of the target object to be grasped to the controller 5, and the controller 5 performs visual servo control on the collected images.
Using this parallel robot vision servo system, the visual servo control method comprises the following steps:
firstly, a parallel robot 2 is arranged above a conveying device 1 through a rack, a fixed camera 3 is arranged on the rack, a movable camera 4 is arranged at the tail end of the parallel robot 2, and a controller 5 is arranged outside the parallel robot.
Step two, the target object to be grabbed enters the visual field of the fixed camera 3 under the driving of the conveying device 1, the fixed camera 3 sends the shot images of the movable camera 4 and the target object to be grabbed to the controller 5, and the controller 5 runs the visual servo control process based on the images, which specifically comprises the following steps:
(1) The controller 5 may use existing software such as Visual Studio to segment the received image based on the fusion of colour and edge information, obtaining the contour moments of the moving camera 4 and of the target object, and from the contour moments the centroid positions of the moving camera 4 and of the target object to be grasped.
(2) A visual servo algorithm is derived from the Gauss-Newton method (see Piepmeier J A, McMurray G V, Lipkin H. A dynamic quasi-Newton method for uncalibrated visual servoing [C] // Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on. IEEE, 1999, 2: 1595-1600) and the Levenberg-Marquardt algorithm, and the operating joint angle of the parallel robot is computed as follows:
First step: on the imaging plane of the fixed camera 3, let e(t) denote the position of the target object as a function of time t and e(q) the position of the end of the parallel robot as a function of the robot joint angle q; the error function between the two is defined as:
f(q, t) = e(q) - e(t)
Second step: an uncalibrated visual servo strategy for the eye-to-hand system formed by the parallel robot 2 and the moving camera 4 is derived from the principle of nonlinear variance minimization. The variance-minimization function F(q, t) of the error function is defined as:
F(q, t) = (1/2) f(q, t)^T f(q, t)
F(q, t) is discretized into a sequence of points (q, t). Denoting a sampling instant by k (k = 1, 2, ...), the point at time k is (q_k, t_k), and a Taylor-series expansion about (q_k, t_k) gives:
F(q_{k+1}, t_{k+1}) ≈ F(q_k, t_k) + (∂F/∂q)^T Δq + (∂F/∂t) Δt + (1/2) Δq^T (∂²F/∂q²) Δq
Third step: setting the first derivative of F(q_{k+1}, t_{k+1}) with respect to q at q_k to 0, neglecting the higher-order derivatives, and modifying the Taylor expansion with the Levenberg-Marquardt algorithm yields the joint-angle expression of the parallel robot at time k+1:
q_{k+1} = q_k - α_k (J_k^T J_k + v_k I)^{-1} J_k^T (f_k + (∂f_k/∂t) Δt)
where:
q_k ∈ R^n, R being the real numbers and n the number of robot joint angles;
α_k: scale factor, usually taken from the confidence interval of the current system;
J_k: image Jacobian matrix, obtained from the image, relating the target-object position (time t) to the robot joint angle q;
v_k: scale (damping) factor, v_k > 0;
f_k: deviation input, f_k = f(q_k, t_k);
Δt: the sampling period, i.e. the interval between times k and k+1.
Fourth step: the image Jacobian matrix J_k in the joint-angle expression at time k+1 is estimated by the dynamic Broyden method. Defining the first-order Taylor-series affine model of the error function f(q, t) as m(q, t), neglecting the higher-order derivative terms and applying recursive least squares (RLS) to improve the stability of the control system, the estimated image Jacobian matrix is finally obtained as:
Ĵ_k = Ĵ_{k-1} + (Δf - Ĵ_{k-1} Δq) Δq^T P_{k-1} / (λ + Δq^T P_{k-1} Δq)
where
P_k = (1/λ) [P_{k-1} - P_{k-1} Δq Δq^T P_{k-1} / (λ + Δq^T P_{k-1} Δq)]
Here q is the joint angle of the parallel robot; the initial value P_0 is chosen according to P_0 = (D^T D)^{-1}, and P_1, P_2, ..., P_k are then computed iteratively;
Δf = f_k - f_{k-1};
Δq = q_k - q_{k-1};
Δt = t_k - t_{k-1};
λ: forgetting factor, 0 < λ ≤ 1;
J_0 = 0_{m×n} is the zero matrix, m being the dimension of the end-effector position coordinates of the parallel robot and n the number of robot joint angles.
Fifth step: substituting the image Jacobian matrix estimated in the fourth step for J_k in the third step yields the joint angle q_{k+1} of the parallel robot; the controller 5 outputs a control signal to the controller of the parallel robot 2 according to the obtained joint angle, moving the parallel robot 2 until the target object to be grasped enters the field of view of the moving camera 4.
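The fourth-step estimator amounts to one recursive update per control cycle. The sketch below assumes the standard dynamic-Broyden/RLS recursion of Piepmeier et al. with forgetting factor λ (the moving-target term (∂f/∂t)Δt is omitted for brevity); the function name and the test Jacobian are illustrative.

```python
import numpy as np

def broyden_rls_step(J_prev, P_prev, dq, df, lam=1.0):
    """One dynamic-Broyden / RLS update of the image-Jacobian estimate.

    J_new = J + (df - J dq) (P dq)^T / (lam + dq^T P dq)
    P_new = (P - (P dq)(P dq)^T / (lam + dq^T P dq)) / lam
    """
    Pq = P_prev @ dq
    denom = lam + dq @ Pq
    J_new = J_prev + np.outer(df - J_prev @ dq, Pq) / denom
    P_new = (P_prev - np.outer(Pq, Pq) / denom) / lam
    return J_new, P_new
```

Starting from J_0 = 0 and P_0 = I, repeated updates along sufficiently exciting joint increments drive the estimate toward the true mapping between joint and image increments.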
Step three: the controller 5 reads the image of the target object to be grasped captured by the moving camera 4, then uses the hybrid visual servo method (see Malis E, Chaumette F, Boudet S. 2-1/2-D visual servoing [J]. IEEE Transactions on Robotics and Automation, 1999, 15(2): 238-250) to compute the homography matrix between the current target-object image (in which the target object lies in the field of view of the moving camera 4 but not at its centre) and the desired image (in which the target object lies at the centre of the field of view of the moving camera 4, captured beforehand by placing the target object directly under the moving camera 4). Decomposing the homography matrix yields the rotation matrix and translation matrix corresponding to the rotational and translational motion of the end of the parallel robot; rotation and translation control signals are then output to the controller of the parallel robot so that its centroid keeps approaching the centroid of the target object to be grasped until the two are concentric.
The homography matrix between the current image and the desired image is computed, and decomposed into the rotation matrix and translation matrix, as follows:
First, feature points, i.e. pixels differing sufficiently from enough of the pixels in their surrounding neighbourhood, are extracted from the whole image containing the current target object using existing software such as Visual Studio and the FAST algorithm;
Second, the motion (i.e. the optical flow) of each extracted feature point from its pixel position in the current frame to its position in the next frame is computed by the LK sparse optical-flow method;
Third, the correct positions of the feature points in the next frame are screened by checking the brightness consistency of the optical flow, completing feature-point tracking between adjacent frames; in this way the pixel coordinates of each feature-point pair, i.e. two mutually corresponding feature points in the current and next images, are obtained efficiently and quickly;
Fourth, at least 4 feature-point pairs are selected to compute the homography matrix between two frames; the homography matrix between the current target-object image and the desired image is then obtained from the transfer property of homographies, by frame-by-frame multiplication;
Fifth, the homography matrix H is decomposed by singular value decomposition:
H = d* R + p n*^T
where d* is the distance from the moving camera 4 to the plane of the conveyor 1, R is the rotation matrix between the current image and the desired image of the target object, p is the translation vector between them, and n* is the unit normal vector of the target plane.
Sixth, the rotation matrix R and the translation matrix p obtained by decomposing H control the rotation and translation of the parallel robot respectively, decoupling rotation control from translation control, until the centroid of the target object is concentric with that of the moving camera 4.
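The fourth sub-step (a homography from at least 4 feature-point pairs, chained frame by frame) can be sketched with the standard direct linear transform (DLT). This is a generic DLT offered as an assumption of how the computation could be done, not the patent's exact routine; the point pairs would come from the FAST and LK tracking described above.

```python
import numpy as np

def homography_dlt(src, dst):
    """Least-squares 3x3 H (normalised so H[2,2] = 1) mapping src -> dst.

    Each correspondence (x, y) -> (u, v) contributes two rows of the DLT
    system A h = 0; the solution is the right singular vector of A with
    the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

The transfer property then chains frame-to-frame estimates by multiplication, e.g. H_02 = H_12 @ H_01, which is the "frame-by-frame multiplication" used to reach the desired image.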
Step four: since the moving camera 4 is concentric with the target object, their plane coordinates coincide; the height Z of the target object is then computed from the picture taken by the fixed camera 3 based on imaging geometry, and the controller outputs the obtained height Z signal to the controller of the parallel robot.
conversion of world and image coordinate systems:
Figure BDA0002677083860000091
wherein [ u, v,1 ]]tIs the coordinates under the coordinate system of the target object image, [ X, Y, Z,1 ]]TThe matrix M is the product of the parameter matrix and the transformation matrix in the fixed camera 3, which is the coordinate of the target object in the world coordinate system, that is:
M=K[C|T]
in the formula, K is an intra-camera parameter matrix, [ C | T ] is a conversion matrix, C is a rotation matrix, and T is a translation matrix, wherein C and T are both obtained by a Zhang-friend calibration method.
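A numerical sketch of the projection model s [u, v, 1]^T = M [X, Y, Z, 1]^T with M = K [C | T]: given a pixel from the fixed camera and the plane coordinates (X, Y), known once the moving camera is concentric with the target, the height Z follows from one linear equation. The intrinsics and pose below are illustrative values, not calibrated ones.

```python
import numpy as np

def projection_matrix(K, C, T):
    """M = K [C | T], mapping homogeneous world points to the image."""
    return K @ np.hstack([C, T.reshape(3, 1)])

def project(M, X):
    """Pixel (u, v) of world point X = (X, Y, Z)."""
    x = M @ np.append(X, 1.0)
    return x[:2] / x[2]

def solve_height(M, u, v, X, Y):
    """Solve s [u, v, 1]^T = M [X, Y, Z, 1]^T for Z with (X, Y) known.

    Eliminating the scale s between the first and third projection rows
    gives one linear equation a . [X, Y, Z, 1] = 0 with a = M[0] - u*M[2].
    """
    a = M[0] - u * M[2]
    return -(a[0] * X + a[1] * Y + a[3]) / a[2]
```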
Step five: before the controller of the parallel robot drives the end-effector to grasp the target according to the height Z signal, the controller 5 reads the target-object image signal output by the moving camera 4; if the target object and the moving camera 4 are concentric, the object is grasped, otherwise steps three to five are repeated. If the object leaves the field of view of the moving camera 4, steps two to five are repeated.

Claims (4)

1. A parallel robot vision servo system, characterized in that: the upper end of the parallel robot (2) is mounted on a frame above the conveyor (1); the fixed camera (3) is arranged on the frame; the moving camera (4) is mounted at the lower end of the parallel robot (2); the target object enters the field of view of the fixed camera (3) driven by the conveyor (1); the fixed camera (3) sends the captured images of the moving camera (4) and of the target object to be grasped to the controller (5); and the controller (5) performs visual servo control on the collected images.
2. A control method using the parallel robot vision servo system of claim 1, comprising the steps of:
Step 1: the target object to be grabbed enters the field of view of the fixed camera under the drive of the conveying device, and the fixed camera sends the captured images of the mobile camera and of the target object to be grabbed to the controller;
Step 2: the controller segments the received image by fusing color and edge information to obtain the contour moments of the mobile camera and of the target object, and obtains from the contour moments the centroid position of the mobile camera and the centroid position of the target object to be grabbed;
Step 3: a visual servo algorithm is derived from the Gauss-Newton method and the Levenberg-Marquardt algorithm; the joint angles of the parallel robot during operation are calculated by the visual servo algorithm, and the controller outputs control signals to the parallel robot according to the obtained joint angles, controlling the parallel robot to move so that the centroid position of the mobile camera approaches the centroid position of the target object to be grabbed;
Step 4: the controller reads the image of the target object to be grabbed captured by the mobile camera, calculates the homography matrix between the current target object image and a desired image by a hybrid visual servo method, and decomposes the homography matrix to obtain the rotation matrix and the translation matrix corresponding to the rotational and translational motion of the lower end of the parallel robot, the desired image being an image of the target object captured in advance by the mobile camera with the target object placed directly beneath it; the controller then outputs rotational and translational motion control signals to the parallel robot so that the centroid of the parallel robot keeps approaching the centroid position of the target object to be grabbed until the two are concentric;
Step 5: the height Z of the target object is calculated by imaging geometry from the pictures captured by the fixed camera, and the controller outputs the obtained height Z signal to the parallel robot;
Step 6: before the parallel robot controls the tail end to grab the target according to the read height Z signal, the controller reads the image signal of the target object output by the mobile camera for judgment; when the target object and the mobile camera are concentric the object is grabbed, otherwise steps 4 to 6 are repeated; if the target has left the field of view of the mobile camera, steps 3 to 6 are repeated.
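The centroid computation of step 2 reduces to the zeroth and first image moments, cx = M10/M00, cy = M01/M00. A minimal sketch on a binary segmentation mask (the function name and test mask are illustrative; in practice cv2.moments on the extracted contour yields the same M00, M10, M01):

```python
import numpy as np

def centroid_from_moments(mask):
    """Centroid (cx, cy) of a binary mask from its image moments:
    cx = M10 / M00, cy = M01 / M00."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size                      # zeroth moment: number of foreground pixels
    if m00 == 0:
        return None                    # nothing was segmented
    return xs.sum() / m00, ys.sum() / m00   # (M10/M00, M01/M00)

# a 3x3 blob whose centroid is at pixel (2.0, 1.0)
mask = np.zeros((4, 6), dtype=np.uint8)
mask[0:3, 1:4] = 1
```

The same formula is applied twice in step 2: once to the mobile-camera blob and once to the target-object blob.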
3. The control method according to claim 2 using the parallel robot vision servo system of claim 1, characterized in that the visual servo algorithm of step 3, derived from the Gauss-Newton method and the Levenberg-Marquardt algorithm, calculates the joint angles of the parallel robot specifically as follows:
In the first step, on the imaging plane of the fixed camera, the position of the target object is represented by a function e(t) of time t, the position of the tail end of the parallel robot is represented by a function e(q) of the robot joint angle q, and the error function between the two is defined as: f(q, t) = e(q) − e(t);
In the second step, an uncalibrated visual servo strategy for the eye-in-hand system consisting of the parallel robot and the mobile camera is derived from the nonlinear variance minimization principle, and the variance minimization function F(q, t) of the error function is defined as:

F(q, t) = (1/2) f(q, t)^T f(q, t)

F(q, t) is then discretized into a sequence of points (q, t); if a given instant is denoted k (k = 1, 2, …), the point at instant k is (qk, tk), and a Taylor series expansion about (qk, tk) gives:

F(qk+1, tk+1) = F(qk, tk) + (∂F/∂q)|k^T (qk+1 − qk) + (∂F/∂t)|k (tk+1 − tk) + higher-order terms
In the third step, the first derivative of F(qk+1, tk+1) with respect to q is set to zero at qk to minimize it, higher-order derivatives are neglected, and the Taylor expansion is modified by the Levenberg-Marquardt algorithm to obtain the joint-angle expression of the parallel robot at instant k + 1:

qk+1 = qk − αk (Jk^T Jk + vk I)^(−1) Jk^T (fk + (∂f/∂t) Δt)

in the formula:
qk ∈ R^n, where R is the set of real numbers and n is the number of robot joint angles;
αk — scale factor, usually taken from the confidence interval of the current system;
Jk — image Jacobian matrix, obtained from the image, relating the target object position over time t to the robot joint angle q;
vk — scale factor, vk > 0;
fk — deviation input, fk = f(qk, tk);
Δt — sampling period, i.e. the interval between instants k and k + 1.
In the fourth step, the image Jacobian matrix Jk in the joint-angle expression at instant k + 1 is estimated by the dynamic Broyden method: a first-order Taylor series affine model m(q, t) of the error function f(q, t) is defined, higher-order derivative terms are neglected, and recursive least squares (RLS) is used to improve the stability of the control system, finally yielding the estimated image Jacobian matrix:

Ĵk = Ĵk−1 + (Δf − Ĵk−1 Δq − (∂f/∂t) Δt) Δq^T Pk−1 / (λ + Δq^T Pk−1 Δq)

in the formula, the matrix Pk is updated recursively as:

Pk = (1/λ) [Pk−1 − Pk−1 Δq Δq^T Pk−1 / (λ + Δq^T Pk−1 Δq)]

wherein q is the joint angle of the parallel robot; the initial value P0 = (D^T D)^(−1) is selected, after which P1, P2, …, PK are calculated iteratively;
Δf = fk − fk−1;
Δq = qk − qk−1;
Δt = tk − tk−1;
λ — forgetting factor, 0 < λ ≤ 1;
Ĵ0 = 0m×n is the zero matrix, where m is the dimension of the end-position coordinates of the parallel robot and n is the number of robot joint angles.
In the fifth step, the image Jacobian matrix estimated in the fourth step is substituted into the third step in place of Jk to obtain the joint angle qk+1 of the parallel robot.
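The third to fifth steps of claim 3 amount to a damped Gauss-Newton update driven by an RLS-estimated image Jacobian. A minimal numpy sketch under simplifying assumptions (a static target, so the ∂f/∂t·Δt terms are dropped; function names and default gains are illustrative, not from the patent):

```python
import numpy as np

def lm_step(q, J, f, alpha=1.0, v=1e-3):
    """One joint-angle update q_{k+1} = q_k - alpha (J^T J + v I)^{-1} J^T f.
    v > 0 is the Levenberg-Marquardt damping, alpha the step scale."""
    dq = np.linalg.solve(J.T @ J + v * np.eye(q.size), J.T @ f)
    return q - alpha * dq

def broyden_rls(J, P, dq, df, lam=0.95):
    """Dynamic-Broyden / RLS refresh of the image Jacobian estimate
    with forgetting factor lam, 0 < lam <= 1."""
    denom = lam + dq @ P @ dq
    J_new = J + np.outer(df - J @ dq, dq @ P) / denom
    P_new = (P - np.outer(P @ dq, dq @ P) / denom) / lam
    return J_new, P_new
```

Each control cycle would call broyden_rls with the latest joint and feature increments, then lm_step with the refreshed Jacobian, matching the substitution described in the fifth step.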
4. The control method according to claim 2 or 3 using the parallel robot vision servo system of claim 1, characterized in that calculating the homography matrix between the current target object image and the desired image by the hybrid visual servo method in step 4, and decomposing it to obtain the rotation matrix and the translation matrix corresponding to the rotational and translational motion of the lower end of the parallel robot, specifically comprises:
firstly, extracting feature points from the whole image containing the current target object using Visual Studio software and the FAST algorithm;
secondly, calculating, based on the LK sparse optical flow method, the motion (i.e. the optical flow) of the feature points extracted in the previous step from their pixel positions in the current frame of the target object image to their positions in the next frame;
thirdly, screening out the correct positions of the feature points in the next frame by checking the brightness of the optical flow, thereby tracking the feature points between two adjacent frames and obtaining the pixel coordinates of feature point pairs, each pair consisting of two mutually corresponding feature points in the current image and the next frame;
fourthly, selecting at least 4 feature point pairs to calculate the homography matrix between two frames, then calculating the homography matrix between the current target object image and the desired target object image based on the transfer property of homographies, completing the calculation by frame-by-frame multiplication;
fifthly, decomposing the homography matrix H based on singular values to obtain:
H = d·R + p·n^T
wherein d is the distance from the mobile camera to the plane of the conveying device, R is the rotation matrix between the current image and the desired image of the target object, p is the translation vector between the current image and the desired image, and n is the unit normal vector of the target plane;
and sixthly, controlling the parallel robot to rotate according to the rotation matrix R and to translate according to the translation vector p, decoupling the rotation control from the translation control, until the centroid of the target object is concentric with the centroid of the mobile camera.
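The frame-by-frame multiplication in the fourth step of claim 4 relies on the transfer property of homographies. A minimal sketch (the function name is illustrative; in practice each per-frame matrix would come from e.g. cv2.findHomography on the tracked feature point pairs):

```python
import numpy as np

def chain_homographies(frame_to_frame):
    """Homography from the first image to the last one via the transfer
    property: multiply the per-frame homographies in temporal order."""
    H = np.eye(3)
    for Hk in frame_to_frame:
        H = Hk @ H
        H = H / H[2, 2]     # usual projective scale normalisation
    return H

# two pure pixel translations: (+1, 0) then (0, +2) compose to (+1, +2)
H1 = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])
H2 = np.array([[1.0, 0, 0], [0, 1, 2], [0, 0, 1]])
```

The accumulated matrix is then the H that the fifth step decomposes into d, R, p and n.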
CN202010951451.9A 2020-09-11 2020-09-11 Parallel robot vision servo system and control method Pending CN112099442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010951451.9A CN112099442A (en) 2020-09-11 2020-09-11 Parallel robot vision servo system and control method

Publications (1)

Publication Number Publication Date
CN112099442A true CN112099442A (en) 2020-12-18

Family

ID=73750880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010951451.9A Pending CN112099442A (en) 2020-09-11 2020-09-11 Parallel robot vision servo system and control method

Country Status (1)

Country Link
CN (1) CN112099442A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109848987A (en) * 2019-01-22 2019-06-07 天津大学 A kind of parallel robot Visual servoing control method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114609976A (en) * 2022-04-12 2022-06-10 天津航天机电设备研究所 Non-calibration visual servo control method based on homography and Q learning
CN114986250A (en) * 2022-05-23 2022-09-02 西门子(中国)有限公司 Stepping control method and stepping control system of battery pole piece cutting equipment
CN114986250B (en) * 2022-05-23 2024-05-03 西门子(中国)有限公司 Stepping control method and stepping control system of battery pole piece cutting equipment
CN114946403A (en) * 2022-07-06 2022-08-30 青岛科技大学 Tea picking robot based on calibration-free visual servo and tea picking control method thereof

Similar Documents

Publication Publication Date Title
CN112099442A (en) Parallel robot vision servo system and control method
EP3011362B1 (en) Systems and methods for tracking location of movable target object
Allen et al. Real-time visual servoing
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
Stavnitzky et al. Multiple camera model-based 3-D visual servo
Ryberg et al. Stereo vision for path correction in off-line programmed robot welding
CN109848987B (en) Parallel robot vision servo control method
Liu et al. Fast eye-in-hand 3-D scanner-robot calibration for low stitching errors
CN109664317B (en) Object grabbing system and method of robot
CN112766328B (en) Intelligent robot depth image construction method fusing laser radar, binocular camera and ToF depth camera data
WO2022000713A1 (en) Augmented reality self-positioning method based on aviation assembly
CN112894812A (en) Visual servo trajectory tracking control method and system for mechanical arm
CN100417952C (en) Vision servo system and method for automatic leakage detection platform for sealed radioactive source
CN112518748A (en) Automatic grabbing method and system of vision mechanical arm for moving object
CN114536346B (en) Mechanical arm accurate path planning method based on man-machine cooperation and visual detection
Natarajan et al. Robust stereo-vision based 3D modelling of real-world objects for assistive robotic applications
CN116872216B (en) Robot vision servo operation method based on finite time control
CN114067210A (en) Mobile robot intelligent grabbing method based on monocular vision guidance
CN114140534A (en) Combined calibration method for laser radar and camera
CN113706628A (en) Intelligent transfer robot cooperation system and method for processing characteristic image by using same
EP4101604A1 (en) System and method for improving accuracy of 3d eye-to-hand coordination of a robotic system
Özgür et al. High speed parallel kinematic manipulator state estimation from legs observation
CN114842079A (en) Device and method for measuring pose of prefabricated intermediate wall in shield tunnel
Popescu et al. Real-time assembly fault detection using image analysis for industrial assembly line
Kheng et al. Stereo vision with 3D coordinates for robot arm application guide

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination