CN104460505A - Industrial robot relative pose estimation method - Google Patents


Info

Publication number
CN104460505A
CN104460505A (application CN201410634619.8A)
Authority
CN
China
Prior art keywords
profile
industrial robot
image
pose estimation
estimation method
Prior art date
Legal status
Pending
Application number
CN201410634619.8A
Other languages
Chinese (zh)
Inventor
曲道奎
徐方
姜楠
李邦宇
李燊
张涛
Current Assignee
Shenyang Siasun Robot and Automation Co Ltd
Original Assignee
Shenyang Siasun Robot and Automation Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Siasun Robot and Automation Co Ltd filed Critical Shenyang Siasun Robot and Automation Co Ltd
Priority to CN201410634619.8A priority Critical patent/CN104460505A/en
Publication of CN104460505A publication Critical patent/CN104460505A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/18Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form

Abstract

The invention relates to the technical field of computer vision, and in particular to an industrial robot relative pose estimation method based on an artificial marker. The method comprises the steps of: binarizing the original image by adaptive thresholding to obtain a binary image; extracting boundary contours from the binary image to obtain the contours of all foreground targets; performing shape and topological-structure analysis on the contours to obtain the outer contour region of the marker; performing least-squares ellipse fitting on the outer contours; computing five relative degrees of freedom through circular deformation analysis; and analyzing the character T with image moments to obtain the yaw angle, yielding the three-dimensional pose of the camera relative to the artificial marker. The method achieves three-dimensional pose estimation under monocular vision based on an artificial marker: the depth information at the marker can be obtained with a monocular camera and used for vision guidance of an industrial robot, avoiding the cost of purchasing a binocular camera; at the same time the algorithm is efficient, requires no large amount of computing resources, and is easy to implement.

Description

Industrial robot relative pose estimation method
Technical field
The present invention relates to the technical field of computer vision, and in particular to an industrial robot relative pose estimation method based on an artificial marker.
Background technology
When an industrial robot performs assembly operations, it is usually guided in one of two ways. The first is teaching-based: its repeatability is very high and it requires little specialized skill, but the position of every workpiece to be assembled must remain unchanged, so it lacks flexibility. The second obtains part or all of the pose information of the workpiece through external sensors and then guides the robot's motion accordingly.
When only partial pose information is needed, a QR code or marker is usually employed, and image processing yields its two-dimensional center and rotation angle; the height, however, must still be provided by teaching, so this method cannot handle workpieces of variable height.
When the full three-dimensional pose is needed, the prior art usually reconstructs the scene in three dimensions with binocular stereo vision and then derives the pose of the target object. Stereo vision can handle most scenes, but the reconstructed depth map may contain regions where depth is lost, leaving the computed object pose incomplete; moreover, binocular scene reconstruction consumes considerable computing resources.
Another way to recover three-dimensional information is structured light, for example three-dimensional laser scanning, which captures scene depth well but makes target identification more difficult.
Summary of the invention
The present invention aims to overcome the defects of existing visual recognition techniques in obtaining three-dimensional information, and provides an industrial robot relative pose estimation method based on an artificial marker.
To achieve the above object, the present invention adopts the following technical solutions.
The invention provides an industrial robot relative pose estimation method, comprising:
S1, binarizing the original image by adaptive thresholding to obtain a binary image;
S2, extracting boundary contours from the binary image to obtain the contours of all foreground targets;
S3, performing shape and topological-structure analysis on the contours to obtain the outer contour region of the marker;
S4, performing least-squares ellipse fitting on the outer contour;
S5, computing five relative degrees of freedom through circular deformation analysis;
S6, analyzing the character T with image moments to obtain the yaw angle, thereby obtaining the three-dimensional pose of the camera relative to the artificial marker.
In some embodiments, the adaptive thresholding compares each pixel of the original image with the mean of the pixels in its s × s neighborhood; if the current pixel value falls below the mean by more than a threshold percentage, the pixel is set to 0, otherwise to 255.
In some embodiments, the threshold percentage is 15%.
In some embodiments, the boundary contour extraction raster-scans the whole image until a starting point satisfying the border-following condition is met; a label number is assigned to the newly found border, and the parent border of the current border is determined from the most recently encountered border on record; a border-following algorithm then completes the search and labeling of the border according to the border-point marking rules. After border following is finished, the raster scan resumes until the bottom-right of the image is reached.
In some embodiments, a contour is accepted as the marker when: its size lies within a threshold range; the current outer contour contains a hole contour, and that hole contour in turn has a child contour; the current outer contour is nearly circular; and the radius ratio between the current outer contour and its child contour is consistent.
In some embodiments, the closer the current contour is to a circle, the closer its roundness o is to 1; the roundness threshold is set to 0.8.
The beneficial effects of the present invention are: three-dimensional pose estimation under monocular vision is achieved based on an artificial marker, and the depth information at the marker can be obtained with a monocular camera and used for vision guidance of an industrial robot, saving the cost of purchasing a binocular camera; at the same time the algorithm is efficient, needs no large amount of computing resources, and is simple to implement. Compared with the prior art, the method saves cost and computing resources while obtaining fairly accurate three-dimensional pose information.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the artificial marker according to an embodiment of the invention.
Fig. 2 is a schematic diagram of the integral image according to an embodiment of the invention.
Fig. 3 is a schematic diagram of border following according to an embodiment of the invention.
Fig. 4 is a schematic diagram of the coordinate systems according to an embodiment of the invention.
Embodiment
To make the object, technical solutions and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and specific embodiments. It should be understood that the specific embodiments described here only serve to explain the invention and are not to be construed as limiting it.
The artificial marker of the present invention is manufactured as shown in Fig. 1: it consists of a character T inside two concentric rings, the two rings serving to eliminate the ambiguous solution. The marker is placed at a fixed position on the assembly case, ensuring that its relative position to the workpiece slot is constant and known; the image captured by the camera is then processed to perform the relative pose estimation.
The invention provides an industrial robot relative pose estimation method, comprising:
S1, binarizing the original image by adaptive thresholding to obtain a binary image;
S2, extracting boundary contours from the binary image to obtain the contours of all foreground targets;
S3, performing shape and topological-structure analysis on the contours to obtain the outer contour region of the marker;
S4, performing least-squares ellipse fitting on the outer contour;
S5, computing five relative degrees of freedom through circular deformation analysis;
S6, analyzing the character T with image moments to obtain the yaw angle, thereby obtaining the three-dimensional pose of the camera relative to the artificial marker.
Specifically, adaptive thresholding is applied to the original image to obtain a binary image. Image thresholding classifies the pixels of an image into two classes according to some feature (for example, pixel intensity), finally forming a binary image.
Because illumination in real environments is often uneven, and severe shadows or occlusions may occur, a fixed threshold usually fails to extract the desired foreground object; the embodiment of the present invention therefore adopts an adaptive thresholding method with a variable threshold, which effectively overcomes these problems.
When the pixel sums of many overlapping rectangular regions of an image must be computed, an integral image is usually employed: each region sum then takes only constant-time additions and subtractions, saving considerable computation. The adaptive thresholding algorithm of this embodiment uses the integral image, which is described below.
Let f(x, y) be the intensity of the original image at the point (x, y); then the value of the integral image at the corresponding point (x, y) is I(x, y), defined as follows:
I(x,y)=f(x,y)+I(x-1,y)+I(x,y-1)-I(x-1,y-1) (1)
As shown in Fig. 2, once the integral image has been built, the pixel sum over a region D in the image, with known top-left corner (x_1, y_1) and bottom-right corner (x_2, y_2), is computed as follows:
Σ_{x=x_1..x_2} Σ_{y=y_1..y_2} f(x, y) = I(x_2, y_2) - I(x_2, y_1-1) - I(x_1-1, y_2) + I(x_1-1, y_1-1) (2)
In the present invention, the basic idea of adaptive thresholding is to compare each pixel of the image with the mean of the pixels in its s × s neighborhood: if the current pixel value is below the mean by more than a certain percentage (15% in the embodiment of the present invention), the pixel is set to 0, otherwise to 255. This strategy preserves regions of strong contrast while reducing sensitivity to regions where the gradient changes slowly.
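The thresholding rule and the integral-image window sum of equations 1-2 can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code; the function name and the clamping of windows at the image border are assumptions.

```python
import numpy as np

def adaptive_threshold(img, s=15, t=0.15):
    """Binarize a 2-D uint8 image: a pixel becomes 0 when it is more than
    a fraction t (15%) below the mean of its s x s neighborhood, else 255."""
    h, w = img.shape
    # Integral image with a zero border so I[y, x] = sum of img[:y, :x] (eq. 1)
    I = np.zeros((h + 1, w + 1), dtype=np.int64)
    I[1:, 1:] = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    half = s // 2
    out = np.empty_like(img)
    for y in range(h):
        y1, y2 = max(0, y - half), min(h, y + half + 1)
        for x in range(w):
            x1, x2 = max(0, x - half), min(w, x + half + 1)
            # Constant-time window sum via four lookups (eq. 2)
            total = I[y2, x2] - I[y1, x2] - I[y2, x1] + I[y1, x1]
            count = (y2 - y1) * (x2 - x1)
            # pixel < mean * (1 - t)  <=>  pixel * count < total * (1 - t)
            out[y, x] = 0 if img[y, x] * count < total * (1.0 - t) else 255
    return out
```

Working in integer window sums (rather than dividing to get a mean) keeps the comparison exact.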
Boundary contours are then extracted from the binary image to obtain the contours of all foreground targets. That is, after thresholding, the original image becomes a binary image with distinct connected regions and corresponding contour information; a border-following algorithm extracts the borders of the binary image and analyzes their topological structure, in preparation for the subsequent marker detection.
In a binary image, the frame of the image constitutes its background. Suppose the pixel intensities in the binary image are 0 and 1; pixels of value 0 and 1 are called 0-pixels and 1-pixels, and without loss of generality the background is filled with 0-pixels. Connected regions formed by 1-pixels and by 0-pixels are called 1-components and 0-components respectively. If a 0-component contains the frame of the image, it is called the background; otherwise it is a hole.
To describe the topological relations between borders in a binary image, the following definitions are given.
(1) Border point: under 4-connectivity, if a 1-pixel (i, j) has a neighbor (p, q) that is a 0-pixel, then (i, j) is called a border point. Equivalently, if (i, j) belongs to a 1-component S_1 and (p, q) to a 0-component S_2, the border point (i, j) lies between S_1 and S_2.
(2) Outer border and hole border: an outer border is the border of a 1-component directly surrounded by a 0-component; correspondingly, a hole border is the border of a 0-component (hole) directly surrounded by a 1-component.
Here "directly surrounds" means the two components are in a surrounding relation and share border points.
(3) Parent border: the parent of the outer border between a 1-component S_1 and the 0-component S_2 surrounding it is defined as follows: if S_2 is a hole, the parent is the hole border formed by S_2 and the 1-component directly surrounding S_2; if S_2 is the background, the parent is the special hole border formed by the image frame. The parent of the hole border between a hole S_3 and the 1-component S_4 directly surrounding it is the outer border formed by S_4 and the 0-component directly surrounding S_4.
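The border-point test of definition (1) can be checked directly on a binary grid. A minimal sketch (function name and grid representation are assumptions; pixels outside the image are treated as background, consistent with the frame being background):

```python
def boundary_points(grid):
    """Return the set of 1-pixels that have at least one 4-neighbor equal
    to 0, i.e. the border points of definition (1). `grid` is a list of
    lists of 0/1 values; positions outside the image count as 0."""
    h, w = len(grid), len(grid[0])
    pts = set()
    for i in range(h):
        for j in range(w):
            if grid[i][j] != 1:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or grid[ni][nj] == 0:
                    pts.add((i, j))
                    break
    return pts
```

The full border-following step additionally orders these points into labeled borders and records parent relations per Table 1; this sketch only shows the membership test.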
The basic idea of the algorithm is as follows: raster-scan the whole image until a starting point satisfying the border-following condition is met, as shown in Fig. 3, where (a) is the starting point (i, j) of an outer border and (b) the starting point (i, j) of a hole border. Assign a label number to the newly found border, and determine the parent of the current border from the most recently encountered border on record, according to the rules in Table 1, where B is the newly found border and B' is the most recently encountered border. The classical border-following algorithm then completes the search and labeling of the border according to the border-point marking rules. After border following is finished, the raster scan resumes until the bottom-right of the image is reached.
Table 1: rules for determining the parent border of contour B
Through the above analysis of the binary image, its contours can be obtained quickly and reliably; shape and topological-structure analysis is then applied to the contours to obtain the outer contour region of the marker. In the embodiment of the present invention, the marker has distinct geometric features:
(1) the size of the contour lies within a set threshold range;
(2) the current outer contour must contain a hole contour, and that hole contour must in turn have a child contour;
(3) the current outer contour is checked for near-circularity, by the following formula:
o = 4πA / u^2 (3)
where A is the image area enclosed by the current contour, u is the length of the contour, and the roundness o is a number between 0 and 1. The closer the contour is to a circle, the closer o is to 1; the roundness threshold is obtained by experiment, and the final threshold on o is set to 0.8.
(4) the radius ratio between the current outer contour and its child contour is computed by the following formula:
c_r = A_inner / A_outer (4)
where A_outer and A_inner are the areas of the current outer contour and of its child contour respectively. Because the radii of the marker's inner and outer rings are fixed constants, c_r does not change with the viewing angle of the camera and fluctuates around a constant value. Analyzing the contours with the above strategy ensures that the marker is detected in a natural scene without false detections.
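Filters (3) and (4) reduce to two scalar tests. A minimal sketch follows; the acceptance band for c_r (r_lo, r_hi) is an assumed example, since the patent only states that c_r fluctuates around a constant fixed by the ring radii.

```python
import math

def circularity(area, perimeter):
    """Roundness o = 4*pi*A / u^2 from eq. (3); equals 1.0 for a perfect circle."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def passes_marker_filter(outer_area, outer_perim, inner_area,
                         o_min=0.8, r_lo=0.3, r_hi=0.7):
    """Sketch of filters (3)-(4): the outer contour must be nearly circular
    (o >= 0.8, the patent's experimental threshold) and the inner/outer
    area ratio c_r of eq. (4) must stay inside an assumed band around the
    constant value fixed by the marker's ring radii."""
    o = circularity(outer_area, outer_perim)
    c_r = inner_area / outer_area
    return o >= o_min and r_lo <= c_r <= r_hi
```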
Least-squares ellipse fitting is then performed on the outer contour. Once the marker has been detected and identified, the set of pixel coordinates on its outer ring is available. Owing to perspective projection, the originally circular shape becomes an ellipse; let its general equation be as follows:
Ax^2 + 2Bxy + Cy^2 + 2Dx + 2Ey = 1 (5)
To complete the five-degree-of-freedom relative pose estimation from this deformation, the parameters are solved first; written in matrix form this gives equation 6, whose detailed expansion is shown in equation 7.
Φx=b (6)
| x_1^2  2x_1y_1  y_1^2  2x_1  2y_1 |   | A |   | 1 |
| x_2^2  2x_2y_2  y_2^2  2x_2  2y_2 |   | B |   | 1 |
|  ...     ...     ...    ...   ... | · | C | = | ... |
| x_n^2  2x_ny_n  y_n^2  2x_n  2y_n |   | D |   | 1 |
                                        | E |         (7)
where A, B, C, D, E are the five ellipse parameters to be solved. Since the number of points on the ellipse in the image is large, the system is overdetermined; least-squares fitting is usually adopted to solve it, corresponding to the following minimization problem:
δ=min||Φx-b|| (8)
where δ is the residual. This embodiment solves the problem using singular value decomposition (SVD).
Singular values of a matrix: let Φ ∈ C^{m×n} and k = min(m, n), and let the eigenvalues of the Hermitian positive semidefinite matrix Φ^H Φ be λ_1 ≥ λ_2 ≥ ... ≥ λ_k ≥ 0. The nonnegative real numbers σ_i(Φ) are called the singular values of Φ:
σ_i(Φ) = sqrt(λ_i), i = 1, 2, ..., k (9)
Matrix singular value decomposition theorem: let Φ ∈ C^{m×n} with rank(Φ) = r; then there exist an m-th order unitary matrix U and an n-th order unitary matrix V such that
Φ = U [Σ, 0; 0, 0] V^H (10)
where Σ = diag(σ_1, σ_2, ..., σ_r) and the σ_i are the non-zero singular values of Φ.
The solution of the above least-squares ellipse-fitting problem is therefore
x = V Σ^{-1} U^T b (11)
where Φ = U Σ V^H and Σ^{-1} is formed by inverting all the non-zero singular values (i.e., the pseudo-inverse).
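The fit of equations 5-11 can be sketched as follows; this is an illustrative NumPy implementation (the function name is an assumption), and the explicit SVD pseudo-inverse reproduces equation 11 — `np.linalg.lstsq` would give the same result.

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Least-squares fit of A x^2 + 2B xy + C y^2 + 2D x + 2E y = 1 (eq. 5)
    by solving the overdetermined system Phi x = b (eqs. 6-8) with the SVD
    pseudo-inverse x = V Sigma^{-1} U^T b (eq. 11)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # Design matrix Phi, one row per contour point (eq. 7)
    Phi = np.column_stack([xs**2, 2*xs*ys, ys**2, 2*xs, 2*ys])
    b = np.ones(len(xs))
    U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    x = Vt.T @ np.diag(1.0 / s) @ U.T @ b   # pseudo-inverse solution
    return x  # (A, B, C, D, E)
```

For an axis-aligned ellipse x^2/a^2 + y^2/b^2 = 1, the fit should recover A = 1/a^2, C = 1/b^2 and B = D = E = 0.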
In practice, many man-made objects have circular surface structures. When such an object is imaged by a camera, the viewing angle between camera and object causes the circle to appear as an ellipse in the image under perspective; the present invention completes the five-degree-of-freedom relative pose estimation by analyzing this deformation of the circular marker. That is, five relative degrees of freedom are computed through circular deformation analysis.
As shown in Fig. 4, let the origin of the camera coordinate system be the intersection of the camera sensor chip and the optical axis, with the optical axis as the Z axis. In the two-dimensional plane, the general equation of the ellipse is as follows:
A x_c^2 + 2B x_c y_c + C y_c^2 + 2D x_c + 2E y_c + F = 0 (12)
For convenience of derivation, the corresponding matrix form is shown below:
[x_c, y_c, 1] · [A, B, D; B, C, E; D, E, F] · [x_c; y_c; 1] = 0 (13)
Suppose now that straight light rays pass simultaneously through the camera origin and the ellipse in the image plane; the rays form an oblique elliptic cone. Let f be the focal length of the camera; under the camera coordinate system the image plane is then the plane z = f. In the present invention the focal length is always regarded as a known quantity, obtained beforehand by intrinsic parameter calibration, so a point on this oblique elliptic cone is expressed by the following formula:
P = k(x_c, y_c, f)^T (14)
where k is a scale factor describing the distance of the point P from the camera origin. From equations 13 and 14, the oblique elliptic cone is expressed as follows:
P^T Q P = 0 (15)
where Q is given by:
Q = [A, B, D/f; B, C, E/f; D/f, E/f, F/f^2] (16)
Next, define a supporting-plane coordinate system: its origin is at the camera origin, but its Z axis is the downward unit normal vector of the circle's plane in the real scene. Let z_0 be the Z coordinate of the points of the supporting plane; the circle in that plane is then expressed as follows:
(x - x_0)^2 + (y - y_0)^2 = r^2, z = z_0 (17)
where (x_0, y_0, z_0) is the center of the marker and r is the radius of its outer ring. Similarly, the following defines the oblique cone:
P_c^T Q_c P_c = 0 (18)
where Q_c is given by:
Q_c = [1, 0, -x_0/z_0; 0, 1, -y_0/z_0; -x_0/z_0, -y_0/z_0, (x_0^2 + y_0^2 - r^2)/z_0^2] (19)
The two coordinate systems above share the same origin, so a rotation relates them. Since the oblique cone and the oblique elliptic cone are the same cone surface, there exists a rotation matrix R_c that transforms P_c into P:
P = R_c P_c (20)
For any k ≠ 0, k·Q_c describes the same cone as Q_c; from equations 15, 18 and 20 the following is obtained:
k R_c^T Q R_c = Q_c (21)
To determine R_c and Q_c, and thus obtain the unit normal of the supporting plane and the coordinates of the circle center in the scene, Q is first diagonalized. Let λ_1, λ_2, λ_3 be the eigenvalues of the matrix Q, and v_1 = (v_1x, v_1y, v_1z)^T, v_2 = (v_2x, v_2y, v_2z)^T, v_3 = (v_3x, v_3y, v_3z)^T the corresponding eigenvectors; then Q can be expressed as follows:
Q = V Λ V^T (22)
By the Schur theorem, since Q is a real symmetric matrix and hence normal, Λ is a diagonal matrix whose diagonal entries are the eigenvalues:
Λ = diag{λ_1, λ_2, λ_3}, V = [v_1 v_2 v_3] (23)
Substituting equation 22 into equation 21 and rearranging gives
k R^T Λ R = Q_c (24)
where the matrix R is:
R = [r_1x, r_2x, r_3x; r_1y, r_2y, r_3y; r_1z, r_2z, r_3z] = V^T R_c (25)
Expanding equation 24 according to equation 25 yields:
λ_1(r_1x^2 - r_2x^2) + λ_2(r_1y^2 - r_2y^2) + λ_3(r_1z^2 - r_2z^2) = 0, λ_1 r_1x r_2x + λ_2 r_1y r_2y + λ_3 r_1z r_2z = 0 (26)
If the current figure is a conic section, two of its three eigenvalues are positive and one is negative; accordingly, the following is assumed:
λ_1 λ_2 > 0, λ_1 λ_3 < 0, |λ_1| ≥ |λ_2| (27)
Simplifying equation 26 and using R^T R = I, the matrix R is obtained as follows:
R = [g cos α, S_1 g sin α, S_2 h; sin α, -S_1 cos α, 0; S_1 S_2 h cos α, S_2 h sin α, -S_1 g] (28)
where α is a free variable and S_1, S_2 are undetermined signs, with
g = sqrt((λ_2 - λ_3)/(λ_1 - λ_3)), h = sqrt((λ_1 - λ_2)/(λ_1 - λ_3)) (29)
Substituting equation 28 into equation 24 and then determining the parameter k gives:
x_0/z_0 = S_2 sqrt((λ_1 - λ_2)(λ_2 - λ_3)) cos α / λ_2
y_0/z_0 = S_1 S_2 sqrt((λ_1 - λ_2)(λ_2 - λ_3)) sin α / λ_2
r^2/z_0^2 = -λ_1 λ_3 / λ_2^2
k = λ_2 (30)
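Under the eigenvalue ordering of equation 27, the depth ratio in equation 30 depends only on the spectrum of Q. A minimal sketch, assuming a non-degenerate conic (no zero eigenvalue); the function name is an assumption:

```python
import numpy as np

def circle_depth_ratio(Q):
    """Order the eigenvalues of the oblique-cone matrix Q (eq. 16) as in
    eq. (27): lambda1*lambda2 > 0, lambda1*lambda3 < 0, |lambda1| >= |lambda2|.
    Return r^2 / z0^2 = -lambda1*lambda3 / lambda2^2 from eq. (30)."""
    lam = np.linalg.eigvalsh(Q)
    # Two eigenvalues share a sign, the third has the opposite sign.
    pos = sorted([l for l in lam if l > 0], key=abs, reverse=True)
    neg = sorted([l for l in lam if l < 0], key=abs, reverse=True)
    if len(pos) == 2:
        l1, l2, l3 = pos[0], pos[1], neg[0]
    else:
        l1, l2, l3 = neg[0], neg[1], pos[0]
    return -l1 * l3 / l2 ** 2
```

For a fronto-parallel, axis-centered circle, Q_c = diag(1, 1, -r^2/z_0^2) by equation 19 with x_0 = y_0 = 0, so the returned ratio should be exactly (r/z_0)^2.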
As shown in Fig. 4, the z-axis unit normal N of the marker coordinate system and the position C of its origin, both described in the camera coordinate system, are obtained by the following formulas:
C = z_0 V R [x_0/z_0; y_0/z_0; 1] = z_0 V [S_2 (λ_3/λ_2) sqrt((λ_1 - λ_2)/(λ_1 - λ_3)); 0; -S_1 (λ_1/λ_2) sqrt((λ_2 - λ_3)/(λ_1 - λ_3))]
N = V R [0; 0; 1] = V [S_2 sqrt((λ_1 - λ_2)/(λ_1 - λ_3)); 0; -S_1 sqrt((λ_2 - λ_3)/(λ_1 - λ_3))] (31)
where S_1 and S_2 are sign variables still to be determined. Equation 31 yields two solutions for N and C; solving for both rings of the marker gives four groups of solutions, and the two closest groups are taken as the final solution.
After the marker has been recognized, the image position of the character T inside it is obtained; analyzing the character T with image moments gives the angle between its orientation and the camera x-axis, which guides the rotation of the robot. An image moment is a particular weighted average of the pixel intensities of an image, commonly used to describe geometric attributes of objects in the image, such as area, centroid, inertia tensor and orientation. That is, the character T is analyzed with image moments to obtain the yaw angle, and thus the three-dimensional pose of the camera relative to the artificial marker.
For a two-dimensional continuous function f(x, y), its moment of order p+q is defined as follows:
m_{p,q} = ∫∫ x^p y^q f(x, y) dx dy (32)
For a discrete digital image I(x, y), its image moment of order p+q is defined as follows:
m_{p,q} = Σ_x Σ_y x^p y^q I(x, y) (33)
According to the above formula, for (p, q) = (0, 0) the zeroth-order moment describes the area of the traversed (x, y) region, and the centroid of the object described by these moments is obtained by the following formula:
x_c = m_{1,0}/m_{0,0}, y_c = m_{0,1}/m_{0,0} (34)
However, the description given by these raw image moments changes as the object's position in the image changes. To obtain position-invariant moments, the central moments are introduced, defined as follows:
u_{p,q} = Σ_x Σ_y (x - x_c)^p (y - y_c)^q I(x, y) (35)
The first- and second-order central moments can be obtained from the raw moments by the following formula:
u_{p,q} = m_{p,q} - m_{0,0} (m_{1,0}/m_{0,0})^p (m_{0,1}/m_{0,0})^q (36)
Further, the commonly used central moments up to third order are computed from the raw moments as follows:
u_{1,0} = u_{0,1} = 0
u_{2,0} = m_{2,0} - m_{0,0} x_c^2
u_{1,1} = m_{1,1} - m_{0,0} x_c y_c
u_{0,2} = m_{0,2} - m_{0,0} y_c^2
u_{3,0} = m_{3,0} - 3 m_{2,0} x_c + 2 m_{0,0} x_c^3
u_{2,1} = m_{2,1} - m_{2,0} y_c - 2 m_{1,1} x_c + 2 m_{0,0} x_c^2 y_c
u_{1,2} = m_{1,2} - m_{0,2} x_c - 2 m_{1,1} y_c + 2 m_{0,0} x_c y_c^2
u_{0,3} = m_{0,3} - 3 m_{0,2} y_c + 2 m_{0,0} y_c^3 (37)
The orientation of the character T in the marker is defined as the angle between its principal axis and the image x-axis, the principal axis of an object in an image being the direction of its greatest extent; it is computed by the following formula:
θ = (1/2) arctan(2 u_{1,1} / (u_{2,0} - u_{0,2})) (38)
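The centroid of equation 34 and the orientation of equation 38 can be sketched for a set of unit-weight pixels; this illustrative function (its name is an assumption) uses atan2 in place of arctan so that the quadrant is resolved.

```python
import math

def orientation(points):
    """Centroid (eq. 34) and principal-axis angle
    theta = 0.5 * atan2(2*u11, u20 - u02) (eq. 38) of a set of
    unit-weight pixels, via the central moments of eq. (35)."""
    m00 = len(points)                                # zeroth-order moment
    xc = sum(x for x, _ in points) / m00             # eq. 34
    yc = sum(y for _, y in points) / m00
    u20 = sum((x - xc) ** 2 for x, _ in points)      # eq. 35 / 37
    u02 = sum((y - yc) ** 2 for _, y in points)
    u11 = sum((x - xc) * (y - yc) for x, y in points)
    return (xc, yc), 0.5 * math.atan2(2 * u11, u20 - u02)
```

A horizontal bar of pixels should yield θ = 0, and a 45° diagonal should yield θ = π/4.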
The present invention thus achieves three-dimensional pose estimation under monocular vision based on an artificial marker: the depth information at the marker is obtained with a monocular camera and used for vision guidance of an industrial robot, saving the cost of purchasing a binocular camera; at the same time the algorithm is efficient, needs no large amount of computing resources, and is simple to implement. Compared with the prior art, the method saves cost and computing resources while obtaining fairly accurate three-dimensional pose information.
The above are specific embodiments of the present invention and do not limit its scope of protection. Any corresponding changes and variations made according to the technical concept of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (6)

1. An industrial robot relative pose estimation method, characterized by comprising:
S1, binarizing the original image by adaptive thresholding to obtain a binary image;
S2, extracting boundary contours from the binary image to obtain the contours of all foreground targets;
S3, performing shape and topological-structure analysis on the contours to obtain the outer contour region of the marker;
S4, performing least-squares ellipse fitting on the outer contour;
S5, computing five relative degrees of freedom through circular deformation analysis;
S6, analyzing the character T with image moments to obtain the yaw angle, thereby obtaining the three-dimensional pose of the camera relative to the artificial marker.
2. The industrial robot relative pose estimation method of claim 1, characterized in that the adaptive thresholding compares each pixel of the original image with the mean of the pixels in its s × s neighborhood; if the current pixel value falls below the mean by more than a threshold percentage, the pixel is set to 0, otherwise to 255.
3. The industrial robot relative pose estimation method of claim 2, characterized in that the threshold percentage is 15%.
4. The industrial robot relative pose estimation method of claim 1, characterized in that the boundary contour extraction raster-scans the whole image until a starting point satisfying the border-following condition is met; a label number is assigned to the newly found border, and the parent border of the current border is determined from the most recently encountered border on record; a border-following algorithm completes the search and labeling of the border according to the border-point marking rules; after border following is finished, the raster scan resumes until the bottom-right of the image is reached.
5. The industrial robot relative pose estimation method of claim 1, characterized in that a contour is accepted as the marker when: its size lies within a threshold range; the current outer contour contains a hole contour, and that hole contour in turn has a child contour; the current outer contour is nearly circular; and the radius ratio between the current outer contour and its child contour is consistent.
6. The industrial robot relative pose estimation method of claim 5, characterized in that the closer the current contour is to a circle, the closer its roundness o is to 1, and the roundness threshold is set to 0.8.
CN201410634619.8A 2014-11-12 2014-11-12 Industrial robot relative pose estimation method Pending CN104460505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410634619.8A CN104460505A (en) 2014-11-12 2014-11-12 Industrial robot relative pose estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410634619.8A CN104460505A (en) 2014-11-12 2014-11-12 Industrial robot relative pose estimation method

Publications (1)

Publication Number Publication Date
CN104460505A true CN104460505A (en) 2015-03-25

Family

ID=52906738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410634619.8A Pending CN104460505A (en) 2014-11-12 2014-11-12 Industrial robot relative pose estimation method

Country Status (1)

Country Link
CN (1) CN104460505A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG, NAN: "Air-Ground Multi-Robot Cooperative Environment Perception for Indoor Scenes", China Master's Theses Full-Text Database, Information Science and Technology Series *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809700A (en) * 2016-03-23 2016-07-27 南京航空航天大学 Detecting and positioning method of air refueling taper sleeve image blocked by fuel receiving plug
CN105809700B (en) * 2016-03-23 2018-06-01 南京航空航天大学 A kind of drogue image detection localization method blocked by oily plug
CN105825515B (en) * 2016-03-23 2018-09-25 南京航空航天大学 A kind of fuel filling taper sleeve image detection localization method for autonomous air refuelling
CN108805065A (en) * 2018-05-31 2018-11-13 华南理工大学 One kind being based on the improved method for detecting lane lines of geometric properties
CN108982116A (en) * 2018-06-27 2018-12-11 北京艾瑞思机器人技术有限公司 Transport vehicle and its chassis parameter calibration method, device and computer-readable medium
CN108982116B (en) * 2018-06-27 2020-07-03 北京旷视机器人技术有限公司 Transport vehicle and chassis parameter calibration method, device and computer readable medium thereof
CN109146979A (en) * 2018-08-01 2019-01-04 苏州乐佰图信息技术有限公司 It walks the method for deviation for compensating manipulator
CN109146979B (en) * 2018-08-01 2022-02-01 苏州乐佰图信息技术有限公司 Method for compensating for deviation of mechanical arm from walking position
CN110000795A (en) * 2019-05-15 2019-07-12 苏州市职业大学 A kind of method of Visual servoing control, system and equipment
CN110969666A (en) * 2019-11-15 2020-04-07 北京中科慧眼科技有限公司 Binocular camera depth calibration method, device and system and storage medium
CN110969666B (en) * 2019-11-15 2023-08-18 北京中科慧眼科技有限公司 Binocular camera depth calibration method, device, system and storage medium
CN112068128A (en) * 2020-09-19 2020-12-11 重庆大学 Method for processing data and acquiring pose of linear radar in straight road scene

Similar Documents

Publication Publication Date Title
Romero-Ramirez et al. Speeded up detection of squared fiducial markers
CN104460505A (en) Industrial robot relative pose estimation method
CN105021124B (en) A kind of planar part three-dimensional position and normal vector computational methods based on depth map
Marton et al. General 3D modelling of novel objects from a single view
JP5618569B2 (en) Position and orientation estimation apparatus and method
Tang et al. 3D mapping and 6D pose computation for real time augmented reality on cylindrical objects
Kuo et al. 3D object detection and pose estimation from depth image for robotic bin picking
Azad et al. Stereo-based 6d object localization for grasping with humanoid robot systems
Wirth et al. Visual odometry for autonomous underwater vehicles
Romero-Ramirez et al. Fractal Markers: a new approach for long-range marker pose estimation under occlusion
CN105139416A (en) Object identification method based on image information and depth information
CN105023010A (en) Face living body detection method and system
EP2887315B1 (en) Camera calibration device, method for implementing calibration, program and camera for movable body
US10043279B1 (en) Robust detection and classification of body parts in a depth map
Kallwies et al. Determining and improving the localization accuracy of AprilTag detection
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
Sansoni et al. Optoranger: A 3D pattern matching method for bin picking applications
Ferrara et al. Wide-angle and long-range real time pose estimation: A comparison between monocular and stereo vision systems
Pi et al. Stereo visual SLAM system in underwater environment
Zhou et al. Vision-based pose estimation from points with unknown correspondences
Ückermann et al. Realtime 3D segmentation for human-robot interaction
Azartash et al. Visual odometry for RGB-D cameras for dynamic scenes
CN110514140B (en) Three-dimensional imaging method, device, equipment and storage medium
CN111179271B (en) Object angle information labeling method based on retrieval matching and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325