CN104460505A - Industrial robot relative pose estimation method - Google Patents

Industrial robot relative pose estimation method

Info

Publication number
CN104460505A
CN104460505A (application CN201410634619.8A)
Authority
CN
China
Prior art keywords
boundary
contour
industrial robot
Prior art date
Legal status
Pending
Application number
CN201410634619.8A
Other languages
Chinese (zh)
Inventor
曲道奎
徐方
姜楠
李邦宇
李燊
张涛
Current Assignee
Shenyang Siasun Robot and Automation Co Ltd
Original Assignee
Shenyang Siasun Robot and Automation Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Siasun Robot and Automation Co Ltd filed Critical Shenyang Siasun Robot and Automation Co Ltd
Priority to CN201410634619.8A priority Critical patent/CN104460505A/en
Publication of CN104460505A publication Critical patent/CN104460505A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems, electric
    • G05B19/18 — Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision, in particular to an industrial robot relative pose estimation method based on an artificial mark. The method comprises the following steps: binarization processing is carried out on an original image by adopting adaptive thresholding to obtain a binary image; boundary contour extraction is carried out on the binary image to obtain the contours of all foreground targets; shape and topological structure analysis is carried out on the contours to obtain the outer contour area of the mark; least-squares ellipse fitting is carried out on the outer contour; five relative degrees of freedom are calculated through circular deformation analysis; and image moments are used to analyze the character T to obtain a yaw angle, giving the three-dimensional pose of the camera relative to the artificial mark. With this method, three-dimensional pose estimation under monocular vision is achieved based on the artificial mark: the depth information at the artificial mark can be obtained through a monocular camera and used as visual guidance for the industrial robot, so the cost of purchasing a binocular camera is saved; at the same time, the algorithm is efficient, does not require a large amount of computing resources, and is easy to implement.

Description

Industrial robot relative pose estimation method
Technical Field
The invention relates to the technical field of computer vision, in particular to an industrial robot relative pose estimation method based on an artificial mark.
Background
When an industrial robot performs an assembly operation, there are generally two ways to guide its work. The teaching-based approach offers high operating precision and does not require much professional skill, but the position of each workpiece to be assembled must remain unchanged, so it lacks intelligence. The other approach uses external sensors to obtain the pose information of some or all of the workpieces to be assembled and thereby guide the motion of the industrial robot.
When only partial pose information is needed, a two-dimensional code or mark is usually adopted, and its two-dimensional center position and rotation angle are obtained through image processing; however, the height information still has to be taught, so this approach cannot handle workpieces of varying height.
When all of the three-dimensional pose information is needed, the prior art generally uses binocular stereo vision to reconstruct the scene in three dimensions and then extract the pose of the target object. Binocular stereo vision can handle most scenes, but the depth information obtained after reconstruction may contain missing values, leaving the computed target pose incomplete and unusable; in addition, reconstructing the binocular scene consumes a large amount of computing resources.
Another method for recovering three-dimensional information is structured light, such as a three-dimensional laser, which obtains the depth information of a scene well but makes object identification more difficult.
Disclosure of Invention
The invention aims to overcome the defects of existing visual identification technology and to obtain three-dimensional information, and provides an industrial robot relative pose estimation method based on an artificial mark.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides an industrial robot relative pose estimation method, which comprises the following steps:
s1, performing binarization processing on the original image by adopting self-adaptive thresholding to obtain a binary image;
s2, extracting the boundary contour of the binary image to obtain the contour of all foreground objects;
s3, analyzing the shape and the topological structure of the contour to obtain the marked outer contour area;
s4, performing least square ellipse fitting on the outer contour;
s5, calculating five relative degrees of freedom through circular deformation analysis;
and S6, analyzing the character T by using the image moment to obtain a yaw angle and obtain a three-dimensional pose of the camera relative to the artificial mark.
In some embodiments, the adaptive thresholding compares each pixel in the original image to the mean of the pixels in its surrounding s × s neighborhood, and if the current pixel value is less than the mean threshold, the pixel value of the pixel is set to 0, otherwise, it is set to 255.
In some embodiments, the threshold is 15%.
In some embodiments, the boundary contour extraction raster-scans the whole picture until a starting point satisfying the boundary tracking condition is encountered; a label serial number is assigned to the newly found boundary, and its parent boundary is determined from the most recently encountered boundary on record; the search and marking of the boundary are completed with a boundary tracking algorithm according to the boundary point marking rule; after the boundary tracking is completed, the raster scanning process resumes until the lower right corner of the image is reached.
In some embodiments, the mark satisfies the following: the size of the contour is within a threshold range; the current contour must have a hole contour, and the hole contour must also have a sub-contour; the near-circularity of the current contour is checked; and the radius ratio of the current contour to its sub-contour is calculated.
In some embodiments, the closer the current contour is to a circle, the closer the circularity o is to 1; the circularity threshold is set to 0.8.
The invention has the beneficial effects that: the three-dimensional pose estimation under monocular vision is realized based on the artificial mark, the depth information at the artificial mark can be obtained through the monocular camera and used as the visual guidance of the industrial robot, the cost for purchasing a binocular camera is saved, meanwhile, the algorithm is high in efficiency, a large amount of computing resources are not needed, and the implementation is simple. Compared with the prior art, the method saves cost and computing resources, and can obtain more accurate three-dimensional pose information.
Drawings
FIG. 1 schematically illustrates a diagram of an artificial mark according to one embodiment of the invention.
Fig. 2 schematically shows a schematic diagram of an integral image according to an embodiment of the invention.
FIG. 3 schematically illustrates a boundary tracking schematic according to one embodiment of the invention.
FIG. 4 schematically shows a coordinate system diagram according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
The invention uses an artificial mark, as shown in FIG. 1; the mark consists of two rings with a character T in the middle, and the two rings are used to remove ambiguity in detection. The mark is placed at a fixed position on the assembly box, so that the relative position between the mark and the workpiece groove is guaranteed to be constant and known; the image acquired by the camera is then processed and the relative pose is estimated.
The invention provides an industrial robot relative pose estimation method, which comprises the following steps
S1, performing binarization processing on the original image by adopting self-adaptive thresholding to obtain a binary image;
s2, extracting the boundary contour of the binary image to obtain the contour of all foreground objects;
s3, analyzing the shape and the topological structure of the contour to obtain the marked outer contour area;
s4, performing least square ellipse fitting on the outer contour;
s5, calculating five relative degrees of freedom through circular deformation analysis;
and S6, analyzing the character T by using the image moment to obtain a yaw angle and obtain a three-dimensional pose of the camera relative to the artificial mark.
Specifically, carrying out binarization processing on an original image by adopting self-adaptive thresholding to obtain a binary image; the image thresholding is to perform binary classification on pixel points in an image according to a certain characteristic (for example, pixel intensity), and finally form a binary image.
Because images captured in natural environments often suffer from uneven illumination and even severe shadow occlusion, a fixed threshold usually fails to extract the expected foreground object; the embodiment of the invention therefore adopts an adaptive thresholding method with a variable threshold, which effectively overcomes these problems.
When the pixel sums of overlapping rectangular regions of an image need to be computed frequently, an integral image is usually adopted: each sum then reduces to a constant-time combination of additions and subtractions, which greatly saves computation. The adaptive thresholding algorithm in the embodiment of the invention uses such an integral image, which is explained below.
Assuming that the pixel intensity of the original image at the point (x, y) is f(x, y), the value of the corresponding point (x, y) in the integral image is I(x, y), defined as follows:
I(x,y)=f(x,y)+I(x-1,y)+I(x,y-1)-I(x-1,y-1) (1)
As shown in FIG. 2, once the integral image is constructed, if the pixel sum of region D in the image is to be calculated and the coordinates of the upper left corner (x_1, y_1) and the lower right corner (x_2, y_2) of region D are known, the calculation formula is as follows:
Σ_{x=x_1}^{x_2} Σ_{y=y_1}^{y_2} f(x, y) = I(x_2, y_2) − I(x_2, y_1 − 1) − I(x_1 − 1, y_2) + I(x_1 − 1, y_1 − 1)   (2)
The basic idea of the adaptive thresholding in the invention is to compare each pixel in the image with the pixel mean of its surrounding s × s neighborhood: if the current pixel value is lower than the mean by a certain percentage (15% is adopted in the embodiment of the invention), the pixel value of that point is set to 0; otherwise, it is set to 255. This strategy preserves regions of strong contrast while reducing sensitivity in regions where the gradient changes slowly.
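For illustration, this thresholding step can be sketched in a few lines of NumPy. This is a minimal sketch, not the claimed implementation: it assumes an 8-bit grayscale input, reads "less than a certain percentage of the mean" in the usual sense of "darker than the local mean by 15%", and the function and variable names are illustrative.

```python
import numpy as np

def adaptive_threshold(gray, s=15, t=0.15):
    """Binarize a grayscale image by comparing each pixel with the mean of its
    s x s neighborhood; the mean comes from an integral image (equations 1-2)."""
    h, w = gray.shape
    # Integral image padded with a zero row/column so that integral[y, x]
    # holds the sum of all pixels above and to the left of (x, y) (equation 1).
    integral = np.zeros((h + 1, w + 1), dtype=np.int64)
    integral[1:, 1:] = np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)

    half = s // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y1 = np.clip(ys - half, 0, h - 1)
    y2 = np.clip(ys + half, 0, h - 1)
    x1 = np.clip(xs - half, 0, w - 1)
    x2 = np.clip(xs + half, 0, w - 1)

    # Window sum from four integral-image lookups (equation 2).
    window_sum = (integral[y2 + 1, x2 + 1] - integral[y1, x2 + 1]
                  - integral[y2 + 1, x1] + integral[y1, x1])
    count = (y2 - y1 + 1) * (x2 - x1 + 1)
    mean = window_sum / count

    # A pixel darker than the local mean by more than t (15%) becomes 0,
    # everything else becomes 255.
    return np.where(gray < mean * (1.0 - t), 0, 255).astype(np.uint8)
```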
Extracting the boundary contour of the binary image to obtain the contours of all foreground objects; after the original image is subjected to thresholding, the original image is changed into a binary image, obvious connected regions and corresponding contour information exist in the binary image, and the extraction of binary image boundaries and the analysis of topological structures are completed through a boundary tracking algorithm and are used for subsequent mark detection.
In a binary image, the surrounding borders of the image constitute the background of the image. Assuming that the pixel intensity in the binary image is divided into 0 and 1, the pixels with pixel values of 0 and 1 are respectively called 0 pixel and 1 pixel, and without loss of generality, it is assumed that 0 pixel fills the background of the binary image. Connected regions consisting of 1 pixel and 0 pixel are called 1-connected and 0-connected. If the 0 connected region contains the background of the image, the connected region is called the background, otherwise, the connected region is the hole.
To illustrate the topological relationships of the boundaries in a binary image, the following definitions are given:
(1) Boundary point: under four-connectivity, if a 1-pixel (i, j) has a neighbor that is a 0-pixel (p, q), the pixel (i, j) is called a boundary point.
Equivalently, if (i, j) belongs to the 1-connected region S_1 and (p, q) belongs to the 0-connected region S_2, the boundary point (i, j) is defined as a pixel lying between the 1-connected region S_1 and the 0-connected region S_2.
(2) Outer boundary and hole boundary: the outer boundary is a boundary generated by directly surrounding a 1 connected region by a 0 connected region; correspondingly, the hole boundary is a boundary generated by directly surrounding the 0-connected region by the 1-connected region.
The above-mentioned direct surrounding means that two connected regions have a surrounding relationship and a boundary point exists therebetween.
(3) Parent boundary: the parent boundary of the outer boundary between a 1-connected region S_1 and a 0-connected region S_2 is defined as follows: if S_2 is a hole region, its parent boundary is the hole boundary formed between S_2 and the 1-connected region that directly surrounds S_2; if S_2 is the background, the special hole boundary formed by the image background serves as the parent boundary. The parent boundary of the hole boundary between a hole region S_3 and the 1-connected region S_4 that directly surrounds it is defined as the outer boundary formed between S_4 and the 0-connected region that directly surrounds it.
The basic idea of the above algorithm is as follows. The whole picture is raster scanned until a starting point satisfying the boundary tracking condition is encountered; as shown in FIG. 3, the starting point condition is (a) an outer contour starting point (i, j) or (b) a hole contour starting point (i, j). A label serial number is assigned to the newly found boundary, and the parent boundary of the current boundary is determined from the most recently encountered boundary on record, according to the rules in Table 1, where B is the newly found contour and B' is the most recently encountered contour. The search and marking of the boundary are then completed with the classical boundary tracking algorithm according to the boundary point marking rule. After the boundary tracking is completed, raster scanning resumes until the lower right corner of the image is reached.
Table 1: Parent boundary determination rule for contour B
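The border following and parent-boundary bookkeeping described above is the same scheme implemented by OpenCV's findContours (Suzuki-style border following with a full topology tree), so a minimal sketch of this step can simply reuse it; the OpenCV 4 return signature and the function name below are assumptions for illustration.

```python
import cv2

def extract_boundaries(binary):
    """Extract all boundaries of the binary image together with their topology.
    Each hierarchy entry is [next, previous, first_child, parent]; parent == -1
    means the parent is the special hole boundary formed by the image background."""
    contours, hierarchy = cv2.findContours(
        binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    return contours, (hierarchy[0] if hierarchy is not None else [])
```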
Through this analysis of the binary image, its contours can be obtained quickly and effectively; shape and topological structure analysis is then performed on the contours to obtain the outer contour region of the mark. The mark in the embodiment of the present invention has obvious geometric features:
(1) the size of the outline is within a set threshold range;
(2) the current profile must have a hole profile, and the hole profile must also have a sub-profile;
(3) the near-circularity of the current contour is checked, using the following formula:
o = 4πA / u^2   (3)
where A is the image area enclosed by the current contour, u is the length of the contour, and the circularity o is a number between 0 and 1. The closer the current contour is to a circle, the closer o is to 1; the circularity threshold was found experimentally, and the final threshold for o was set to 0.8.
(4) the radius ratio of the current contour to its sub-contour is calculated, using the following formula:
c_r = A_inner / A_outer   (4)
where A_outer and A_inner are the areas of the current contour and its sub-contour, respectively. Since the radii of the inner and outer rings of the mark are fixed constants and do not change with the angle at which the mark is projected to the camera, c_r fluctuates around a constant value. Analyzing the detection results with this strategy ensures error-free detection of the mark in a natural scene.
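The four checks can then be sketched as a filter over the contour list and hierarchy produced by the previous step. The 0.8 circularity threshold comes from the description; the area range and radius-ratio bounds below are illustrative placeholders, not values from the patent.

```python
import numpy as np
import cv2

def find_marker_contour(contours, hierarchy, area_range=(200.0, 50000.0),
                        ratio_range=(0.3, 0.9)):
    """Return the index of the marker's outer contour, or None if absent."""
    for i, cnt in enumerate(contours):
        area = cv2.contourArea(cnt)
        # (1) contour size inside the set threshold range
        if not (area_range[0] < area < area_range[1]):
            continue
        # (2) must have a hole contour, which in turn has a sub-contour
        child = hierarchy[i][2]
        if child < 0 or hierarchy[child][2] < 0:
            continue
        # (3) near-circularity o = 4*pi*A / u^2 (equation 3), threshold 0.8
        u = cv2.arcLength(cnt, True)
        if u == 0 or 4.0 * np.pi * area / (u * u) < 0.8:
            continue
        # (4) area (radius) ratio between the hole and the outer contour (equation 4)
        cr = cv2.contourArea(contours[child]) / area
        if ratio_range[0] < cr < ratio_range[1]:
            return i
    return None
```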
Performing least square ellipse fitting on the outer contour; when the mark is detected and identified, a coordinate set of pixel coordinate points on the outer ring of the mark can be obtained. Due to the effect of perspective projection, the original circular shape becomes an ellipse, and the general equation is set as follows:
Ax^2 + 2Bxy + Cy^2 + 2Dx + 2Ey = 1   (5)
in order to complete the 5-degree-of-freedom relative pose estimation through deformation analysis, the parameters are solved first, and are written into a matrix form as shown in formula 6, and the detailed expansion is shown in formula 7.
Φx=b (6)
[x_1^2  2x_1y_1  y_1^2  2x_1  2y_1;  x_2^2  2x_2y_2  y_2^2  2x_2  2y_2;  … ;  x_n^2  2x_ny_n  y_n^2  2x_n  2y_n] · (A, B, C, D, E)^T = (1, 1, …, 1)^T   (7)
Here A, B, C, D and E are the 5 ellipse parameters to be solved. Because the number of contour points on the ellipse in the image is large, the system of equations is overdetermined, and a least-squares fitting method is generally adopted to solve it, i.e. the following minimization problem:
ε = min ||Φx − b||   (8)
where ε is the residual. To solve this problem, the embodiment adopts the singular value decomposition method.
Singular values of a matrix and its decomposition: let Φ ∈ C^(m×n) and k = min(m, n), and let λ_1 ≥ λ_2 ≥ … ≥ λ_k ≥ 0 be the eigenvalues of the Hermitian positive semi-definite matrix Φ^H Φ. The non-negative real numbers σ_i(Φ) are the singular values of the matrix Φ, given by:
σ_i(Φ) = √λ_i,   i = 1, 2, …, k   (9)
singular value decomposition theorem of matrix: let phi e Cm×nAnd the rank (Φ) ═ r, then there is an m-order, n-order unitary matrix U, V such that,
<math> <mrow> <mi>&Phi;</mi> <mo>=</mo> <mi>U</mi> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <mi>&Sigma;</mi> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> </mtable> </mfenced> <msup> <mi>V</mi> <mi>H</mi> </msup> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>10</mn> <mo>)</mo> </mrow> </mrow> </math>
where, Σ ═ diag (σ)12,L,σr) And σ is the non-zero singular value of the matrix Φ.
The solution to the above-described elliptical parametric least squares fitting problem is therefore
x = V Σ^{-1} U^T b   (11)
where Φ = U Σ V^H and Σ^{-1} is defined as the inverse taken over the non-zero singular values.
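A minimal sketch of this least-squares fit: the design matrix Φ of equation 7 is assembled from the outer-contour pixel coordinates (assumed here to be an n×2 array), and equation 11 is applied through NumPy's SVD-based pseudo-inverse; the function name is illustrative.

```python
import numpy as np

def fit_ellipse(points):
    """Fit A*x^2 + 2B*x*y + C*y^2 + 2D*x + 2E*y = 1 (equation 5) to the contour
    points by least squares, using the SVD pseudo-inverse of Phi (equations 6-11).
    Returns the parameter vector (A, B, C, D, E)."""
    x = points[:, 0].astype(float)
    y = points[:, 1].astype(float)
    # Design matrix Phi of equation 7, one row per contour point.
    Phi = np.column_stack((x * x, 2 * x * y, y * y, 2 * x, 2 * y))
    b = np.ones(len(points))
    # x = V * Sigma^-1 * U^T * b (equation 11); pinv performs this pseudo-inverse.
    return np.linalg.pinv(Phi) @ b
```

Calling np.linalg.lstsq(Phi, b, rcond=None) would return the same least-squares solution.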
In practice, many man-made objects have circular surface structures; when the camera images such an object from a different viewing angle, the perspective transformation of the camera renders the circle as an ellipse. Five relative degrees of freedom are therefore calculated by circular deformation analysis.
As shown in fig. 4, the origin of the camera coordinate system is defined as the intersection of the camera sensor chip and the optical axis, and the optical axis is defined as the Z-axis. In a two-dimensional plane, the general equation for an ellipse is as follows:
Ax_c^2+2Bx_c y_c+Cy_c^2+2Dx_c+2Ey_c+F=0 (12)
for the convenience of calculation derivation, the corresponding matrix form is shown as follows:
(x_c  y_c  1) · [A  B  D;  B  C  E;  D  E  F] · (x_c  y_c  1)^T = 0   (13)
assuming that the straight beam passes through both the camera origin and the ellipse in the image plane, the beam forms a tilted elliptical cone. Assuming that the focal length of the camera is f, the image plane is represented as a plane where z is f in the camera coordinate system. The focal length of the camera in the invention is always considered as a known quantity and is obtained by calibrating the internal parameters of the known quantity in advance, so that the points on the inclined elliptic cone are expressed as the following formula:
P=k(x_c,y_c,f)^T (14)
where k is a scale factor describing the distance of point P from the origin of the camera coordinate system. From equations 13 and 14, the tilted elliptic cone is expressed as follows:
P^T Q P = 0   (15)
wherein Q is expressed as follows:
Q = [A  B  D/f;  B  C  E/f;  D/f  E/f  F/f^2]   (16)
a support plane coordinate system defined with the origin at the camera origin, but with the Z-axis defined as the unit normal vector downward of the circular plane in real scene. Let z0Is the Z coordinate supporting a point on the plane, so the expression for a circle on the plane is as follows:
(x − x_0)^2 + (y − y_0)^2 = r^2,   z = z_0   (17)
where (x_0, y_0, z_0) is the center of the mark and r is the radius of its outer circle. Likewise, the following formula defines an oblique cone:
P_c^T Q_c P_c = 0   (18)
where Q_c is defined as follows:
Q_c = [1  0  x_0/z_0;  0  1  y_0/z_0;  x_0/z_0  y_0/z_0  (x_0^2 + y_0^2 − r^2)/z_0^2]   (19)
according to the above two coordinate systems, they have the same origin of coordinates, so there is a rotational relationship between the two coordinate systems. Since the oblique cone and the oblique elliptical cone are the same cone surface, there is a rotation matrix RcA 1 is to PcMoving to P, the formula is as follows:
P = R_c P_c   (20)
For arbitrary k ≠ 0, kQ_c describes the same cone as Q_c. From equations 15, 18 and 20, the following equation can be derived:
k R_c^T Q R_c = Q_c   (21)
to determine RcAnd QcIn order to obtain the unit normal vector of the support surface and the coordinates of the center point of the circle in the scene, first, Q is diagonalized. Let λ1,λ2,λ3Is the eigenvalue, v, of the matrix Q1=(v1x v1y v1z)T,v2=(v2x v2y v2z)T,v3=(v3x v3y v3z)TRespectively, corresponding feature vectors, the matrix Q can be expressed as follows:
Q = V Λ V^T   (22)
according to Schur's theorem, since the matrix Q is a real symmetric matrix, i.e., Q is a normal matrix, the matrix Λ is a diagonal matrix, and diagonal elements are eigenvectors, the specific expression is as follows:
Λ = diag{λ_1, λ_2, λ_3},   V = (v_1  v_2  v_3)   (23)
substituting the formula 22 into the formula 21 and sorting to obtain
k R^T Λ R = Q_c   (24)
Wherein the expression of the matrix R is as follows:
R = [r_1x  r_2x  r_3x;  r_1y  r_2y  r_3y;  r_1z  r_2z  r_3z] = V^T R_c   (25)
expanding equation 24 according to equation 25, the following specific expression can be obtained:
λ_1(r_1x^2 − r_2x^2) + λ_2(r_1y^2 − r_2y^2) + λ_3(r_1z^2 − r_2z^2) = 0,   λ_1 r_1x r_2x + λ_2 r_1y r_2y + λ_3 r_1z r_2z = 0   (26)
if the current graph is a conic, its 3 eigenvalues satisfy that there are 2 as positive and 1 as negative, thus making the following assumptions:
λ_1 λ_2 > 0,   λ_1 λ_3 < 0,   |λ_1| ≥ |λ_2|   (27)
equation 26 is simplified and R is knownTAnd R is I, and a matrix R expression is obtained as follows:
R = [g cos α   S_1 g sin α   S_2 h;   sin α   −S_1 cos α   0;   S_1 S_2 h cos α   S_2 h sin α   −S_1 g]   (28)
where α is a free variable, S_1 and S_2 are undetermined sign variables, and
g = √((λ_2 − λ_3)/(λ_1 − λ_3)),   h = √((λ_1 − λ_2)/(λ_1 − λ_3))   (29)
Substituting equation 28 into equation 24 determines the parameter k; the values obtained by the solution are as follows:
x_0/z_0 = S_2 √((λ_1 − λ_2)(λ_2 − λ_3)) · cos α / λ_2
y_0/z_0 = S_1 S_2 √((λ_1 − λ_2)(λ_2 − λ_3)) · sin α / λ_2
r^2/z_0^2 = −λ_1 λ_3 / λ_2^2
k = λ_2   (30)
as shown in fig. 4, descriptions N and C of z-axis unit normal vector and origin of coordinates in the landing indicator coordinate system in the camera coordinate system are obtained, and the specific formula is defined as follows:
C = z_0 V R (x_0/z_0, y_0/z_0, 1)^T = z_0 V (S_2 (λ_3/λ_2) √((λ_1 − λ_2)/(λ_1 − λ_3)),  0,  −S_1 (λ_1/λ_2) √((λ_2 − λ_3)/(λ_1 − λ_3)))^T
N = V R (0, 0, 1)^T = V (S_2 √((λ_1 − λ_2)/(λ_1 − λ_3)),  0,  S_1 √((λ_2 − λ_3)/(λ_1 − λ_3)))^T   (31)
where the remaining sign variable can be determined.
Calculating with equation 31 yields two solutions each for N and C; solving for the two possible values of each sign gives four groups of solutions, and the two groups of solutions that are close to each other are taken as the final solution.
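Putting equations 12–31 together, the deformation analysis can be sketched as follows. The sketch assumes the ellipse was fitted in the "= 1" form of equation 5 (so F = −1), builds Q per equation 16, orders the eigenvalues per equation 27, and enumerates the two sign variables to produce the four candidate (N, C) pairs of equation 31; the expression for the centre follows the standard closed-form circle-pose result, and selecting the two mutually consistent candidates is left to the caller.

```python
import itertools
import numpy as np

def circle_pose_candidates(A, B, C, D, E, f, r):
    """Candidate (normal N, centre Cc) pairs of the circle in the camera frame,
    from the fitted ellipse (equation 5, hence F = -1), focal length f and the
    known outer-circle radius r (equations 16-31)."""
    F = -1.0
    Q = np.array([[A,     B,     D / f],
                  [B,     C,     E / f],
                  [D / f, E / f, F / (f * f)]])

    # Q = V diag(l1, l2, l3) V^T (equations 22-23).
    w, V = np.linalg.eigh(Q)
    if np.sum(w > 0) == 1:          # kQ describes the same cone (equation 21): flip sign
        w = -w
    # Order so that l1*l2 > 0, l1*l3 < 0 and |l1| >= |l2| (equation 27).
    neg = int(np.argmin(w))
    pos = sorted((i for i in range(3) if i != neg), key=lambda i: w[i], reverse=True)
    order = [pos[0], pos[1], neg]
    l1, l2, l3 = w[order]
    V = V[:, order]

    z0 = r * l2 / np.sqrt(-l1 * l3)  # from r^2/z0^2 = -l1*l3/l2^2 (equation 30)
    g1 = np.sqrt((l1 - l2) / (l1 - l3))
    g2 = np.sqrt((l2 - l3) / (l1 - l3))

    candidates = []
    for S1, S2 in itertools.product((1.0, -1.0), repeat=2):
        # Equation 31: normal vector and circle centre in the camera frame.
        N = V @ np.array([S2 * g1, 0.0, S1 * g2])
        Cc = z0 * (V @ np.array([S2 * (l3 / l2) * g1, 0.0, -S1 * (l1 / l2) * g2]))
        candidates.append((N, Cc))
    return candidates
```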
After the mark is identified, the image position of the character T inside the mark is obtained; the character T is analyzed using image moments to obtain the angle between its orientation and the camera's x-axis, which is used to guide the rotation of the robot. Image moments are particular weighted averages of the pixel intensities in an image and are usually used to describe geometric properties of objects, such as area, center of gravity, tensor and direction. That is, the character T is analyzed with image moments to obtain the yaw angle, completing the three-dimensional pose of the camera relative to the artificial mark.
For a two-dimensional continuous function f(x, y), the moment of order p + q is defined as:
m_{p,q} = ∫∫ x^p y^q f(x, y) dx dy   (32)
for a discrete digital image I (x, y), its moment of the p + q order is defined as follows:
m_{p,q} = Σ_x Σ_y x^p y^q I(x, y)   (33)
According to the above formula, when (p, q) = (0, 0) the zeroth-order moment describes the area covered by the (x, y) region, i.e. the area of the object; the center of gravity of the object is then obtained from the following formula:
x_c = m_{1,0}/m_{0,0},   y_c = m_{0,1}/m_{0,0}   (34)
However, the raw image moments' description of an object changes with the object's position in the image; to obtain position-invariant moments, the central moment is introduced, defined as follows:
u_{p,q} = Σ_x Σ_y (x − x_c)^p (y − y_c)^q I(x, y)   (35)
The first- and higher-order central moments can be derived from the raw moments by the following formula:
u_{p,q} = m_{p,q}/m_{0,0} − (m_{1,0}/m_{0,0})^p · (m_{0,1}/m_{0,0})^q   (36)
Further, the common central moments of order three and below are calculated from the raw moments as follows:
u_{1,0} = u_{0,1} = 0
u_{2,0} = m_{2,0} − m_{0,0} x_c^2
u_{1,1} = m_{1,1} − m_{0,0} x_c y_c
u_{0,2} = m_{0,2} − m_{0,0} y_c^2
u_{3,0} = m_{3,0} − 3 m_{2,0} x_c + 2 m_{0,0} x_c^3
u_{2,1} = m_{2,1} − m_{2,0} y_c − 2 m_{1,1} x_c + 2 m_{0,0} x_c^2 y_c
u_{1,2} = m_{1,2} − m_{0,2} x_c − 2 m_{1,1} y_c + 2 m_{0,0} x_c y_c^2
u_{0,3} = m_{0,3} − 3 m_{0,2} y_c + 2 m_{0,0} y_c^3   (37)
The direction of the letter T in the mark is defined as the angle between the principal axis of the letter T and the x-axis of the image, where the principal axis of an object in the image is the direction of maximum extent; the calculation formula is:
θ = (1/2) · arctan(2u_{1,1} / (u_{2,0} − u_{0,2}))   (38)
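As a brief sketch of this last step, the orientation of the character T can be read off the second-order central moments of its binary mask (equations 33–38); cv2.moments returns the raw moments m00, m10, … and the central moments mu20, mu11, mu02 directly. The mask and the function name below are assumptions for illustration, and t_mask is assumed to contain only the T region.

```python
import numpy as np
import cv2

def t_orientation(t_mask):
    """Yaw angle between the principal axis of the character T and the image
    x-axis, plus its center of gravity (equations 34 and 38)."""
    m = cv2.moments(t_mask, binaryImage=True)
    xc, yc = m["m10"] / m["m00"], m["m01"] / m["m00"]        # equation 34
    # Equation 38; arctan2 resolves the quadrant of the principal axis.
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return theta, (xc, yc)
```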
the invention realizes the three-dimensional pose estimation under monocular vision based on the artificial mark, the depth information at the artificial mark can be obtained through the monocular camera and used as the visual guidance of the industrial robot, the cost for purchasing a binocular camera is saved, meanwhile, the algorithm efficiency of the invention is high, a large amount of computing resources are not needed, and the implementation is simpler. Compared with the prior art, the method saves cost and computing resources, and can obtain more accurate three-dimensional pose information.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (6)

1. An industrial robot relative pose estimation method, characterized by comprising:
s1, performing binarization processing on the original image by adopting self-adaptive thresholding to obtain a binary image;
s2, extracting the boundary contour of the binary image to obtain the contour of all foreground objects;
s3, analyzing the shape and the topological structure of the contour to obtain the marked outer contour area;
s4, performing least square ellipse fitting on the outer contour;
s5, calculating five relative degrees of freedom through circular deformation analysis;
and S6, analyzing the character T by using the image moment to obtain a yaw angle and obtain a three-dimensional pose of the camera relative to the artificial mark.
2. The industrial robot relative pose estimation method of claim 1, wherein said adaptive thresholding compares each pixel point in the original image with the pixel mean in its surrounding s x s sized neighborhood, and if the current pixel value is less than the mean threshold, the pixel value of said pixel point is set to 0, otherwise it is set to 255.
3. An industrial robot relative pose estimation method according to claim 2, wherein said threshold value is 15%.
4. The industrial robot relative pose estimation method according to claim 1, wherein the boundary contour extraction comprises: raster scanning the whole picture until a starting point satisfying the boundary tracking condition is encountered; assigning a label serial number to the currently newly found boundary, and determining the parent boundary of the current boundary according to the most recently encountered boundary on record; completing boundary search and marking according to the boundary point marking rule by adopting a boundary tracking algorithm; and, after the boundary tracking is completed, resuming the raster scanning process until the lower right corner of the image is reached.
5. The industrial robot relative pose estimation method according to claim 1, wherein the mark satisfies the following: the size of the contour is within a threshold range; the current contour must have a hole contour, and the hole contour must also have a sub-contour; the near-circularity of the current contour is checked; and the radius ratio of the current contour to its sub-contour is calculated.
6. The industrial robot relative pose estimation method according to claim 5, wherein the closer the current contour is to a circle, the closer the circularity o is to 1; and the circularity threshold is set to 0.8.
CN201410634619.8A 2014-11-12 2014-11-12 Industrial robot relative pose estimation method Pending CN104460505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410634619.8A CN104460505A (en) 2014-11-12 2014-11-12 Industrial robot relative pose estimation method


Publications (1)

Publication Number Publication Date
CN104460505A true CN104460505A (en) 2015-03-25

Family

ID=52906738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410634619.8A Pending CN104460505A (en) 2014-11-12 2014-11-12 Industrial robot relative pose estimation method

Country Status (1)

Country Link
CN (1) CN104460505A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜楠: "面对室内场景的空地多机器人协作环境感知", 《中国优秀硕士学位论文全文数据库信息科技辑》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809700A (en) * 2016-03-23 2016-07-27 南京航空航天大学 Detecting and positioning method of air refueling taper sleeve image blocked by fuel receiving plug
CN105809700B (en) * 2016-03-23 2018-06-01 南京航空航天大学 A kind of drogue image detection localization method blocked by oily plug
CN105825515B (en) * 2016-03-23 2018-09-25 南京航空航天大学 A kind of fuel filling taper sleeve image detection localization method for autonomous air refuelling
CN108805065A (en) * 2018-05-31 2018-11-13 华南理工大学 One kind being based on the improved method for detecting lane lines of geometric properties
CN108982116A (en) * 2018-06-27 2018-12-11 北京艾瑞思机器人技术有限公司 Transport vehicle and its chassis parameter calibration method, device and computer-readable medium
CN108982116B (en) * 2018-06-27 2020-07-03 北京旷视机器人技术有限公司 Transport vehicle and chassis parameter calibration method, device and computer readable medium thereof
CN109146979A (en) * 2018-08-01 2019-01-04 苏州乐佰图信息技术有限公司 It walks the method for deviation for compensating manipulator
CN109146979B (en) * 2018-08-01 2022-02-01 苏州乐佰图信息技术有限公司 Method for compensating for deviation of mechanical arm from walking position
CN110000795A (en) * 2019-05-15 2019-07-12 苏州市职业大学 A kind of method of Visual servoing control, system and equipment
CN110969666A (en) * 2019-11-15 2020-04-07 北京中科慧眼科技有限公司 Binocular camera depth calibration method, device and system and storage medium
CN110969666B (en) * 2019-11-15 2023-08-18 北京中科慧眼科技有限公司 Binocular camera depth calibration method, device, system and storage medium
CN112068128A (en) * 2020-09-19 2020-12-11 重庆大学 Method for processing data and acquiring pose of linear radar in straight road scene


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325