CN110763204A - Planar coding target and pose measurement method thereof - Google Patents


Info

Publication number
CN110763204A
Authority
CN
China
Prior art keywords
target
point
points
positioning
image
Prior art date
Legal status
Granted
Application number
CN201910552555.XA
Other languages
Chinese (zh)
Other versions
CN110763204B (en)
Inventor
赵敏
张琪
张宇帆
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201910552555.XA priority Critical patent/CN110763204B/en
Publication of CN110763204A publication Critical patent/CN110763204A/en
Application granted granted Critical
Publication of CN110763204B publication Critical patent/CN110763204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C11/04 — Interpretation of pictures

Abstract

The invention discloses a planar coded target. A coding unit is arranged on the working surface of the target; it comprises a 7 × 7 array of positioning points and 26 orientation points of identical size, the positioning points being larger than the orientation points, and each orientation point lying directly to the right of, or at the lower right of, a positioning point. The target may rotate through 360° in the plane, and the direction of rotation can be determined effectively from the relation between the positioning points and the orientation points. The minimum coding unit is a group of 4 positioning points, and the codes of any 4 adjacent positioning points differ; as long as any portion of the target larger than the minimum coding unit is captured, its position within the original target can be recovered by decoding. The invention further discloses a pose measurement method applied to the planar coded target: measurement requires capturing only part of the target pattern, which resolves the conflict between large range and high precision in visual pose measurement; the structure is simple and the performance reliable.

Description

Planar coding target and pose measurement method thereof
Technical Field
The invention belongs to the technical field of testing and metering equipment and visual inspection. It relates in particular to a planar coded target, and further to a pose measurement method based on that coded target.
Background
Measurement of the position and attitude of an object has important application value in aerospace, robot navigation, surgery, photoelectric aiming systems, automobile quality inspection and other industrial fields. Monocular visual pose measurement uses a single vision sensor to acquire images and measure an object's geometric dimensions and its position and attitude in space; it has attracted attention for its simple structure, low cost, strong real-time performance and few calibration steps.
The invention patent entitled "Method for determining target topological relation and camera calibration target capable of being placed at will", publication number CN101098492A, published 2008.01.02, discloses a method for determining a target's topological relation and a freely placeable camera calibration target. The target carries 4 large circles arranged diagonally among an array of small circles; the centers of three of the large circles lie on a straight line while the center of the fourth lies off that line, so the topological correspondence of the target at any placement can be determined easily by combining an image-processing algorithm with a vector-parallelism test. The patent publication entitled "Monocular vision pose measurement system", publication number CN205692214U, published 2016.11.16, discloses a pose measurement system built around a cooperative target composed of four coplanar rings of different sizes.
The first patent requires all four large circles of its target to appear in the field of view, and the second likewise requires its four circles to be visible; both therefore need all or most of the target in view during pose measurement and cannot simultaneously achieve large-range, high-resolution, high-precision measurement. How to break through the limitations of traditional target design and achieve both large-range and high-precision measurement is thus a problem in urgent need of a solution.
Disclosure of Invention
The invention aims to provide a coding target which can break through the limitation of the previous visual pose measurement and realize large range and high precision of measurement after being applied to a visual pose measurement system.
Another object of the present invention is to provide a method for measuring visual pose based on the coding target.
The technical scheme of the invention is as follows: the device comprises a housing to which a coded target is fixedly connected. A plurality of coding units are arranged on the coded target; each coding unit comprises positioning points and orientation points laid out in parallel arrays and distributed at equal intervals. The coded target is made of glass with a black background and white circles.
The invention is also characterized in that:
the black background adopts black matte ink, and the white circles adopt white matte ink, so that the image acquisition is facilitated.
The coding unit comprises a 7 × 7 array of identical positioning points and 26 identical orientation points; for each positioning point, either an orientation point is set directly to its right, or an orientation point (2) is set at its lower right, or no orientation point is set near it.
The coding unit adopts the following coding mode:
If an orientation point is set directly to the right of a positioning point, that position is coded '1'; if the orientation point is set at the lower right of the positioning point, it is coded '2'; if no orientation point is set near the positioning point, it is coded '0'. This generates the following code:
(the 7 × 7 code table appears as a figure in the original patent)
The codes of any four adjacent points do not repeat, so each code is unique.
The minimum coding unit of the coded target is a group of 4 positioning points, and the codes of any 4 adjacent positioning points differ; as long as any portion of the target larger than the minimum coding unit is captured, decoding that portion yields its position within the original target.
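As an illustration of this property, the sketch below searches for a 7 × 7 ternary code grid (0: no orientation point, 1: orientation point to the right, 2: orientation point at the lower right) in which every 2 × 2 window of positioning points carries a distinct code, so any captured 2 × 2 block identifies its own location. The grid it finds is a hypothetical example, not the actual code table of the patent.

```python
def build_code_grid(n=7, symbols=(0, 1, 2)):
    """Search for an n x n grid over `symbols` whose 2x2 windows are all distinct.

    Each cell encodes the orientation-point state next to one positioning
    point (0: none, 1: right, 2: lower-right), mimicking the patent's scheme.
    This is an illustrative reconstruction, not the patent's actual table.
    """
    grid = [[None] * n for _ in range(n)]
    seen = set()  # 2x2 window codes already used

    def window_closed_by(r, c):
        # Filling proceeds row-major, so the 2x2 window whose bottom-right
        # corner is (r, c) becomes complete exactly when (r, c) is filled.
        if r >= 1 and c >= 1:
            yield (grid[r - 1][c - 1], grid[r - 1][c], grid[r][c - 1], grid[r][c])

    def fill(idx):
        if idx == n * n:
            return True
        r, c = divmod(idx, n)
        for s in symbols:
            grid[r][c] = s
            new = list(window_closed_by(r, c))
            if all(w not in seen for w in new):
                seen.update(new)
                if fill(idx + 1):
                    return True
                seen.difference_update(new)
        grid[r][c] = None
        return False

    return grid if fill(0) else None
```

With 3 symbols there are 81 possible 2 × 2 codes, comfortably more than the 36 windows of a 7 × 7 grid, so the backtracking search succeeds quickly.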
The invention also provides a method for measuring the pose of the plane coding target, which is implemented by the following steps:
step 1: connect a computer running the pose-solving software to a CCD camera through a data line; the CCD camera is movably attached to the object to be measured, and the coded target is fixed within the field of view of the CCD camera, whose minimum field of view must exceed 2 times the spacing between positioning points of the coded target; this constructs the pose measurement system of the planar coded target;
step 2: the CCD camera collects any part of target images;
step 3: remove all background clutter other than the target points from the acquired image, and solve for the center coordinates of the dots through image processing;
step 4: decode according to the positioning points and orientation points, determine the position of the captured portion within the target, and obtain the world coordinates of the positioning points;
step 5: obtain the pose of the object to be measured from the image coordinates of the positioning points and the corresponding world coordinates.
The invention is also characterized in that:
when the position and attitude of the object to be measured need to be determined, they can be obtained from whatever points of the target, local or complete, appear in the CCD camera's field of view in the coded-target-based visual pose measurement system constructed in step 1; the CCD camera is not required to capture every point of the target, so the object can move over a wider range. The conflict between large range and high resolution is thereby effectively resolved;
step 4 is specifically implemented as follows:
step 4.1: calculating the scaling coefficient and the rotation angle of the target image:
distinguish positioning points from orientation points by dot size, and compute the rotation and scaling parameters of the target image using a group of 4 positioning points in it. Because the spacing between positioning points is constant, an affine transformation can be established between a group of positioning points in the target image and any group of positioning points of the target template; the choice of group changes only the translation parameters, not the scaling coefficient or rotation angle. The affine transformation between the target image and the target template is:
x′ = s·cosθ·x - s·sinθ·y + tx;  y′ = s·sinθ·x + s·cosθ·y + ty   (1)
where (x′, y′) are the target template feature points (known parameters); (x, y) are the target image feature points, obtained by image processing; tx, ty are translation parameters that depend on the chosen template positioning points; s is the scaling coefficient; θ is the rotation angle;
Using the 4 positioning points in the target image and the corresponding positioning points in the target template, the affine transformation parameters are solved as follows.
Constructing a matrix Y by the characteristic points of the target template:
Y = [x1′, y1′, …, xn′, yn′]ᵀ   (2);
constructing a matrix X by the characteristic points of the target image:
    [ x1  -y1  1  0 ]
    [ y1   x1  0  1 ]
X = [ ⋮             ]   (3);
    [ xn  -yn  1  0 ]
    [ yn   xn  0  1 ]
the affine variation parameters are:
(a, b, tx, ty)ᵀ = (XᵀX)⁻¹XᵀY   (4);
Decomposing the first two entries of the affine parameter vector (a, b, tx, ty)ᵀ gives
scosθ = a;  ssinθ = b   (5);
Then
s = √(a² + b²)   (6);
θ = arctan(b/a)   (7);
Step 4.2: converting the target image into a target image which is consistent with the target template in size and same in direction:
multiplying each pixel in the target image by the following affine matrix according to the scaling coefficient and the rotation angle calculated in the step 4.1 to obtain a converted image:
[nx]   [ s·cosθ  -s·sinθ ] [x]
[ny] = [ s·sinθ   s·cosθ ] [y]

where s is the scaling coefficient, θ is the rotation angle, (x, y) is each pixel of the target image, and (nx, ny) is the corresponding pixel of the converted image;
through the operation, the converted target image is consistent with the target template in size, and the positioning points are the same in direction;
then, exploiting the directional feature of the target, the relative position of the positioning points and orientation points yields the rotation angle of the target image over the full 360° range;
step 4.3: matching the target image and the target template converted in the step 4.2 by using a normalized cross-correlation algorithm, positioning the position of the target image in the target template, and realizing decoding:
Let the pixel size of the target template I be M × N and that of the target image T be m × n. Select a sub-image I_{x,y} of pixel size m × n from the template I; the normalized cross-correlation value R(x, y) between the sub-image I_{x,y} and the target image T is defined as:
R(x, y) = Σ_{i,j} [I_{x,y}(i, j) - Ī_{x,y}]·[T(i, j) - T̄] / √( Σ_{i,j} [I_{x,y}(i, j) - Ī_{x,y}]² · Σ_{i,j} [T(i, j) - T̄]² )
in the formula: (i, j) is the coordinates of the pixel in the target image;
Sub-image I_{x,y} pixel average:
Ī_{x,y} = (1 / (m·n)) Σ_{i,j} I_{x,y}(i, j);
Target image T pixel average:
T̄ = (1 / (m·n)) Σ_{i,j} T(i, j);
Move the converted target image across the target template position by position; the position with the maximum normalized cross-correlation value is the position of the target image within the target template, completing the decoding.
The invention has the beneficial effects that:
(1) The coded target has a simple, compact design and can rotate through 360° in the plane. If an orientation point lies directly to the right of a positioning point it is coded '1'; if it lies at the lower right of the positioning point it is coded '2'; if no orientation point lies near the positioning point it is coded '0'. From the positional relation between the positioning points and orientation points, the direction of rotation can be determined effectively, and as long as part of the target pattern is captured, decoding yields its position within the original target.
(2) The coded target uses a group of circular dot arrays as target elements, comprising 7 × 7 positioning points and orientation points with directional features; the white positioning points and orientation points are printed in white matte ink, giving better imaging quality under different measurement conditions and effectively improving measurement accuracy.
(3) The pose measurement method based on the planar coded target can obtain the pose of the object to be measured over a relatively large range from local information of the coded target, achieving a large measurement range and high resolution at the same time.
(4) The designed pose calculation algorithm optimizes pose calculation parameters by adding an optimization equation, and improves pose measurement accuracy.
Drawings
FIG. 1 is a schematic diagram of the structure of the coding target of the present invention;
FIG. 2 is a schematic structural diagram of a coded target-based visual pose measurement system constructed in the coded target-based visual measurement method of the present invention;
fig. 3 is a schematic diagram of an imaging model involved in the visual coordinate measuring method based on the coding target of the present invention.
In the figures: 1, positioning point; 2, orientation point; 3, housing; 4, CCD camera; 5, data line; 6, computer; 7, object to be measured; 8, target bracket; 9, coded target.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention is a planar coded target whose structure is shown in figure 1, comprising a square housing 3. The working surface carries a coding unit comprising a 7 × 7 array of positioning points 1 and 26 orientation points 2 of identical size; the white positioning points 1 and orientation points 2 are both printed in white matte ink. Each positioning point 1 has a diameter of 7 mm and each orientation point 2 a diameter of 4 mm. The 7 × 7 positioning points 1 are arranged at equal intervals on the working surface, and the 26 orientation points 2 are distributed among the 49 positioning points 1 as follows: an orientation point 2 is set directly to the right of a positioning point 1, or at its lower right, or no orientation point 2 is set;
following common calibration target sizes, the invention adopts a 7 × 7 array; since 36 distinct codes are needed, at least 3 code symbols are required. Every orientation of the code must be unique, so the elements are designed with directionality, which greatly reduces the coding difficulty. If an orientation point 2 is set directly to the right of a positioning point 1, it is coded '1'; if the orientation point 2 is set at the lower right of the positioning point 1, it is coded '2'; if no orientation point 2 is set near the positioning point 1, it is coded '0'. As few dots as possible are used, reducing the processing difficulty;
the following code is generated:
(the 7 × 7 code table appears as a figure in the original patent)
The codes of any four adjacent points do not repeat, so each code is unique.
The invention relates to a pose measurement method applied to a plane coding target, which is implemented according to the following steps:
step 1: construct a pose measurement system of the planar coded target using the coded target, the computer 6 and the CCD camera 4:
the pose measurement system based on the plane coding target has the structure shown in fig. 2, and comprises a coding target 9 and a computer 6, wherein a measurement algorithm module is integrated in the computer 6; the computer 6 is connected with the CCD camera 4 through a CCD sensor data line 5, the CCD camera 4 is fixed on an object to be measured 7, and a coding target 9 is fixed on a target bracket 8, so that the target is ensured to be in a camera field of view, and the camera can acquire at least four points of the target.
In the above structure, the coded target is used for providing measurement parameters, the CCD camera 4 is used for providing a measurement reference, and the computer 6 is used for providing a measurement algorithm; all parts are combined with each other, so that pose measurement can be realized.
Before pose measurement, the known parameters include: the intrinsic parameters of the CCD camera 4; the coordinates of each positioning point on the coded target 9 in the target coordinate system; and the diameters of the positioning points 1 and orientation points 2 of the coded target 9. When the position and attitude of the object to be measured need to be determined, they can be obtained from whatever points of the target, local or complete, appear in the CCD camera's field of view; the camera is not required to capture every point of the target, so the object can move over a wider range, effectively resolving the conflict between large range and high resolution;
step 2: start the coded-target-based pose measurement system constructed in step 1, with at least four points imaged in the field of view as measured points; the computer 6 then controls the CCD camera 4 to acquire an image;
step 3: remove all background clutter other than the target points from the acquired image, and solve for the center coordinates of the dots through image processing;
the specific process of step 3 is as follows:
step 3.1: perform global threshold segmentation on the acquired target image and use area as the judgment criterion; this effectively removes boundary interference and noise and quickly yields the positioning dots and orientation dots.
Global threshold segmentation of the acquired target image uses an iterative threshold selection method, producing a binary image in which the target points are white and the background black, with boundary interference and noise still present.
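The iterative threshold selection mentioned above can be sketched as follows (a minimal isodata-style implementation; the patent does not give its exact iteration rule, so this is one common variant):

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative (isodata-style) global threshold selection: start from the
    mean grey level, then repeatedly set the threshold to the midpoint of
    the two class means until the threshold stabilises."""
    t = img.mean()
    while True:
        low, high = img[img <= t], img[img > t]
        # Guard against an empty class on degenerate images.
        m0 = low.mean() if low.size else t
        m1 = high.mean() if high.size else t
        t_new = 0.5 * (m0 + m1)
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```

Pixels above the returned threshold are treated as white target candidates, the rest as background.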
The area s of a target dot in the image lies between a and b, i.e. a < s < b. Operating on the globally thresholded image, white regions whose area satisfies a < s < b are kept as targets, while white regions with area greater than b or less than a are set to background, effectively removing boundary interference and noise.
Using the circularity index C = s / (π·r²), where s and r are the area and radius of a circular target point, regions with circularity index greater than 0.5 are judged to be positioning dots or orientation dots.
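A minimal sketch of the circularity test; the patent does not state how the radius r is measured, so here it is assumed, as one plausible choice, to be half the longer side of the region's bounding box:

```python
import numpy as np

def circularity(mask):
    """Circularity index C = s / (pi * r^2) for a binary region.

    s is the region area in pixels; r is taken here as half the longer
    side of the bounding box (an assumption; the patent leaves the
    definition of r unspecified). A filled disk scores near 1, while
    elongated noise such as boundary fragments scores near 0.
    """
    ys, xs = np.nonzero(mask)
    s = ys.size
    r = 0.5 * (max(ys.max() - ys.min(), xs.max() - xs.min()) + 1)
    return s / (np.pi * r * r)
```
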
Step 3.2: perform local threshold segmentation on the area near each positioning dot and orientation dot using the maximum between-class variance method, obtaining the positioning and orientation dots accurately.
Select a region of interest (ROI) around each positioning or orientation dot determined in step 3.1; for ease of computation the ROI is a rectangle centered on the circle center with side length 4 times the radius. Each positioning and orientation dot corresponds to one ROI, and each ROI of the originally acquired image is segmented with an accurate threshold computed by the maximum between-class variance method, guaranteeing the accuracy of the target segmentation.
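The maximum between-class variance (Otsu) threshold used for each ROI can be sketched in a few lines; this is the standard formulation, not code from the patent:

```python
import numpy as np

def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold for an 8-bit ROI.

    For each candidate level k, the between-class variance is
    (mu_T * omega(k) - mu(k))^2 / (omega(k) * (1 - omega(k)));
    the threshold maximising it separates dot from background."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))    # first moment up to level k
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # empty classes contribute nothing
    return int(np.argmax(sigma_b))
```
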
Step 3.3: solve for the center coordinates of the positioning and orientation dots using ellipse fitting.
Fit ellipses to the segmented edge points of the positioning and orientation dots by least squares to determine the ellipse centers, i.e. the center coordinates of the positioning and orientation dots.
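The least-squares ellipse-center computation of step 3.3 might look as follows; fixing the conic's constant term to remove the scale ambiguity is one of several possible normalizations and is an assumption here:

```python
import numpy as np

def ellipse_center(x, y):
    """Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y = 1
    (constant term fixed to remove scale ambiguity), then the ellipse
    center from the stationary point of the conic. A sketch of the
    center-extraction step; a full fit would also enforce the ellipse
    condition B^2 - 4AC < 0."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    # The center solves the gradient system [2A B; B 2C][xc, yc]^T = [-D, -E]^T.
    xc, yc = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return xc, yc
```
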
Step 4: decode according to the positioning points and orientation points, determine the position of the captured portion within the target, and obtain the world coordinates of the positioning points;
step 4 is specifically implemented as follows:
step 4.1: calculating the scaling coefficient and the rotation angle of the target image:
distinguish positioning points from orientation points by dot size, and compute the rotation and scaling parameters of the target image using a group of 4 positioning points in it. Because the spacing between positioning points is constant, an affine transformation can be established between a group of positioning points in the target image and any group of positioning points of the target template; the choice of group changes only the translation parameters, not the scaling coefficient or rotation angle. The affine transformation between the target image and the target template is:
x′ = s·cosθ·x - s·sinθ·y + tx;  y′ = s·sinθ·x + s·cosθ·y + ty   (1)
where (x′, y′) are the target template feature points (known parameters); (x, y) are the target image feature points, obtained by image processing; tx, ty are translation parameters that depend on the chosen template positioning points; s is the scaling coefficient; θ is the rotation angle;
Using the 4 positioning points in the target image and the corresponding positioning points in the target template, the affine transformation parameters are solved as follows.
Constructing a matrix Y by the characteristic points of the target template:
Y = [x1′, y1′, …, xn′, yn′]ᵀ   (2);
constructing a matrix X by the characteristic points of the target image:
    [ x1  -y1  1  0 ]
    [ y1   x1  0  1 ]
X = [ ⋮             ]   (3);
    [ xn  -yn  1  0 ]
    [ yn   xn  0  1 ]
the affine variation parameters are:
(a, b, tx, ty)ᵀ = (XᵀX)⁻¹XᵀY   (4);
Decomposing the first two entries of the affine parameter vector (a, b, tx, ty)ᵀ gives
scosθ = a;  ssinθ = b   (5);
Then
s = √(a² + b²)   (6);
θ = arctan(b/a)   (7);
Step 4.2: converting the target image into a target image which is consistent with the target template in size and same in direction:
multiply each pixel in the target image by the following affine matrix, built from the scaling coefficient and rotation angle calculated in step 4.1, to obtain the converted image:

[nx]   [ s·cosθ  -s·sinθ ] [x]
[ny] = [ s·sinθ   s·cosθ ] [y]

where s is the scaling coefficient, θ is the rotation angle, (x, y) is each pixel of the target image, and (nx, ny) is the corresponding pixel of the converted image;
through the operation, the converted target image is consistent with the target template in size, and the positioning points are the same in direction;
then, exploiting the directional feature of the target, the relative position of the positioning points and orientation points yields the rotation angle of the target image over the full 360° range. Since an orientation point may lie only at its specified position relative to a positioning point (directly to the right, or at the lower right), if the target image rotated by θ does not satisfy this requirement, the target image is further rotated by θ + 90°, θ + 180°, θ + 270° until it does.
Step 4.3: matching the target image and the target template converted in the step 4.2 by using a normalized cross-correlation algorithm, positioning the position of the target image in the target template, and realizing decoding:
Let the pixel size of the target template I be M × N and that of the target image T be m × n. Select a sub-image I_{x,y} of pixel size m × n from the template I; the normalized cross-correlation value R(x, y) between the sub-image I_{x,y} and the target image T is defined as:

R(x, y) = Σ_{i,j} [I_{x,y}(i, j) - Ī_{x,y}]·[T(i, j) - T̄] / √( Σ_{i,j} [I_{x,y}(i, j) - Ī_{x,y}]² · Σ_{i,j} [T(i, j) - T̄]² )

In the formula, (i, j) are the coordinates of a pixel within the target image;
Sub-image I_{x,y} pixel average:

Ī_{x,y} = (1 / (m·n)) Σ_{i,j} I_{x,y}(i, j);

Target image T pixel average:

T̄ = (1 / (m·n)) Σ_{i,j} T(i, j);
Move the converted target image across the target template position by position; the position with the maximum normalized cross-correlation value is the position of the target image within the target template, completing the decoding.
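Step 4.3 can be sketched directly from the NCC definition; this brute-force scan is illustrative and far slower than a production implementation:

```python
import numpy as np

def ncc(sub, T):
    """Normalized cross-correlation of two equal-sized patches."""
    a = sub - sub.mean()
    b = T - T.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(I, T):
    """Slide target image T over template I; return the offset with the
    largest NCC value, mirroring the decoding step 4.3."""
    m, n = T.shape
    best, best_xy = -2.0, (0, 0)
    for x in range(I.shape[0] - m + 1):
        for y in range(I.shape[1] - n + 1):
            r = ncc(I[x:x + m, y:y + n], T)
            if r > best:
                best, best_xy = r, (x, y)
    return best_xy, best
```

At the true position the correlation value reaches 1 for a noise-free match, which is why the maximum identifies the captured portion's location in the template.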
And 5: the pose of the target to be measured can be solved according to the image coordinates of the positioning points and the corresponding world coordinates, and the method is implemented as follows:
establish the camera three-dimensional coordinate system o-xyz with the perspective center of the CCD camera 4 as origin, the optical axis as the z axis, and the horizontal and vertical directions parallel to the CCD pixels as the x and y axes; establish the target space coordinate system P-x′y′z′ with the target center point P as origin, as shown in fig. 3;
if the intrinsic parameters of the CCD camera 4, such as the focal length and image center, are known, then given enough control points (xi′, yi′, zi′) and corresponding pixel coordinates (Ui, Vi), the rotation and translation matrices R, T can be obtained by solving equation (8);
the matrix T is a translation matrix of two coordinate systems, namely a translation matrix between a control point target coordinate system and a camera o-xyz coordinate system, and the physical meaning of the matrix T is the translation position relation between the origin of the target coordinate system and the origin of the camera coordinate system;
the target center is taken as the origin of the probe coordinate system, so the coordinates of the target center in the CCD camera 4 coordinate system, i.e. the translation matrix T, can be determined. With the target positioning points coplanar, the z′ coordinate of each space point is set to 0, converting equation (8) into equation (9); variable substitution then yields a linear equation (10) in the unknowns ai, and with more than four pairs of corresponding object and image points a least-squares solution for the ai can be obtained. The R, T matrix equation (11) is solved under orthogonality constraints: constraint functions (12)-(15) are established, and the optimal R, T matrices are iterated by Newton-Gauss iteration (16)-(19), uniquely determining the position and attitude of the camera relative to the target;
the specific algorithms involved are as follows:
Ui = f·(r1·xi′ + r2·yi′ + r3·zi′ + Tx) / (r7·xi′ + r8·yi′ + r9·zi′ + Tz)
Vi = f·(r4·xi′ + r5·yi′ + r6·zi′ + Ty) / (r7·xi′ + r8·yi′ + r9·zi′ + Tz)   (8)

With the coplanar target points satisfying zi′ = 0, this reduces to

Ui = f·(r1·xi′ + r2·yi′ + Tx) / (r7·xi′ + r8·yi′ + Tz)
Vi = f·(r4·xi′ + r5·yi′ + Ty) / (r7·xi′ + r8·yi′ + Tz)   (9)
In an ideal perspective imaging process this solution is feasible: the object and image point coordinates carry no error in any coordinate system and strictly obey the geometric correspondence, so the computed R satisfies the orthogonality constraints. In a real environment, however, factors such as image point extraction error, camera parameter calibration error and positional error between control points introduce errors into the spatial coordinates of corresponding object and image points; the resulting R matrix then fails to satisfy the orthogonality constraints, and the T matrix acquires a large error with it. The following constraint functions are therefore established.
From equation (8), error equations are established for each control point, and an objective function minimizing the sum of squared spatial coordinate errors is obtained; with the penalty factor M, the orthogonality constraints on R are incorporated, converting the constrained optimization objective function into an unconstrained one (equations (12)-(15), which appear as figures in the original patent).
the optimum value was searched by newton-gaussian method. The solution model is as follows:
Figure RE-GDA0002310895040000165
B = (f_a1, f_a2, f_b1, f_b2, …, f_aN, f_bN, f_p1, …, f_p6)   (17);
then the following iterative equation is present:
X(n+1) = X(n) - (Jᵀ·J)⁻¹·Jᵀ·B
X = (r1, r2, …, r9, Tx, Ty, Tz)ᵀ   (18);
In formulae (8) to (18): (xi′, yi′, zi′) are the coordinates of a marker point in the target coordinate system; (Ui, Vi) are the corresponding image point coordinates; R and T are the rotation and translation matrices between the target and camera coordinate systems; f is the known imaging focal length; ρ is a set coefficient; (Tx, Ty, Tz) is the expanded form of the translation matrix; when the positioning points are coplanar, (r1, r4, r7) is the expanded form of the corresponding column of the rotation matrix; ai are the intermediate variables of the substitution; M is a penalty factor.
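The coplanar case of step 5 (z′ = 0) can be sketched as a homography-based pose solve. This reconstruction assumes pixel coordinates already centered on the principal point, uses names of the author's own choosing, and omits the Gauss-Newton refinement of equations (16)-(18):

```python
import numpy as np

def planar_pose(obj_xy, img_uv, f):
    """Pose from coplanar target points (z' = 0): estimate the plane-to-image
    homography by DLT, then decompose it with the known focal length f into
    a rotation R and translation T. A sketch of the linear part of the
    method only; image coordinates are assumed shifted to the principal
    point, and the iterative refinement is left out."""
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # Homography = smallest-singular-vector solution of A h = 0.
    H = np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)
    K_inv = np.diag([1.0 / f, 1.0 / f, 1.0])
    M = K_inv @ H                       # proportional to [r1 | r2 | T]
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:                     # keep the target in front of the camera
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Re-orthogonalise R via SVD (noise makes r1, r2 only nearly orthonormal).
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

The SVD polish at the end plays the role of the orthogonality constraint on R discussed above, in closed form rather than by penalized iteration.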
The coding target and the pose measurement method applied to the plane coding target can break through the limitation of the previous visual pose measurement and realize large range and high precision of measurement.

Claims (6)

1. A planar coded target, characterized by comprising a housing (3) to which the coded target is fixedly connected; a plurality of coding units are arranged on the coded target, each coding unit comprising positioning points (1) and orientation points (2) arrayed in parallel, the positioning points (1) and orientation points (2) being distributed at equal intervals; the coded target is made of glass with a black background and white circles.
2. The planar coding target according to claim 1, wherein the black background is made of black matte ink, and the white circles are made of white matte ink, so as to facilitate image acquisition.
3. The planar coding target according to claim 2, wherein the coding unit comprises a 7 × 7 array of identical positioning points (1) and 26 identical orientation points (2); for each positioning point (1), either an orientation point (2) is set directly to its right, or an orientation point (2) is set at its lower right, or no orientation point (2) is set near it.
4. The planar coding target of claim 3, wherein the coding unit adopts the following coding mode:
if an orientation point (2) is set directly to the right of a positioning point (1), the positioning point is coded as '1'; if the orientation point (2) is set at the lower right of the positioning point (1), it is coded as '2'; if no orientation point (2) is set near the positioning point (1), it is coded as '0'; the following code is thereby generated:
(Coding table: image not reproduced in the source.)
the codes of any four adjacent points do not repeat, so each code is unique.
The minimum coding unit of the coding target is 4: the codes of any 4 adjacent positioning points differ, so as long as any portion of the target larger than the minimum coding unit is captured, the position of that portion in the original target can be obtained by decoding.
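The uniqueness property above (any 4 adjacent positioning points carry a distinct code) can be illustrated with a small sketch. The 2×2-window reading of "adjacent four points", the backtracking construction, and the resulting 7×7 grid contents are assumptions for illustration; the patent's actual code table image is not reproduced.

```python
def windows(grid):
    """All 2x2 windows of positioning-point codes, row-major."""
    h, w = len(grid), len(grid[0])
    return [(grid[r][c], grid[r][c + 1], grid[r + 1][c], grid[r + 1][c + 1])
            for r in range(h - 1) for c in range(w - 1)]

def is_uniquely_decodable(grid):
    """True if every 2x2 window is unique, so any captured patch covering
    at least 2x2 positioning points identifies its position in the target."""
    ws = windows(grid)
    return len(ws) == len(set(ws))

def locate(grid, patch):
    """Return the (row, col) where a 2x2 code patch sits in the full grid."""
    h, w = len(grid), len(grid[0])
    for r in range(h - 1):
        for c in range(w - 1):
            if (grid[r][c], grid[r][c + 1],
                    grid[r + 1][c], grid[r + 1][c + 1]) == patch:
                return (r, c)
    return None

def build_grid(h=7, w=7, digits=(0, 1, 2)):
    """Backtracking construction of a code grid with all-unique 2x2 windows."""
    grid = [[None] * w for _ in range(h)]
    used = set()

    def fill(i):
        if i == h * w:
            return True
        r, c = divmod(i, w)
        for d in digits:
            grid[r][c] = d
            win = None
            if r > 0 and c > 0:  # placing (r, c) completes window (r-1, c-1)
                win = (grid[r - 1][c - 1], grid[r - 1][c], grid[r][c - 1], d)
                if win in used:
                    continue
                used.add(win)
            if fill(i + 1):
                return True
            if win is not None:
                used.discard(win)
        grid[r][c] = None
        return False

    fill(0)
    return grid
```

With such a grid, decoding a captured patch reduces to looking up its 2×2 code window in the full target, exactly the localization step the claim describes.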
5. A pose measurement method applied to the planar coding target of any one of claims 1 to 4, characterized by comprising the following steps:
Step 1: connect a computer (6) running pose-solving software to a CCD camera (4) through a data line (5); movably connect the CCD camera (4) to the target to be measured (7); fix the coding target (9) in the field of view of the CCD camera (4), ensuring that the minimum field of view of the CCD camera (4) is more than 2 times the spacing between positioning points (1) of the coding target (9); this constitutes the pose measurement system of the planar coding target;
Step 2: the CCD camera (4) acquires an image of any part of the target;
Step 3: remove all background clutter other than the target points from the acquired image, and solve the center coordinates of the dots by image processing;
Step 4: decode according to the positioning points and orientation points, determine the position of the captured part within the target, and obtain the world coordinates of the positioning points;
Step 5: obtain the pose of the target to be measured from the image coordinates of the positioning points and the corresponding world coordinates.
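As an illustration of the dot-center extraction mentioned in step 3, the following minimal sketch labels connected white blobs in an already-binarized image and returns their pixel centroids. It is a stand-in for the actual image processing, which the patent does not detail; the function name is hypothetical.

```python
import numpy as np

def dot_centers(binary):
    """Centers of white dots: label 4-connected components of a binarized
    image with a stack-based flood fill, then return each blob's centroid."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    centers = []
    for r0 in range(h):
        for c0 in range(w):
            if binary[r0, c0] and not seen[r0, c0]:
                stack, pix = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pix.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and binary[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            stack.append((nr, nc))
                ys, xs = zip(*pix)
                centers.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centers
```

A production system would instead fit the dots subpixel-accurately (e.g. grayscale centroid or ellipse fitting), since center accuracy directly bounds the pose accuracy.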
6. The pose measurement method applied to the planar coding target according to claim 5, wherein step 4 is specifically implemented as follows:
Step 4.1: calculate the scaling coefficient and rotation angle of the target image:
the positioning points and orientation points are distinguished by dot size, and the rotation and scaling parameters of the target image are calculated from a group of 4 positioning points in the target image; because the distances between positioning points are all the same, an affine transformation can be established between this group of positioning points in the target image and any group of positioning points of the target template: only the translation parameters differ between groups, while the scaling coefficient and rotation angle are unaffected; the affine transformation equation between the target image and the target template is:
x' = s·cosθ·x - s·sinθ·y + tx
y' = s·sinθ·x + s·cosθ·y + ty    (1);
wherein (x', y') is a feature point of the target template and is a known parameter; (x, y) is the corresponding feature point of the target image, obtained by image processing; tx, ty are the translation parameters relating the selected group of template positioning points to the image; s is the scaling coefficient; θ is the rotation angle;
by using the 4 positioning points in the target image and the corresponding positioning points in the target template, the affine transformation parameters can be obtained by solving:
constructing a matrix Y from the feature points of the target template:
Y = [x1', y1', ..., xn', yn']ᵀ    (2);
constructing a matrix X from the feature points of the target image:
X = [[x1, -y1, 1, 0],
     [y1,  x1, 0, 1],
     ...,
     [xn, -yn, 1, 0],
     [yn,  xn, 0, 1]]    (3);
the affine transformation parameters are then:
A = (a, b, tx, ty)ᵀ = (Xᵀ·X)⁻¹·Xᵀ·Y    (4);
the first two entries of the affine transformation parameter vector A decompose as
s·cosθ = a;  s·sinθ = b    (5);
so that
s = √(a² + b²),  θ = arctan(b/a)    (6);
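The least-squares fit of equations (2) to (6) can be sketched as follows. The row layout of the design matrix is an assumption consistent with the decomposition s·cosθ = a, s·sinθ = b of equation (5), and the synthetic check values are hypothetical.

```python
import numpy as np

def similarity_from_points(img_pts, tpl_pts):
    """Least-squares fit of x' = a*x - b*y + tx, y' = b*x + a*y + ty,
    where a = s*cos(theta) and b = s*sin(theta) as in equation (5)."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(img_pts, tpl_pts):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    a, b, tx, ty = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), tx, ty   # s, theta, tx, ty

# Hypothetical check: template points built from image points with known
# s = 0.8, theta = 30 degrees, translation (5, -3), then recovered.
img = np.array([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)])
c, si = np.cos(np.pi / 6), np.sin(np.pi / 6)
tpl = 0.8 * img @ np.array([[c, -si], [si, c]]).T + (5.0, -3.0)
s, theta, tx, ty = similarity_from_points(img, tpl)
```

Using arctan2(b, a) rather than arctan(b/a) keeps the angle well-defined in all quadrants, which matters once the full 360-degree orientation is resolved in step 4.2.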
Step 4.2: convert the target image into an image consistent with the target template in size and direction:
according to the scaling coefficient and rotation angle calculated in step 4.1, multiply the coordinates of each pixel in the target image by the following affine matrix to obtain the converted image:
[nx, ny]ᵀ = s·[[cosθ, -sinθ], [sinθ, cosθ]]·[x, y]ᵀ    (7);
wherein s is the scaling coefficient, θ is the rotation angle, (x, y) is a pixel of the target image, and (nx, ny) is the corresponding pixel of the converted image;
after this operation, the converted target image is consistent with the target template in size, and the positioning points have the same orientation;
then, using the direction feature of the target, i.e. the relative position relation between the positioning points and the orientation points, the rotation angle of the target image is resolved within 360 degrees;
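A minimal sketch of the normalization in step 4.2, assuming the affine matrix is the fitted similarity s·R(θ) without its translation part (the exact matrix in the patent's formula image is not reproduced). For brevity it is applied to point coordinates rather than to resampled pixels.

```python
import numpy as np

def to_template_frame(pts, s, theta):
    """Map image points through s * R(theta), the fitted similarity without
    its translation part, so they match the template's scale and direction."""
    c, si = np.cos(theta), np.sin(theta)
    A = s * np.array([[c, -si], [si, c]])   # assumed form of the affine matrix
    return np.asarray(pts, dtype=float) @ A.T
```

A full image warp would additionally resample intensities (e.g. by inverse mapping with bilinear interpolation) instead of only transforming coordinates.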
Step 4.3: match the target image converted in step 4.2 against the target template using a normalized cross-correlation algorithm, locate the position of the target image within the target template, and thereby realize decoding:
let the pixel size of the target template I be M × N and the pixel size of the target image T be m × n. A sub-image Ix,y with pixel size m × n is selected from the template I; the normalized cross-correlation value R(x, y) between the sub-image Ix,y and the target image T is defined as:
R(x, y) = Σi Σj [Ix,y(i, j) - Īx,y]·[T(i, j) - T̄] / √( Σi Σj [Ix,y(i, j) - Īx,y]² · Σi Σj [T(i, j) - T̄]² )    (8);
in the formula: (i, j) are the coordinates of a pixel in the target image;
pixel average Īx,y of the sub-image Ix,y:
Īx,y = (1/(m·n))·Σi Σj Ix,y(i, j)    (9);
pixel average T̄ of the target image T:
T̄ = (1/(m·n))·Σi Σj T(i, j)    (10);
the converted target image is moved over the target template position by position; the location of the maximum normalized cross-correlation value is the position of the target image in the target template, and decoding is thereby realized.
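The normalized cross-correlation search of step 4.3 can be sketched as a brute-force scan (the test image is synthetic; a real implementation would typically use FFT-based or integral-image-accelerated correlation):

```python
import numpy as np

def ncc(sub, T):
    """Normalized cross-correlation of two equal-size patches, equation (8)."""
    a = sub - sub.mean()
    b = T - T.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def locate_in_template(I, T):
    """Slide the m x n target image T over every sub-image of the M x N
    template I; the argmax of R(x, y) locates the captured view."""
    (M, N), (m, n) = I.shape, T.shape
    best, best_pos = -2.0, (0, 0)
    for x in range(M - m + 1):
        for y in range(N - n + 1):
            r = ncc(I[x:x + m, y:y + n], T)
            if r > best:
                best, best_pos = r, (x, y)
    return best_pos, best
```

Because the image was already normalized to template scale and orientation in step 4.2, a plain translation search like this suffices; without that normalization the correlation would have to scan over rotation and scale as well.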
CN201910552555.XA 2019-06-25 2019-06-25 Planar coding target and pose measurement method thereof Active CN110763204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910552555.XA CN110763204B (en) 2019-06-25 2019-06-25 Planar coding target and pose measurement method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910552555.XA CN110763204B (en) 2019-06-25 2019-06-25 Planar coding target and pose measurement method thereof

Publications (2)

Publication Number Publication Date
CN110763204A true CN110763204A (en) 2020-02-07
CN110763204B CN110763204B (en) 2022-02-22

Family

ID=69329511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910552555.XA Active CN110763204B (en) 2019-06-25 2019-06-25 Planar coding target and pose measurement method thereof

Country Status (1)

Country Link
CN (1) CN110763204B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096191A (en) * 2020-12-23 2021-07-09 合肥工业大学 Intelligent calibration method for monocular camera based on coding plane target
CN113112550A (en) * 2020-12-23 2021-07-13 合肥工业大学 Coding plane target for calibrating internal and external parameters of camera and coding method thereof
CN113160329A (en) * 2020-12-23 2021-07-23 合肥工业大学 Coding plane target for camera calibration and decoding method thereof
CN113847868A (en) * 2021-08-05 2021-12-28 乐歌人体工学科技股份有限公司 Positioning method and system of material bearing device with rectangular support legs
CN114299172A (en) * 2021-12-31 2022-04-08 广东工业大学 Planar coding target for visual system and real-time pose measurement method thereof
CN114302173A (en) * 2021-12-31 2022-04-08 广东工业大学 Planar coding target and image splicing system and method applying same

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103743393A (en) * 2013-12-20 2014-04-23 西安交通大学 Pose measurement method of cylindrical target
CN103759716A (en) * 2014-01-14 2014-04-30 清华大学 Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm
JP5695821B2 (en) * 2009-08-31 2015-04-08 株式会社トプコン Color code target, color code discrimination device, and color code discrimination method
CN104864809A (en) * 2015-04-24 2015-08-26 南京航空航天大学 Vision-based position detection coding target and system
CN105352483A (en) * 2015-12-24 2016-02-24 吉林大学 Automotive body pose parameter detection system based on LED arrays
CN106247944A (en) * 2016-09-26 2016-12-21 西安理工大学 Code targets and vision coordinate measurement method based on Code targets
CN108399637A (en) * 2018-02-02 2018-08-14 上海巨幸机器人科技有限公司 A kind of coordinate method encoded with pattern
WO2018161555A1 (en) * 2017-03-06 2018-09-13 广州视源电子科技股份有限公司 Object pose detection method and device
CN108765489A (en) * 2018-05-29 2018-11-06 中国人民解放军63920部队 A kind of pose computational methods, system, medium and equipment based on combination target


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
孔韦韦,王炳和,李斌兵,雷阳,聂延晋,赵睿,鲁珊: "Image Fusion Technology: Theory and Methods Based on Multiresolution Non-Subsampling", 31 July 2015, Xidian University Press *
王中宇,李亚茹,郝仁杰,程银宝,江文松: "Monocular vision pose measurement algorithm based on point features", Infrared and Laser Engineering *
胡占义,雷成,吴福朝: "Some discussions on the P4P problem", Acta Automatica Sinica *
谷凤伟,高宏伟,姜月秋: "Research on a simple monocular vision pose measurement method", Electro-Optic Technology Application *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112550B (en) * 2020-12-23 2022-08-02 合肥工业大学 Coding plane target for calibrating internal and external parameters of camera and coding method thereof
CN113112550A (en) * 2020-12-23 2021-07-13 合肥工业大学 Coding plane target for calibrating internal and external parameters of camera and coding method thereof
CN113160329A (en) * 2020-12-23 2021-07-23 合肥工业大学 Coding plane target for camera calibration and decoding method thereof
CN113096191A (en) * 2020-12-23 2021-07-09 合肥工业大学 Intelligent calibration method for monocular camera based on coding plane target
CN113096191B (en) * 2020-12-23 2022-08-16 合肥工业大学 Intelligent calibration method for monocular camera based on coding plane target
CN113160329B (en) * 2020-12-23 2022-08-09 合肥工业大学 Coding plane target for camera calibration and decoding method thereof
CN113847868A (en) * 2021-08-05 2021-12-28 乐歌人体工学科技股份有限公司 Positioning method and system of material bearing device with rectangular support legs
CN113847868B (en) * 2021-08-05 2024-04-16 乐仓信息科技有限公司 Positioning method and system for material bearing device with rectangular support legs
CN114302173B (en) * 2021-12-31 2022-07-15 广东工业大学 Two-dimensional image splicing system and method for planar coding target
CN114299172B (en) * 2021-12-31 2022-07-08 广东工业大学 Planar coding target for visual system and real-time pose measurement method thereof
CN114302173A (en) * 2021-12-31 2022-04-08 广东工业大学 Planar coding target and image splicing system and method applying same
CN114299172A (en) * 2021-12-31 2022-04-08 广东工业大学 Planar coding target for visual system and real-time pose measurement method thereof
US11689737B1 (en) 2021-12-31 2023-06-27 Guangdong University Of Technology Plane coding target and image splicing system and method applying the same
US11699244B2 (en) 2021-12-31 2023-07-11 Guangdong University Of Technology Planar coding target for vision system and real-time pose measurement method thereof

Also Published As

Publication number Publication date
CN110763204B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN110763204B (en) Planar coding target and pose measurement method thereof
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN109146980B (en) Monocular vision based optimized depth extraction and passive distance measurement method
CN106651752B (en) Three-dimensional point cloud data registration method and splicing method
CN110517325B (en) Coordinate transformation and method and system for positioning objects around vehicle body through coordinate transformation
CN101839692B (en) Method for measuring three-dimensional position and stance of object with single camera
CN110146038B (en) Distributed monocular camera laser measuring device and method for assembly corner of cylindrical part
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN108594245A (en) A kind of object movement monitoring system and method
JP2000227309A (en) Three-dimensional position posture sensing device
Boochs et al. Increasing the accuracy of untaught robot positions by means of a multi-camera system
CN109341668B (en) Multi-camera measuring method based on refraction projection model and light beam tracking method
CN110672020A (en) Stand tree height measuring method based on monocular vision
CN110163918A (en) A kind of line-structured light scaling method based on projective geometry
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN112200203A (en) Matching method of weak correlation speckle images in oblique field of view
CN113119129A (en) Monocular distance measurement positioning method based on standard ball
CN111402330A (en) Laser line key point extraction method based on plane target
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN113505626A (en) Rapid three-dimensional fingerprint acquisition method and system
CN116740187A (en) Multi-camera combined calibration method without overlapping view fields
CN112365545A (en) Calibration method of laser radar and visible light camera based on large-plane composite target
CN110415292A (en) A kind of athletic posture vision measuring method of annulus mark and its application
CN109815966A (en) A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant