CN112270716A - Decoding and positioning method for artificial visual landmarks - Google Patents


Info

Publication number
CN112270716A
Authority
CN
China
Prior art keywords
coding
center
central
calibration ring
artificial visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011190911.7A
Other languages
Chinese (zh)
Other versions
CN112270716B (en)
Inventor
喻擎苍
费焕强
龚征绛
陈武
查杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011190911.7A priority Critical patent/CN112270716B/en
Publication of CN112270716A publication Critical patent/CN112270716A/en
Application granted granted Critical
Publication of CN112270716B publication Critical patent/CN112270716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging, or connected component labelling
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/90: Determination of colour characteristics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/169: characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object of the adaptive coding
    • H04N 19/17: the unit being an image region, e.g. an object
    • H04N 19/174: the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N 19/176: the region being a block, e.g. a macroblock
    • H04N 19/186: the unit being a colour or a chrominance component

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of robot positioning and aims to provide a decoding method and a positioning method for artificial visual landmarks, establishing a technical basis for vision-robot applications. The technical scheme is as follows: a method for decoding and positioning an artificial visual landmark, the artificial visual landmark comprising: 1) four or more coding pieces arranged in at least two rows and two columns, with every two rows and every two columns staggered; the coding pieces in each row are aligned horizontally, and the coding pieces in each column are aligned along an oblique direction; 2) the coding assembly on any coding piece comprises a central calibration ring and six coding blocks arranged around it; 3) the central calibration ring is a circular pattern in one of four colors: red, green, blue, or black; 4) the six coding blocks are six hexagonal patterns surrounding the central calibration ring, each in one of the four colors red, green, blue, or black.

Description

Decoding and positioning method for artificial visual landmarks
Technical Field
The invention relates to the field of robot positioning, in particular to a decoding method for artificial visual landmarks and an artificial-vision positioning method.
Background
Against the background of Industry 4.0, with intelligent manufacturing at its core, the industrial-robot market is growing explosively. In operations such as robot cooperation, kinematic parameter identification, and robot off-line programming, the first problem to be solved is robot positioning.
Providing information through landmarks is the main method by which robots position themselves. A landmark is a carrier that can provide external navigation and positioning information for the robot. Landmarks are classified into artificial visual landmarks and natural landmarks. Compared with natural landmarks, artificial visual landmarks have distinctive features, can be placed freely, and can embed specific information, making them more practical.
Decoding and locating specific artificial visual landmarks enables better robot error correction.
Disclosure of Invention
The invention aims to provide a decoding method and a positioning method for artificial visual landmarks, establishing a technical basis for vision-robot applications.
The technical scheme provided by the invention is as follows:
a method for decoding and positioning an artificial visual landmark, the artificial visual landmark comprising:
1) four or more coding pieces arranged in at least two rows and two columns, with every two rows and every two columns staggered; the coding pieces in each row are aligned horizontally, and the coding pieces in each column are aligned along an oblique direction;
2) the coding assembly on any coding piece comprises a central calibration ring and six coding blocks arranged around it;
3) the central calibration ring is a circular pattern in one of four colors: red, green, blue, or black;
4) the six coding blocks are six hexagonal patterns surrounding the central calibration ring, each in one of the four colors red, green, blue, or black;
the method is characterized in that: the decoding method of the artificial visual landmark comprises the following steps:
1) capturing an artificial visual landmark image with a camera, the center point of the image being the camera center point;
2) decoding the image obtained in the previous step;
the decoding in step 2) comprises:
(1) extracting a qualified artificial visual landmark image; a qualified image contains at least one complete coding-piece image;
(2) preprocessing the artificial visual landmark image: converting the RGB color space into the HSV color space and setting HSV value ranges that define the four colors black, red, green, and blue, thereby multi-valuing the image;
(3) extracting data from the artificial visual landmark image, including: extracting the contour-chain connected domains of the whole image, the pixel count and centroid of each connected domain, and the centroid coordinate of the central calibration ring closest to the camera center point together with the centroid coordinates of the six coding blocks surrounding it;
(4) identifying the codeword: the position of each coding block in each coding piece is defined, and the blocks are then decoded in order according to the decoding rule and the position sequence.
The RGB color space is converted into the HSV color space using the following formulas:
H = 60·(G − B)/(max − min), if max = R;
H = 120 + 60·(B − R)/(max − min), if max = G;
H = 240 + 60·(R − G)/(max − min), if max = B    (1)
S = (max − min)/max (S = 0 when max = 0)    (2)
V = max    (3)
in the formulas:
H is hue, S is saturation, V is brightness, R is red, G is green, and B is blue;
max is the largest of R, G, and B, and min is the smallest of R, G, and B;
when −60 < H < 60, red pixel points are extracted and labelled 1;
when 110 < H < 150, green pixel points are extracted and labelled 2;
when 180 < H < 260, blue pixel points are extracted and labelled 3;
when 0 ≤ V < 180, black pixel points are extracted and labelled 0.
A coordinate system is established on the qualified artificial visual landmark image so that the image lies in the first quadrant; all connected domains in the image are extracted, together with the pixel count and centroid of each, as follows:
the qualified artificial visual landmark image is processed with the Image Moments algorithm, and the pixel counts and centroids of all connected domains are extracted; the geometric moment formula of the Image Moments algorithm is:
m_pq = Σ_{x=a1..a2} Σ_{y=b1..b2} x^p · y^q · f(x, y), where f(x, y) = 1 for pixels belonging to the connected domain and 0 otherwise    (4)
The centroid coordinate of the central calibration ring closest to the camera center point and the centroid coordinates of the six coding blocks surrounding it are extracted as follows:
the central calibration ring closest to the camera center point is extracted by traversing the connected domains, selecting those with more pixels than the rest, and taking the difference between each selected centroid and the image coordinate origin; the connected domain with the smallest value is the central calibration ring closest to the camera center point, and its centroid coordinate is the ring's center coordinate;
the centroid coordinates of the six coding blocks surrounding the central calibration ring are extracted by taking the difference between the ring's centroid coordinate and the centroids of all other connected domains; the six connected domains with the smallest absolute values are the six coding blocks surrounding the ring.
The positions of the six coding blocks surrounding the central calibration ring are defined as follows:
(1) traverse the six coding blocks and compare them with the ring's centroid coordinate: the block with the smallest abscissa is marked position No. 4 and the block with the largest abscissa position No. 1;
(2) traverse the remaining four coding blocks and compare them with the ring's centroid coordinate: the block with the smallest abscissa and largest ordinate is marked position No. 5, and the block with the largest abscissa and smallest ordinate position No. 3;
(3) traverse the last two coding blocks: the block with the largest ordinate is position No. 0 and the block with the smallest ordinate is position No. 2.
The decoding rule is as follows: the coding components are extracted clockwise in position order 0, 1, 2, 3, 4, 5; the extracted black coding block is taken as the first bit and its position is recorded; the coding piece is then decoded clockwise in turn. The decoded coding form comprises two parts, color and position: the first 5 bits are the pixel colors of the coding blocks extracted clockwise, excluding the black block; the 6th bit is the pixel color of the central calibration ring; and the 8th bit is the position number of the black coding block. The color codes are: red 1 (−60 < H < 60), green 2 (110 < H < 150), blue 3 (180 < H < 260), and black 0 (0 ≤ V < 180).
The transformation from pixel coordinates to real coordinates is performed by fitting a quadric surface, with distortion correction carried out at the same time, so that the camera center point is located accurately, facilitating correction of the vision robot's motion errors;
the method comprises the following steps:
(1) processing the contour chains of the six hexagonal coding blocks around the central calibration ring closest to the camera center point, extracting the acute-angle vertex coordinates of each of the six blocks, and extracting the center line of each hexagonal coding block;
(2) obtaining six feature points through center-line fitting;
(3) performing quadric-surface fitting, including surface fitting in the abscissa direction and in the ordinate direction;
(4) applying the quadric-surface transformation to convert the image center coordinates into the camera-center real coordinates, thereby locating the camera center point.
The center line of each hexagonal coding block is extracted as follows:
(1) find the contour-chain pixel closest to the block's centroid coordinate and, starting from it, traverse half of the contour chain's pixels;
(2) on that half of the chain, find the pixel farthest from the block's centroid coordinate; this pixel is one acute-angle vertex for the fitting problem;
(3) traverse the other half of the contour chain, again starting from the pixel closest to the block's centroid coordinate;
(4) on that half of the chain, find the pixel farthest from the block's centroid coordinate; this pixel is the other acute-angle vertex for the fitting problem;
(5) connect the two acute-angle vertices from steps (2) and (4); the connecting line is the center line of the hexagonal coding block, i.e. the line joining the two points on each block's contour chain that lie farthest from the block's centroid.
The six feature points are obtained through center-line fitting as follows: after the acute-vertex coordinate data of the six coding blocks are obtained, least-squares fitting is applied to the data to fit the straight lines quickly; after fitting, each pair of adjacent lines intersects, and the six intersections are the six feature points, i.e. the pairwise intersections of the six coding blocks' center lines.
The quadric-surface fitting is completed by substituting the coordinates of the six feature points into the quadric-surface equation system, solving it, and then fitting with Matlab software.
The invention has the following beneficial effects: the method fits a quadric surface with existing software tools to correct camera distortion while quickly decoding and positioning the artificial visual landmark, providing a basis for correcting subsequent robot motion errors and a technical basis for vision-robot applications.
Drawings
Fig. 1 is a schematic diagram of a decoding position of a single coded slice where a center point of a camera is located.
Fig. 2 is a schematic diagram of acute vertex search.
FIG. 3 is a schematic diagram of line fitting.
Fig. 4 is a schematic diagram of a quadratic surface in the X direction.
Fig. 5 is a schematic view of a Y-direction quadric surface.
Fig. 6 is a schematic diagram of an artificial visual landmark (the X and Y coordinates of the coordinate system in the figure are real coordinates, in mm).
Fig. 7 is a diagram of the decoding positioning result.
Fig. 8 is a schematic view of the operation state of the camera.
Fig. 9 is a schematic diagram of a camera center point on an artificial visual landmark.
Fig. 10 is a schematic diagram of the origin of an artificial visual landmark image captured by the camera (the x and y coordinates of the image coordinate system in the schematic are pixel coordinates, in px).
FIG. 11 is a flowchart of the Image Moments algorithm.
FIG. 12 is an illustration of an artificial visual landmark according to an embodiment.
Fig. 13 is a qualified image of an artificial visual landmark acquired by a camera.
Fig. 14 is a graph of the X-direction quadric surface fitted for the artificial visual landmark shown in fig. 12.
Fig. 15 is a graph of the Y-direction quadric surface fitted for the artificial visual landmark shown in fig. 12.
Detailed Description
Several concepts are first defined:
1. Central calibration ring: a circular pattern in one of four colors: red, green, blue, or black.
2. Outer-ring coding block (coding block): one of six equally sized hexagonal patterns surrounding the central calibration ring, in one of the four colors red, green, blue, or black.
3. Coding piece: a pattern consisting of one central calibration ring and the six coding blocks surrounding it.
4. Coding assembly: the central calibration ring and six coding blocks within a coding piece, referred to collectively.
5. Artificial visual landmark: a ground pattern with distinctive features that can be recognized from the air and handled by image processing.
6. Connected domain: a region of same-color pixels in adjacent positions.
7. Contour chain: all same-color pixels along an image edge, connected in sequence, form a contour chain.
8. DFS: depth-first search, i.e. proceeding from a point as deep as possible, backtracking one step when no further move is possible, until a solution is found or every point has been visited.
9. Centroid: here, the centroid of a connected domain.
10. RGB color space: the most familiar color space; an image is represented by three channels, red (R), green (G), and blue (B).
11. HSV color space: widely used in image processing; it expresses a color's hue (H), saturation (S), and brightness (V).
12. Coding form: the representation of the code, eight bits in total, written as digits plus a "-" symbol.
13. Codeword: the first six bits of the coding form.
14. Traversal: visiting every node once in turn along a given search route.
15. Least-squares method: a mathematical optimization technique that obtains unknown data simply while minimizing the sum of squared errors between the fitted and actual data.
16. Quadric surface: a surface expressed by a ternary quadratic equation.
17. Camera center point: the intersection of the camera's optical axis with the artificial visual landmark; the camera center point (black dot) in fig. 8 is shown only schematically and is not actually displayed on the landmark.
18. Real coordinates: the coordinates of the artificial visual landmark in reality, in millimeters.
19. Pixel coordinates: the coordinates of the landmark image captured by the camera, in px.
20. Binary image: an image in which each pixel is black or white.
The invention is further described below with reference to the accompanying drawings.
The artificial visual landmark is formed by splicing 4 or more coding pieces. The 3×3 artificial visual landmark shown in fig. 6 is spliced from 9 coding pieces in three rows and three columns.
The premise of decoding and positioning is extracting the coding assembly of the coding piece where the camera center point lies; the steps are as follows:
(1) The camera is suspended directly above the artificial visual landmark to capture images. The landmark is spliced from 4 or more coding pieces, and as long as 1 or more complete coding pieces appear in a captured image, it is regarded as a successfully captured, qualified image. This image is used as the artificial visual landmark image; the image coordinate axes and origin (0,0) are set so that the image lies in the first quadrant, as shown in fig. 10.
(2) The artificial visual landmark image is preprocessed, i.e. multi-valued: the RGB color space is converted into the HSV color space and HSV value ranges are set to define the four colors black, red, green, and blue. When −60 < H < 60, red pixel points are extracted and labelled 1; when 110 < H < 150, green pixel points are extracted and labelled 2; when 180 < H < 260, blue pixel points are extracted and labelled 3; when 0 ≤ V < 180, black pixel points are extracted and labelled 0. Pixels of the same color are then integrated and segmented to form closed contour chains. The conversion from the RGB to the HSV color space uses formulas (1), (2), and (3), where max is the largest of R, G, and B and min the smallest; a code sketch follows the formulas.
H = 60·(G − B)/(max − min), if max = R;
H = 120 + 60·(B − R)/(max − min), if max = G;
H = 240 + 60·(R − G)/(max − min), if max = B    (1)
S = (max − min)/max (S = 0 when max = 0)    (2)
V = max    (3)
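For illustration only (this sketch is not part of the claimed method), the preprocessing can be expressed in Python as below; the array layout, the sentinel for achromatic pixels, and letting the black rule override dark chromatic pixels are assumptions of the sketch:

```python
import numpy as np

def multivalue(rgb):
    """Multi-value an RGB image into black(0)/red(1)/green(2)/blue(3) labels.

    `rgb` is an H x W x 3 float array in [0, 255]. H and V follow
    formulas (1)-(3); S is not needed by the stated thresholds.
    Pixels matching no range keep the label -1.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    d = np.where(mx > mn, mx - mn, 1.0)                 # guard against /0
    h = np.where(mx == r, 60.0 * (g - b) / d,           # hue near red
        np.where(mx == g, 120.0 + 60.0 * (b - r) / d,   # hue near green
                          240.0 + 60.0 * (r - g) / d))  # hue near blue
    h = np.where(mx > mn, h, 999.0)   # achromatic: sentinel outside all ranges
    v = mx                            # V = max, formula (3)
    label = np.full(mx.shape, -1, dtype=int)
    label[(h > -60) & (h < 60)] = 1   # red
    label[(h > 110) & (h < 150)] = 2  # green
    label[(h > 180) & (h < 260)] = 3  # blue
    label[v < 180] = 0                # black: low V overrides hue
    return label
```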
(3) Connected-domain extraction is then performed on the whole artificial visual landmark image, and the pixel count and centroid of every connected domain are extracted with the Image Moments algorithm; the algorithm's flowchart is shown in fig. 11. Its geometric moment formula is formula (4), where a1 and a2 are the smallest and largest abscissa values of the domain's pixels, b1 and b2 the smallest and largest ordinate values, x and y the abscissa and ordinate of a pixel, and p and q the orders of x and y; formula (5) gives the abscissa x0 of the connected-domain centroid and formula (6) its ordinate y0. A code sketch follows the formulas.
m_pq = Σ_{x=a1..a2} Σ_{y=b1..b2} x^p · y^q · f(x, y), where f(x, y) = 1 for pixels belonging to the connected domain and 0 otherwise    (4)
x0 = m10 / m00    (5)
y0 = m01 / m00    (6)
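As an illustrative sketch (assuming each connected domain is available as a boolean mask rather than a contour chain), the moment-based centroid of formulas (4)-(6) can be computed as:

```python
import numpy as np

def centroid(mask):
    """Centroid of one connected domain via geometric moments (formulas 4-6).

    `mask` is a boolean H x W array marking the domain's pixels;
    m00 doubles as the domain's pixel count.
    """
    ys, xs = np.nonzero(mask)   # pixel coordinates of the domain
    m00 = xs.size               # zeroth moment = pixel count
    m10 = xs.sum()              # first moment in x
    m01 = ys.sum()              # first moment in y
    return m10 / m00, m01 / m00, m00
```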
(4) The connected domains are traversed and those with more pixels than the rest are selected; the difference between each selected domain's centroid and the image center coordinate (320, 240) is computed, and the domain with the smallest value is the central calibration ring closest to the camera center point. The difference operation follows formula (7), where A and B are the centroids of two connected domains and (x1, y1), (x2, y2) are their coordinates. (In the designed artificial visual landmark, the central calibration ring has more connected-domain pixels than a hexagonal coding block, which guarantees the ring is found; the image captured by the camera is 640 pixels wide and 480 pixels high.)
d(A, B) = √((x1 − x2)² + (y1 − y2)²)    (7)
(5) The difference between the ring's centroid coordinate and the centroids of all other connected domains is computed per formula (7) and the absolute values taken; the six connected domains with the smallest values are the six outer-ring coding blocks around the central calibration ring. The coding piece where the camera center point lies is thus extracted; a sketch of steps (4) and (5) follows.
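A minimal sketch of steps (4) and (5), assuming the centroids and pixel counts of all connected domains have already been extracted; splitting ring candidates from block candidates at the median pixel count is an assumption, since the text only states that rings contain more pixels than blocks:

```python
import numpy as np

def locate_coding_piece(centroids, counts, image_center=(320, 240)):
    """Find the calibration ring nearest the camera center and its six blocks."""
    centroids = np.asarray(centroids, dtype=float)
    counts = np.asarray(counts)
    rings = np.where(counts > np.median(counts))[0]      # assumed ring/block split
    d_cam = np.linalg.norm(centroids - np.asarray(image_center), axis=1)
    ring = rings[np.argmin(d_cam[rings])]                # ring nearest the center
    d_ring = np.linalg.norm(centroids - centroids[ring], axis=1)  # formula (7)
    d_ring[ring] = np.inf                                # exclude the ring itself
    blocks = np.argsort(d_ring)[:6]                      # six nearest domains
    return ring, blocks
```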
Further, the codeword is identified and decoding is completed. The coding-block coordinate axes and origin are set as shown in fig. 1, and the positions of the coding blocks are defined as follows:
(1) Traverse the six coding blocks and compare them with the ring's centroid coordinate: the block with the smallest abscissa is marked position No. 4 and the block with the largest abscissa position No. 1.
(2) Traverse the remaining four coding blocks and compare them with the ring's centroid coordinate: the block with the smallest abscissa and largest ordinate is marked position No. 5, and the block with the largest abscissa and smallest ordinate position No. 3.
(3) Traverse the last two coding blocks: the block with the largest ordinate is position No. 0 and the block with the smallest ordinate is position No. 2.
(4) The coding components are then extracted in clockwise position order 0, 1, 2, 3, 4, 5, the extracted black coding block is taken as the first bit and its position recorded, and the coding piece is decoded clockwise in turn. The decoded coding form comprises two parts, color and position: the first 5 bits are the pixel colors of the coding blocks extracted clockwise, excluding the black block; the 6th bit is the pixel color of the central calibration ring; and the 8th bit is the position number of the black coding block. The decoded form in fig. 1 is 232211-5: the codeword is 232211 and the black coding block is at position No. 5. A sketch of the position assignment and decoding follows.
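For illustration (not the claimed implementation; encoding the joint "smallest abscissa and largest ordinate" rule as minimizing x − y is an assumption of the sketch, as is the presence of exactly one black block per piece):

```python
def assign_positions(block_xys):
    """Number six block centroids 0-5 per the position definitions above."""
    idx = list(range(6))
    xs = [p[0] for p in block_xys]
    ys = [p[1] for p in block_xys]
    pos = {4: min(idx, key=lambda i: xs[i]),      # smallest abscissa
           1: max(idx, key=lambda i: xs[i])}      # largest abscissa
    rest = [i for i in idx if i not in pos.values()]
    pos[5] = min(rest, key=lambda i: xs[i] - ys[i])   # small x, large y
    pos[3] = max(rest, key=lambda i: xs[i] - ys[i])   # large x, small y
    last = [i for i in rest if i not in (pos[5], pos[3])]
    pos[0] = max(last, key=lambda i: ys[i])           # largest ordinate
    pos[2] = min(last, key=lambda i: ys[i])           # smallest ordinate
    return pos                                        # position number -> index

def decode(pos_to_color, ring_color):
    """Decode one coding piece into the form 'cccccc-p', e.g. '232211-5'."""
    colors = [pos_to_color[p] for p in range(6)]   # clockwise order 0..5
    k = colors.index(0)                            # position of the black block
    word = colors[k + 1:] + colors[:k]             # the 5 colors after black
    return "".join(map(str, word + [ring_color])) + "-" + str(k)
```

With the fig. 1 piece, decode({0: 2, 1: 3, 2: 2, 3: 2, 4: 1, 5: 0}, 1) returns "232211-5", matching the result above.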
Furthermore, to facilitate correction of the vision robot's motion errors, the camera center point must be located. Because the camera lens is distorted, the distortion is corrected so that the located real coordinate of the camera center point is not inaccurate. Coordinates on the artificial visual landmark image are in pixel units while the real coordinates of the camera center point are in millimeters; the transformation from pixel coordinates to real coordinates can be realized by fitting a quadric surface, which simultaneously corrects the distortion and achieves accurate positioning.
A quadric fit requires six feature points. The feature points are obtained through center-line fitting; extracting a center line mainly means extracting the acute-angle vertex coordinates of a hexagonal coding block by processing its contour chain, as follows (a code sketch follows the four steps):
(1) Find the pixel point g closest to the block's centroid coordinate and, starting from g, traverse half of the contour chain's pixels.
(2) On that half of the chain, find the pixel point h farthest from the block's centroid coordinate; h is one acute-angle vertex for the fitting problem.
(3) Traverse the other half of the contour chain, again starting from g.
(4) On that half of the chain, find the pixel point k farthest from the block's centroid coordinate; k is the other acute-angle vertex for the fitting problem, as shown in fig. 3.
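A sketch of this acute-vertex search, assuming the contour chain is an ordered (N, 2) array of pixel coordinates:

```python
import numpy as np

def acute_vertices(chain, cxy):
    """Two acute-angle vertices of one hexagonal block per steps (1)-(4)."""
    chain = np.asarray(chain, dtype=float)
    d = np.linalg.norm(chain - np.asarray(cxy, dtype=float), axis=1)
    g = int(np.argmin(d))                       # start: pixel nearest centroid
    order = np.roll(np.arange(len(chain)), -g)  # walk the chain from g
    half = len(chain) // 2
    first, second = order[:half], order[half:]
    h = first[np.argmax(d[first])]              # farthest point, first half
    k = second[np.argmax(d[second])]            # farthest point, second half
    return chain[h], chain[k]                   # the center line's endpoints
```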
The acute-vertex search above is run repeatedly to obtain the coordinate data (x and y) of the six coding blocks' acute vertices, and least-squares fitting is applied to the data according to formula (8), where a and b are the coefficients of y = a + b·x; this fits the straight lines quickly. After fitting, each pair of adjacent lines intersects, and the six intersections are the sought feature points; the result is shown in fig. 3. A sketch of the line fit and intersection follows the formula.
b = (n·Σ x_i·y_i − Σ x_i · Σ y_i) / (n·Σ x_i² − (Σ x_i)²), a = ȳ − b·x̄    (8)
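A minimal sketch of the formula (8) line fit and the pairwise intersection (vertical lines, whose slope is undefined in this form, are not handled):

```python
import numpy as np

def fit_line(pts):
    """Least-squares fit of y = a + b*x per formula (8); `pts` is (N, 2)."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    n = len(x)
    b = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x * x).sum() - x.sum() ** 2)
    a = y.mean() - b * x.mean()
    return a, b

def intersect(line1, line2):
    """Intersection of y = a1 + b1*x and y = a2 + b2*x (non-parallel lines)."""
    (a1, b1), (a2, b2) = line1, line2
    x = (a2 - a1) / (b1 - b2)
    return x, a1 + b1 * x
```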
Quadric-surface fitting is then performed. The general quadric equation is formula (9); considering that the coordinate error must be adjusted while the camera's height Z above the ground stays constant, formula (9) reduces to formula (10), from which formulas (11) and (12) follow. The x and y coordinates of the six feature points are not all equal and are linearly independent, i.e. the system has full rank, so formulas (11) and (12) have solutions. Here x and y are feature-point pixel coordinates, and X and Y are feature-point real coordinates. The coordinates of the six feature points are substituted into formula (11), the system is solved via formula (12), and the quadric surface is obtained by fitting with Matlab software; a code sketch follows the formulas.
a·x² + b·y² + c·Z² + d·x·y + e·x·Z + f·y·Z + g·x + h·y + i·Z + j = 0    (9)
X = a1·x² + b1·y² + c1·x·y + d1·x + e1·y + f1    (10)
X_i = a1·x_i² + b1·y_i² + c1·x_i·y_i + d1·x_i + e1·y_i + f1, i = 1, …, 6    (11)
[x_i², y_i², x_i·y_i, x_i, y_i, 1]·[a1, b1, c1, d1, e1, f1]^T = X_i, i = 1, …, 6 (matrix form of the six-equation system)    (12)
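As an illustrative sketch of the six-point solve (using numpy here rather than the Matlab workflow named above):

```python
import numpy as np

def fit_quadric(pix, real):
    """Solve formula (10), X = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.

    `pix` is a (6, 2) array of feature-point pixel coordinates (x, y) and
    `real` the six corresponding real coordinates along one axis; the 6 x 6
    system of formulas (11)-(12) is full rank for points in general position.
    """
    pix = np.asarray(pix, dtype=float)
    x, y = pix[:, 0], pix[:, 1]
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones(6)])
    return np.linalg.solve(A, np.asarray(real, dtype=float))  # a1..f1

def to_real(coeff, xy):
    """Evaluate the fitted surface at one pixel point, e.g. the image center."""
    x, y = xy
    return coeff @ np.array([x * x, y * y, x * y, x, y, 1.0])
```

Fitting one surface per axis and evaluating both at the image center, e.g. to_real(coeff_x, (320, 240)), reproduces the conversion used in the examples below.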
Further, the pixel coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), (x6, y6) of the six feature points and the real coordinates X1, X2, X3, X4, X5, X6 are substituted into the equation system and solved for the unknowns a1, b1, c1, d1, e1, f1, completing the quadric-surface fit in the abscissa direction. Similarly, for the ordinate direction, the same pixel coordinates and the real coordinates Y1, Y2, Y3, Y4, Y5, Y6 are solved to obtain the second surface; here (X1, Y1), …, (X6, Y6) are real coordinates, measured with a micrometer and accurate to three decimal places. The pixel unit is px, and the unit at the time of measurement is cm.
Furthermore, the fitted quadric surfaces correct the distortion produced by the camera, saving complex steps, and realize the conversion from pixel coordinates to real coordinates. The real coordinate of the camera center point is obtained by this conversion, the camera center point is located, and a basis is provided for correcting subsequent vision-robot motion errors.
Further, an example illustrates how much error exists between the camera-center real coordinates obtained through the quadric surfaces and the actual camera-center coordinates on the artificial visual landmark.
The pixel coordinates of the six feature points used in the example of the artificial visual landmark (shown in fig. 6) are (327, 229), (352, 244), (352, 273), (325, 288), (300, 272), (300, 243). The real coordinates of the six feature points are (13.546, 9.826), (14.516, 10.364), (14.516, 11.482), (13.546, 12.046), (12.528, 11.482), (12.528, 10.364); the quadric surfaces in the X and Y directions are fitted as shown in figs. 4 and 5.
The camera, positioned 10 cm above the artificial visual landmark, is first moved horizontally along a straight line by 1.5 cm, as shown in fig. 8; the image center coordinates (320, 240) are transformed by the quadric surfaces into the camera-center real coordinates (12.6452, 10.7187), as shown in fig. 7. After the camera is moved horizontally by 1.5 cm along the same line a second time, the camera-center real coordinates are (12.7775, 11.1262); the two moves differ by 1.323 mm in abscissa and by 4.075 mm in ordinate. After a third 1.5 cm horizontal move along the same line, the predicted camera-center real coordinate is (12.9098, 11.5337) while the measured one is (130.015, 115.908), an error of 0.917 mm in the X (abscissa) direction and 0.571 mm in the Y (ordinate) direction.
Furthermore, by correcting the camera distortion, the quadric-surface model meets the precision required for subsequent vision-robot motion-error correction, laying a foundation for later work.
The decoding and positioning steps are further described below through a specific embodiment.
The artificial visual landmark used is shown in fig. 12, and a qualified image of it captured by the camera is shown in fig. 13; as can be seen, the image in fig. 13 contains only one complete coding piece.
The decoded coding form is 321221-4, and the codeword is 321221.
The real coordinates of the six feature points are (8.064, 5.158), (8.574, 5.432), (8.574, 5.982), (8.064, 4), (7.604, 5.982), (7.604, 5.432). The pixel coordinates of the six feature points obtained by line fitting are (301, 129), (383, 173), (386, 264), (306, 312), (223, 268), (220, 177).
The pixel coordinates and real coordinates of the six feature points are substituted into formulas (11) and (12) and solved with Matlab software, giving the quadric surfaces shown in figs. 14 and 15. The image center coordinates (320, 240) are transformed by the quadric surfaces into the camera-center real coordinates (−5.3674, 137.5978), thereby accurately locating the camera center point.

Claims (10)

1. A method for decoding and positioning an artificial visual landmark, the artificial visual landmark comprising:
1) four or more coding pieces arranged in at least two rows and two columns, with every two rows and every two columns staggered; the coding pieces in each row are aligned horizontally, and the coding pieces in each column are aligned along an oblique direction;
2) the coding assembly on any coding piece comprises a central calibration ring and six coding blocks arranged around it;
3) the central calibration ring is a circular pattern in one of four colors: red, green, blue, or black;
4) the six coding blocks are six hexagonal patterns surrounding the central calibration ring, each in one of the four colors red, green, blue, or black;
the method is characterized in that: the decoding method of the artificial visual landmark comprises the following steps:
1) capturing an artificial visual landmark image with a camera, the center point of the image being the camera center point;
2) decoding the image obtained in the previous step;
the decoding in step 2) comprises:
(1) extracting a qualified artificial visual landmark image; a qualified image contains at least one complete coding-piece image;
(2) preprocessing the artificial visual landmark image: converting the RGB color space into the HSV color space and setting HSV value ranges that define the four colors black, red, green, and blue, thereby multi-valuing the image;
(3) extracting data from the artificial visual landmark image, including: extracting the contour-chain connected domains of the whole image, the pixel count and centroid of each connected domain, and the centroid coordinate of the central calibration ring closest to the camera center point together with the centroid coordinates of the six coding blocks surrounding it;
(4) identifying the codeword: the position of each coding block in each coding piece is defined, and the blocks are then decoded in order according to the decoding rule and the position sequence.
2. The method for decoding and positioning an artificial visual landmark according to claim 1, wherein the RGB color space is converted into the HSV color space using the following formulas:
H = 60·(G − B)/(max − min), if max = R;
H = 120 + 60·(B − R)/(max − min), if max = G;
H = 240 + 60·(R − G)/(max − min), if max = B    (1)
S = (max − min)/max (S = 0 when max = 0)    (2)
V = max    (3)
in the formulas:
H is hue, S is saturation, V is brightness, R is red, G is green, and B is blue;
max is the largest of R, G, and B, and min is the smallest of R, G, and B;
when −60 < H < 60, red pixel points are extracted and labelled 1;
when 110 < H < 150, green pixel points are extracted and labelled 2;
when 180 < H < 260, blue pixel points are extracted and labelled 3;
when 0 ≤ V < 180, black pixel points are extracted and labelled 0.
3. The method for decoding and positioning an artificial visual landmark according to claim 2, wherein:
a coordinate system is established on the qualified artificial visual landmark image so that the image lies in the first quadrant; all connected domains in the image are extracted, together with the pixel count and centroid of each, as follows:
the qualified artificial visual landmark image is processed with the Image Moments algorithm, and the pixel counts and centroids of all connected domains are extracted.
4. The method for decoding and positioning an artificial visual landmark according to claim 3, wherein the centroid coordinate of the central calibration ring closest to the camera center point and the centroid coordinates of the six coding blocks surrounding it are extracted as follows:
the central calibration ring closest to the camera center point is extracted by traversing the connected domains, selecting those with more pixels than the rest, and taking the difference between each selected centroid and the image coordinate origin; the connected domain with the smallest value is the central calibration ring closest to the camera center point, and its centroid coordinate is the ring's center coordinate;
the centroid coordinates of the six coding blocks surrounding the central calibration ring are extracted by taking the difference between the ring's centroid coordinate and the centroids of all other connected domains; the six connected domains with the smallest absolute values are the six coding blocks surrounding the ring.
5. The method of claim 4, wherein the positions of the six coding blocks surrounding the central calibration ring are defined as follows:
(1) traverse the six coding blocks and compare them with the ring's centroid coordinate: the block with the smallest abscissa is marked position No. 4 and the block with the largest abscissa position No. 1;
(2) traverse the remaining four coding blocks and compare them with the ring's centroid coordinate: the block with the smallest abscissa and largest ordinate is marked position No. 5, and the block with the largest abscissa and smallest ordinate position No. 3;
(3) traverse the last two coding blocks: the block with the largest ordinate is position No. 0 and the block with the smallest ordinate is position No. 2.
6. The method for decoding and positioning an artificial visual landmark according to claim 5, wherein the decoding rule is as follows: the coding components are extracted clockwise in position order 0, 1, 2, 3, 4, 5; the extracted black coding block is taken as the first bit and its position is recorded; the coding piece is then decoded clockwise in turn. The decoded coding form comprises two parts, color and position: the first 5 bits are the pixel colors of the coding blocks extracted clockwise, excluding the black block; the 6th bit is the pixel color of the central calibration ring; and the 8th bit is the position number of the black coding block. The color codes are: red 1 (−60 < H < 60), green 2 (110 < H < 150), blue 3 (180 < H < 260), and black 0 (0 ≤ V < 180).
7. The method for decoding and positioning an artificial visual landmark according to claim 1, wherein the transformation from pixel coordinates to real coordinates is performed by fitting a quadric surface, with distortion correction carried out at the same time, so that the camera center point is located accurately, facilitating error correction of the mechanical arm;
the method comprises the following steps:
(1) processing the contour chains of the six hexagonal coding blocks around the central calibration ring closest to the camera center point, extracting the acute-angle vertex coordinates of each of the six blocks, and extracting the center line of each hexagonal coding block;
(2) obtaining six feature points through center-line fitting;
(3) performing quadric-surface fitting, including surface fitting in the abscissa direction and in the ordinate direction;
(4) applying the quadric-surface transformation to convert the image center coordinates into the camera-center real coordinates, thereby locating the camera center point.
8. The method for decoding and positioning an artificial visual landmark according to claim 7, wherein the center line of each hexagonal coding block is extracted as follows:
(1) find the contour-chain pixel closest to the block's centroid coordinate and, starting from it, traverse half of the contour chain's pixels;
(2) on that half of the chain, find the pixel farthest from the block's centroid coordinate; this pixel is one acute-angle vertex for the fitting problem;
(3) traverse the other half of the contour chain, again starting from the pixel closest to the block's centroid coordinate;
(4) on that half of the chain, find the pixel farthest from the block's centroid coordinate; this pixel is the other acute-angle vertex for the fitting problem;
(5) connect the two acute-angle vertices from steps (2) and (4) to form the center line of the hexagonal coding block.
9. The method for decoding and positioning an artificial visual landmark according to claim 8, wherein the six feature points are obtained through center-line fitting as follows: after the acute-vertex coordinate data of the six coding blocks are obtained, least-squares fitting is applied to the data to fit the straight lines quickly; after fitting, each pair of adjacent lines intersects, and the six intersections are the six feature points, i.e. the pairwise intersections of the six coding blocks' center lines.
10. The method for decoding and positioning an artificial visual landmark according to claim 9, wherein the quadric-surface fitting is completed by substituting the coordinates of the six feature points into the quadric-surface equation system, solving it, and then fitting with Matlab software.
CN202011190911.7A 2020-10-30 2020-10-30 Decoding and positioning method for artificial visual landmarks Active CN112270716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011190911.7A CN112270716B (en) 2020-10-30 2020-10-30 Decoding and positioning method for artificial visual landmarks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011190911.7A CN112270716B (en) 2020-10-30 2020-10-30 Decoding and positioning method for artificial visual landmarks

Publications (2)

Publication Number Publication Date
CN112270716A true CN112270716A (en) 2021-01-26
CN112270716B CN112270716B (en) 2024-01-05

Family

ID=74345754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011190911.7A Active CN112270716B (en) 2020-10-30 2020-10-30 Decoding and positioning method for artificial visual landmarks

Country Status (1)

Country Link
CN (1) CN112270716B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052903A (en) * 2021-03-17 2021-06-29 浙江大学 Vision and radar fusion positioning method for mobile robot
CN114222143A (en) * 2021-12-04 2022-03-22 东南大学 Encoding and decoding mode based on tiny images

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600648A (en) * 2016-12-06 2017-04-26 合肥工业大学 Stereo coding target for calibrating internal parameter and distortion coefficient of camera and calibration method thereof
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN109739237A (en) * 2019-01-09 2019-05-10 华南理工大学 A kind of AGV vision guided navigation and localization method based on novel coding mark
CN109814562A (en) * 2019-01-28 2019-05-28 安徽师范大学 A kind of AGV localization method of multisensor
CN110472451A (en) * 2019-07-05 2019-11-19 南京航空航天大学 A kind of artificial landmark and calculation method towards AGV positioning based on monocular camera
CN111197984A (en) * 2020-01-15 2020-05-26 重庆邮电大学 Vision-inertial motion estimation method based on environmental constraint
CN111427360A (en) * 2020-04-20 2020-07-17 珠海市一微半导体有限公司 Map construction method based on landmark positioning, robot and robot navigation system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600648A (en) * 2016-12-06 2017-04-26 合肥工业大学 Stereo coding target for calibrating internal parameter and distortion coefficient of camera and calibration method thereof
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN109739237A (en) * 2019-01-09 2019-05-10 华南理工大学 A kind of AGV vision guided navigation and localization method based on novel coding mark
CN109814562A (en) * 2019-01-28 2019-05-28 安徽师范大学 A kind of AGV localization method of multisensor
CN110472451A (en) * 2019-07-05 2019-11-19 南京航空航天大学 A kind of artificial landmark and calculation method towards AGV positioning based on monocular camera
CN111197984A (en) * 2020-01-15 2020-05-26 重庆邮电大学 Vision-inertial motion estimation method based on environmental constraint
CN111427360A (en) * 2020-04-20 2020-07-17 珠海市一微半导体有限公司 Map construction method based on landmark positioning, robot and robot navigation system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姜海涛; 田国会; 薛英花; 李荣宽: "Design, recognition, localization and application of a new artificial landmark", Journal of Shandong University (Engineering Science), no. 02
李俊杰; 黄翔; 李泷杲; 曾琪; 主逵: "Mobile robot localization and adjustment technology based on artificial landmarks", Aeronautical Manufacturing Technology, no. 05

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052903A (en) * 2021-03-17 2021-06-29 浙江大学 Vision and radar fusion positioning method for mobile robot
CN113052903B (en) * 2021-03-17 2023-03-10 浙江大学 Vision and radar fusion positioning method for mobile robot
CN114222143A (en) * 2021-12-04 2022-03-22 东南大学 Encoding and decoding mode based on tiny images
CN114222143B (en) * 2021-12-04 2024-02-06 东南大学 Encoding and decoding method based on tiny image

Also Published As

Publication number Publication date
CN112270716B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN112132907B (en) Camera calibration method and device, electronic equipment and storage medium
CN112270716B (en) Decoding and positioning method for artificial visual landmarks
CN111223133A (en) Registration method of heterogeneous images
CN112347882A (en) Intelligent sorting control method and intelligent sorting control system
CN112560704B (en) Visual identification method and system for multi-feature fusion
CN103049731A (en) Decoding method for point-distributed color coding marks
CN110238820A (en) Hand and eye calibrating method based on characteristic point
CN110202560A (en) A kind of hand and eye calibrating method based on single feature point
CN113012096B (en) Display screen sub-pixel positioning and brightness extraction method, device and storage medium
CN111524195A (en) Camera calibration method in positioning of cutting head of heading machine
CN111553948A (en) Heading machine cutting head positioning system and method based on double tracers
CN112184825B (en) Calibration plate and calibration method
CN110838146A (en) Homonymy point matching method, system, device and medium for coplanar cross-ratio constraint
CN114299172B (en) Planar coding target for visual system and real-time pose measurement method thereof
CN113421311A (en) Regular hexagon coding mark and coding method thereof
CN114648588A (en) Lens calibration and correction method based on neural network
CN112270715B (en) Artificial visual landmark and coding method thereof
CN112734843B (en) Monocular 6D pose estimation method based on regular dodecahedron
CN113129394B (en) Parallelogram coding mark based on region division coding and coding method thereof
CN113188524B (en) Parallelogram coding sign based on graphic geometric relation and coding method thereof
CN110363127A (en) Robot identifies the method with positioning to workpiece key point
CN113112548B (en) Rapid calibration method for internal and external parameters of binocular camera based on coded three-dimensional target
CN113298880B (en) Camera calibration board, camera calibration method and device
CN113834488B (en) Robot space attitude calculation method based on remote identification of structured light array

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant