CN113487675B - Rapid feature point detection method for incomplete checkerboard cooperation icon - Google Patents

Rapid feature point detection method for incomplete checkerboard cooperation icon

Info

Publication number
CN113487675B
CN113487675B
Authority
CN
China
Prior art keywords
checkerboard
feature points
points
feature
point
Prior art date
Legal status
Active
Application number
CN202110827007.0A
Other languages
Chinese (zh)
Other versions
CN113487675A
Inventor
刘吉龙
王惠林
冯涛
张文博
吴凡
谢雨婷
Current Assignee
Xi'an Institute of Applied Optics
Original Assignee
Xi'an Institute of Applied Optics
Priority date
Application filed by Xi'an Institute of Applied Optics
Priority to CN202110827007.0A priority Critical patent/CN113487675B/en
Publication of CN113487675A publication Critical patent/CN113487675A/en
Application granted granted Critical
Publication of CN113487675B publication Critical patent/CN113487675B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of computer vision camera position calibration and pose estimation, and particularly relates to a rapid feature point detection method for incomplete checkerboard cooperation icons. When checkerboard feature points are lost, the method performs posterior detection on the initially detected checkerboard feature points to obtain posterior feature points located on the checkerboard edges, estimates the homography matrix, and then performs backward positioning and position matching on the unknown feature points, thereby establishing a one-to-one correspondence between the pixel coordinates of all unknown feature points and their corresponding physical coordinates. The scheme has the following beneficial effects: it establishes the one-to-one correspondence between pixel coordinates and world coordinates when feature points are lost, can detect the feature points of both complete and incomplete checkerboards, and has strong robustness and strong resistance to environmental interference.

Description

Rapid feature point detection method for incomplete checkerboard cooperation icon
Technical Field
The invention belongs to the technical field of computer vision camera position calibration and pose estimation, and particularly relates to a rapid feature point detection method for incomplete checkerboard cooperative icons.
Background
The checkerboard is used as a common artificial cooperation image, and is widely applied to calibration of internal and external parameters of a camera and estimation of position and posture. The detection of the checkerboard feature points (corner points) is an important step of camera calibration and pose estimation, and provides necessary data information for the camera calibration and pose estimation.
In general, the purpose of checkerboard feature point detection is to obtain the pixel coordinates of all checkerboard feature points and to place the detected pixel coordinates in one-to-one correspondence with their physical coordinates in a world coordinate system. Only after this one-to-one correspondence is established can camera calibration or pose estimation equations be set up from the camera imaging model and the camera's intrinsic and extrinsic parameters, or its position and attitude, be obtained by calculation.
Currently, there are many checkerboard feature point detection methods, the most typical being those provided by the MATLAB and OpenCV toolkits. Both have their own disadvantages: the former has high computational complexity and long computation time, while the latter requires the number of checkerboard feature points to be known in advance and offers a low degree of automation. In addition, researchers have proposed improved checkerboard feature point detection methods for these problems, such as methods based on self-correction, convolutional neural networks, growth, robust Fourier transform, and circle boundary detection. However, all of the above methods share the same drawback: they can only detect complete checkerboard feature points, i.e. the one-to-one correspondence between the pixel coordinates of the detected feature points and their physical coordinates can be established only when all feature points of the checkerboard to be detected are found. If feature points are lost, the correspondence between the pixel coordinates of the remaining detected feature points and their respective physical coordinates cannot be established, because the position information of the lost feature points is unknown, and camera calibration or pose estimation then becomes impossible.
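As an illustration of this limitation, the following is a hedged sketch of the conventional OpenCV flow described above; the image path, pattern size, and square size are example values and are not taken from the patent.

```python
# Hedged illustration of the conventional OpenCV flow (not part of the patent):
# cv2.findChessboardCorners needs the full inner-corner grid size in advance
# and reports failure when any corner is occluded or missed.
import cv2
import numpy as np

img = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)  # example image path (assumed)
pattern_size = (9, 6)   # inner corners per row/column, must be known beforehand
square = 20.0           # checkerboard square size in mm (example value)

found, corners = cv2.findChessboardCorners(img, pattern_size)
if found:
    # Only when ALL corners are found can the pixel points be paired one-to-one
    # with the known physical grid for calibration or pose estimation.
    obj_pts = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    obj_pts[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square
else:
    # An incomplete checkerboard defeats this approach: the surviving corners
    # cannot be matched to their physical coordinates.
    print("detection failed: checkerboard incomplete or occluded")
```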
In practical applications, external factors such as imaging lighting conditions, illumination changes, occlusion, and backlighting can degrade the checkerboard image and cause some checkerboard feature points to go undetected, producing exactly this incomplete-checkerboard situation.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to provide a rapid feature point detection method for an incomplete checkerboard cooperation icon that solves the problem that the pixel coordinates of the detected feature points cannot be placed in one-to-one correspondence with their physical coordinates when the checkerboard cooperation icon is disturbed by the external environment, and that improves the robustness and adaptability of checkerboard cooperation icon feature point detection.
(II) technical scheme
In order to solve the above technical problem, the invention provides a rapid feature point detection method for an incomplete checkerboard cooperation icon, which comprises the following steps:
step 1: Carrying out initial feature point detection on the checkerboard cooperation icon by using a Harris or Hessian feature operator, and forming the initial feature points of the checkerboard cooperation icon from the pixel coordinates of the detected feature points;
step 2: screening the initial characteristic points of the checkerboard cooperation icon, and filtering the false-detection non-checkerboard characteristic points;
step 3: Using the remaining filtered initial feature points of the checkerboard cooperation icon as the checkerboard feature points to form a checkerboard feature point set, and detecting all checkerboard edge feature points;
step 4: Forming an edge feature point set from the detected checkerboard edge feature points, detecting the feature points located on the same checkerboard edge as checkerboard posterior feature points, and forming an unknown feature point set from the remaining feature points in the checkerboard feature point set as unknown feature points;
step 5: Establishing an equation set according to the one-to-one correspondence between the pixel coordinates of the posterior feature points and their physical coordinates in the world coordinate system, and solving the homography matrix of the current checkerboard cooperation icon imaging image;
step 6: and carrying out backward positioning and position matching on all unknown characteristic points in the unknown characteristic point set according to the solved homography matrix of the imaging image of the current checkerboard cooperation icon, so as to realize the one-to-one correspondence between the pixel coordinates of all unknown characteristic points and the physical coordinates thereof.
In the step 2, the initial feature points of the checkerboard cooperation icon are screened by using a checkerboard cooperation icon feature point template matching verification method, and false-detection non-checkerboard feature points are filtered.
Wherein, the step 2 specifically comprises the following steps:
step 21: Establishing two checkerboard cooperation icon feature point templates, each 10 × 10 pixels in size and evenly divided into 4 regions; the gray values of the upper left and lower right regions of the first template are 255 and those of its upper right and lower left regions are 1, while the gray values of the upper left and lower right regions of the second template are 1 and those of its upper right and lower left regions are 255;
step 22: extracting a feature point subgraph with the same size as the feature point template by taking the initial feature point as a center, and performing convolution operation with the two checkerboard cooperation icon feature point templates respectively to obtain two similarity values between the feature point subgraph and the feature point template;
step 23: if one of the two similarity values is larger than the similarity threshold of the checkerboard feature points, the initial feature points are the checkerboard feature points, otherwise, the initial feature points are not the checkerboard feature points, and the initial feature points are deleted;
step 24: and traversing all initial feature points, and filtering all initial feature points which are not the checkerboard feature points.
In step 3, all the feature points on the edge of the checkerboard are detected and obtained by using a vector outer product detection method for the feature point set of the checkerboard.
Wherein, the step 3 comprises the following steps:
step 31: taking the remaining filtered initial feature points of the checkerboard cooperation icons as the checkerboard feature points to form a checkerboard feature point set;
step 32: Establishing a vector to be tested with the first feature point in the checkerboard feature point set as the starting point and the second feature point as the end point, and sequentially calculating the outer products between the vector to be tested and the vectors from the first feature point to each of the other feature points;
step 33: If the outer product results all have the same sign (all positive or all negative), the two feature points forming the vector to be tested are checkerboard edge feature points; otherwise, selecting the next feature point as the end point to establish a new vector to be tested, again calculating the outer products between it and the vectors from the first feature point to each of the other feature points, and judging again, until all feature points have been traversed;
step 34: Taking the second feature point as the starting point, repeating step 32 and step 33, and so on, until all feature points have been traversed and all checkerboard edge feature points are obtained.
In the step 4, the detected feature points on the edges of the checkerboards form an edge feature point set, and feature points on the edges of the same checkerboard are detected by using a vector outer product detection method and are used as the posterior feature points of the checkerboards.
Wherein, the step 4 comprises the following steps:
step 41: forming an edge feature point set by the detected checkerboard edge feature points;
step 42: Establishing a vector to be tested with the first edge feature point in the set as the starting point and the second edge feature point as the end point, and sequentially calculating the outer products between the vector to be tested and the vectors from the first edge feature point to each of the other edge feature points;
step 43: If such an outer product is smaller than a given collinearity threshold, the corresponding edge feature point is collinear with the two edge feature points forming the vector to be tested, i.e. they lie on the same checkerboard edge; otherwise, selecting the next edge feature point as the end point to establish a new vector to be tested, again calculating the outer products between it and the vectors from the first edge feature point to each of the other edge feature points, and judging again, until all edge feature points have been traversed;
step 44: Taking the second edge feature point as the starting point, repeating step 42 and step 43, and so on, until all edge feature points have been traversed; the feature points on an edge whose number of detected feature points equals the known number of edge feature points are taken as checkerboard posterior feature points, and the remaining feature points in the checkerboard feature point set are taken as unknown feature points to form the unknown feature point set.
Wherein, the step 5 comprises the following steps:
since the correspondence between the pixel coordinates of the posterior feature points and their physical coordinates is known, the following homography solution equations are established from each posterior feature point's pixel coordinates $(u_p, v_p)$ and its physical coordinates $(X_w, Y_w)$ in the world coordinate system:

$$u_p = \frac{h_{11} X_w + h_{12} Y_w + h_{13}}{h_{31} X_w + h_{32} Y_w + 1}, \qquad v_p = \frac{h_{21} X_w + h_{22} Y_w + h_{23}}{h_{31} X_w + h_{32} Y_w + 1}$$

wherein H is the homography matrix of the current checkerboard cooperation icon imaging image and $h_{ij}$ is the element in the ith row and jth column of the H matrix (with $h_{33}$ normalized to 1);
from the above formula, it can be seen that the homography matrix H has 8 unknown elements in total, two equations can be established for each pair of posterior feature point pixel coordinates and the physical coordinates thereof, and only 4 pairs of posterior feature point pixel coordinates and the physical coordinates thereof are needed to solve the 8 unknown elements of the homography matrix H.
Wherein, the step 6 comprises the following steps:
step 61: According to the homography matrix H of the current checkerboard cooperation icon image, backward-position the pixel coordinates $(u_d, v_d)$ of each unknown feature point with the following formulas, solving for the estimated physical coordinates $(\hat{X}_d, \hat{Y}_d)$ of the unknown feature point:

$$\hat{X}_d = \frac{h'_{11} u_d + h'_{12} v_d + h'_{13}}{h'_{31} u_d + h'_{32} v_d + h'_{33}}, \qquad \hat{Y}_d = \frac{h'_{21} u_d + h'_{22} v_d + h'_{23}}{h'_{31} u_d + h'_{32} v_d + h'_{33}}$$

wherein $h'_{ij}$ is the element in the ith row and jth column of the inverse matrix of the homography matrix H;
step 62: According to the estimated physical coordinates of the unknown feature point, calculate its distance $d_{ij}$ to the physical coordinates of every checkerboard feature point:

$$d_{ij} = \sqrt{\left(\hat{X}_d - X_{ij}\right)^2 + \left(\hat{Y}_d - Y_{ij}\right)^2}$$

wherein $(X_{ij}, Y_{ij})$ are the physical coordinates, in the world coordinate system, of the checkerboard feature point in the ith row and jth column;
step 63: Take the physical coordinates of the checkerboard feature point with the minimum distance value $d_{min}$ among the distances $d_{ij}$ as the physical coordinates, in the world coordinate system, corresponding to the pixel coordinates of the unknown feature point;
step 64: Traverse all unknown feature points in the unknown feature point set until the physical coordinates corresponding to the pixel coordinates of every unknown feature point are obtained, thereby completing the backward position matching of all unknown feature points.
The method establishes the one-to-one correspondence between pixel coordinates and world coordinates when feature points are lost, can detect the feature points of both complete and incomplete checkerboards, and has strong robustness and strong resistance to environmental interference.
(III) advantageous effects
Compared with the prior art, the rapid feature point detection method for an incomplete checkerboard cooperation icon provided by the technical scheme of the invention performs, when checkerboard feature points are lost, posterior detection on the initially detected checkerboard feature points to obtain posterior feature points located on the checkerboard edges, estimates the homography matrix, and then performs backward positioning and position matching on the unknown feature points, thereby establishing a one-to-one correspondence between the pixel coordinates of all unknown feature points and their corresponding physical coordinates; the scheme has the following beneficial effects:
(1) The technical scheme of the invention establishes the one-to-one correspondence between pixel coordinates and world coordinates when feature points are lost, can detect the feature points of both complete and incomplete checkerboards, and has strong robustness and strong resistance to environmental interference;
(2) The feature point detection algorithm of the technical scheme requires no iterative computation, has low algorithmic complexity, a small computational load, and high real-time performance, and can therefore achieve rapid checkerboard cooperation icon feature point detection.
Drawings
FIG. 1 is a schematic diagram of a checkerboard cooperation icon feature point detection method of the present invention;
FIG. 2 is a flow chart of a method for detecting feature points of a checkerboard cooperation icon in accordance with the present invention;
FIG. 3 is a flow chart of a non-checkerboard feature point filtering method of the present invention;
FIG. 4 is a checkerboard collaboration icon feature point matching template of the present invention;
FIG. 5 is a flow chart of a checkerboard edge feature point detection method of the present invention;
FIG. 6 is a flow chart of a checkerboard posterior feature point detection method of the present invention;
FIG. 7 is a flowchart of the unknown feature point backward localization and location matching method of the present invention;
Detailed Description
In order to make the objects, contents, and advantages of the present invention more apparent, the following detailed description of the present invention will be made in conjunction with the accompanying drawings and examples.
When checkerboard feature points are lost, the technical scheme of the invention performs posterior detection on the detected checkerboard feature points to estimate the current homography matrix, and then performs backward positioning and position matching on the remaining feature points, thereby establishing a one-to-one correspondence between the pixel coordinates of all detected checkerboard feature points and their corresponding physical coordinates.
Referring to fig. 1, the invention provides a rapid feature point detection method for an incomplete checkerboard cooperation icon, whose principle is as follows:
as shown in fig. 1, if the feature point shown in the hollow circle of the checkerboard cooperation icon is lost, since the position information of the lost feature point is unknown, the corresponding relationship between the pixel coordinates of the remaining detected feature points and the respective physical coordinates cannot be established, that is, the world coordinates in the world coordinate system corresponding to the pixel coordinates of the detected feature points are unknown. However, since the pixel coordinates of the checkerboard cooperation icon correspond to the world coordinates one to one, and the correspondence is the checkerboard imaging homography matrix. Therefore, by detecting 4 posterior feature points (c 1-c 4) of the current checkerboard cooperation icon, the corresponding relation between the posterior feature points and the physical coordinates (w 1-w 4) is known, then calculating by using a homography estimation equation to obtain a homography matrix of the current checkerboard imaging, homography backward positioning can be carried out on the pixel coordinates of the remaining detected checkerboard feature points (c 5-c 6), the physical coordinates of the detected checkerboard feature points are calculated by using the homography matrix, and one-to-one correspondence between the pixel coordinates (c 5-c 6) of the detected checkerboard feature points and the physical coordinates (w 5-w 6) thereof can be realized according to physical coordinate position matching.
Referring to fig. 2, the invention provides a rapid feature point detection method for an incomplete checkerboard cooperation icon, which comprises the following steps:
step 1: Carrying out initial feature point detection on the checkerboard cooperation icon by using a Harris or Hessian feature operator, and forming the initial feature points of the checkerboard cooperation icon from the pixel coordinates of the detected feature points;
step 2: screening the initial characteristic points of the checkerboard cooperation icon, and filtering false-detection non-checkerboard characteristic points;
step 3: Using the remaining filtered initial feature points of the checkerboard cooperation icon as the checkerboard feature points to form a checkerboard feature point set, and detecting all checkerboard edge feature points;
step 4: Forming an edge feature point set from the detected checkerboard edge feature points, detecting the feature points located on the same checkerboard edge as checkerboard posterior feature points, and forming an unknown feature point set from the remaining feature points in the checkerboard feature point set as unknown feature points;
step 5: Establishing an equation set according to the one-to-one correspondence between the pixel coordinates of the posterior feature points and their physical coordinates in the world coordinate system, and solving the homography matrix of the current checkerboard cooperation icon imaging image;
step 6: and carrying out backward positioning and position matching on all unknown characteristic points in the unknown characteristic point set according to the solved homography matrix of the imaging image of the current checkerboard cooperation icon, so as to realize one-to-one correspondence between the pixel coordinates of all unknown characteristic points and the physical coordinates thereof.
In the step 2, the initial feature points of the checkerboard cooperation icon are screened by using a checkerboard cooperation icon feature point template matching verification method, and false-detection non-checkerboard feature points are filtered.
Referring to fig. 3, step 2 specifically includes the following steps (an illustrative sketch is given after the list):
step 21: Establishing two checkerboard cooperation icon feature point templates, as shown in fig. 4; each template is 10 × 10 pixels in size and evenly divided into 4 regions, the gray values of the upper left and lower right regions of the first template are 255 and those of its upper right and lower left regions are 1, while the gray values of the upper left and lower right regions of the second template are 1 and those of its upper right and lower left regions are 255;
step 22: extracting a feature point subgraph with the same size as the feature point template by taking the initial feature point as a center, and performing convolution operation with the two checkerboard cooperation icon feature point templates respectively to obtain two similarity values between the feature point subgraph and the feature point template;
step 23: if one of the two similarity values is larger than the similarity threshold of the checkerboard feature points, the initial feature points are the checkerboard feature points, otherwise, the initial feature points are not the checkerboard feature points, and the initial feature points are deleted;
step 24: and traversing all initial feature points, and filtering all initial feature points which are not the checkerboard feature points.
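A minimal sketch of the template check in steps 21-24, assuming the two 10 × 10 quadrant templates described above, a normalized-correlation score as the similarity value produced by the "convolution operation", and an arbitrary example threshold; the function names are illustrative only.

```python
# Illustrative sketch of the template check in steps 21-24 (assumed similarity
# measure: normalized correlation of the 10x10 patch with each template).
import numpy as np

def make_templates(size=10):
    t1 = np.ones((size, size), np.float32)   # all regions start at gray value 1
    t1[: size // 2, : size // 2] = 255       # upper-left region = 255
    t1[size // 2 :, size // 2 :] = 255       # lower-right region = 255
    t2 = 256 - t1                            # second template: 255 and 1 swapped
    return t1, t2

def is_checkerboard_corner(image, point, threshold, size=10):
    """Return True if the patch around `point` matches either feature point template."""
    x, y = int(round(point[0])), int(round(point[1]))
    patch = image[y - size // 2 : y + size // 2, x - size // 2 : x + size // 2].astype(np.float32)
    if patch.shape != (size, size):
        return False                         # too close to the image border
    t1, t2 = make_templates(size)
    def sim(t):                              # normalized correlation as the similarity value
        p, q = patch - patch.mean(), t - t.mean()
        return float((p * q).sum() / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-9))
    return max(sim(t1), sim(t2)) > threshold
```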
In step 3, all the feature points on the edge of the checkerboard are detected and obtained by using a vector outer product detection method for the feature point set of the checkerboard.
Referring to fig. 5, step 3 includes the following steps (an illustrative sketch is given after the list):
step 31: taking the remaining filtered initial feature points of the checkerboard cooperation icons as the checkerboard feature points to form a checkerboard feature point set;
step 32: Establishing a vector to be tested with the first feature point in the checkerboard feature point set as the starting point and the second feature point as the end point, and sequentially calculating the outer products between the vector to be tested and the vectors from the first feature point to each of the other feature points;
step 33: If the outer product results all have the same sign (all positive or all negative), the two feature points forming the vector to be tested are checkerboard edge feature points; otherwise, selecting the next feature point as the end point to establish a new vector to be tested, again calculating the outer products between it and the vectors from the first feature point to each of the other feature points, and judging again, until all feature points have been traversed;
step 34: Taking the second feature point as the starting point, repeating step 32 and step 33, and so on, until all feature points have been traversed and all checkerboard edge feature points are obtained.
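A minimal sketch of the cross-product ("vector outer product") edge test in steps 32-34, under the assumption that a zero outer product (a point exactly on the tested line) is also accepted; the function name is illustrative only.

```python
# Illustrative sketch of the cross-product edge test in steps 32-34: two feature
# points lie on the checkerboard boundary if every other feature point falls on
# the same side of the line through them (zero products are tolerated here).
import numpy as np

def edge_feature_points(points):
    """points: (N, 2) array of checkerboard feature point pixel coordinates."""
    points = np.asarray(points, float)
    n = len(points)
    edge = set()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v = points[j] - points[i]                       # vector to be tested
            w = points - points[i]                          # vectors to all feature points
            cross = v[0] * w[:, 1] - v[1] * w[:, 0]         # z-component of the outer product
            others = np.delete(cross, [i, j])
            if np.all(others >= 0) or np.all(others <= 0):  # all on one side of the line
                edge.update((i, j))
    return sorted(edge)                                     # indices of edge feature points
```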
In the step 4, the detected feature points on the edges of the checkerboards form an edge feature point set, and feature points on the edges of the same checkerboard are detected by using a vector outer product detection method and are used as the posterior feature points of the checkerboards.
Referring to fig. 6, step 4 includes the following steps (an illustrative sketch is given after the list):
step 41: forming an edge feature point set by the detected checkerboard edge feature points;
step 42: Establishing a vector to be tested with the first edge feature point in the set as the starting point and the second edge feature point as the end point, and sequentially calculating the outer products between the vector to be tested and the vectors from the first edge feature point to each of the other edge feature points;
step 43: If such an outer product is smaller than a given collinearity threshold, the corresponding edge feature point is collinear with the two edge feature points forming the vector to be tested, i.e. they lie on the same checkerboard edge; otherwise, selecting the next edge feature point as the end point to establish a new vector to be tested, again calculating the outer products between it and the vectors from the first edge feature point to each of the other edge feature points, and judging again, until all edge feature points have been traversed;
step 44: Taking the second edge feature point as the starting point, repeating step 42 and step 43, and so on, until all edge feature points have been traversed; the feature points on an edge whose number of detected feature points equals the known number of edge feature points are taken as checkerboard posterior feature points, and the remaining feature points in the checkerboard feature point set are taken as unknown feature points to form the unknown feature point set.
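A minimal sketch of the collinearity grouping in steps 42-44, with an assumed pixel-level collinearity threshold and illustrative function names; the expected number of points per edge would come from the known checkerboard layout.

```python
# Illustrative sketch of the collinearity grouping in steps 42-44; the threshold
# and function names are assumptions, and edges holding the full expected number
# of points supply the posterior feature points.
import numpy as np

def group_edge_lines(edge_pts, collinear_thresh=2.0):
    """Group edge feature points that lie on the same checkerboard edge line."""
    edge_pts = np.asarray(edge_pts, float)
    m, lines = len(edge_pts), []
    for i in range(m):
        for j in range(i + 1, m):
            v = edge_pts[j] - edge_pts[i]
            v = v / (np.linalg.norm(v) + 1e-9)                # normalized vector to be tested
            w = edge_pts - edge_pts[i]
            cross = np.abs(v[0] * w[:, 1] - v[1] * w[:, 0])   # outer product = distance from the line
            members = set(np.nonzero(cross < collinear_thresh)[0].tolist())
            if len(members) > 2 and not any(members <= line for line in lines):
                lines.append(members)
    return lines

def posterior_feature_points(edge_pts, expected_per_edge, collinear_thresh=2.0):
    """Edges containing the full expected point count yield the posterior feature points."""
    return [sorted(g) for g in group_edge_lines(edge_pts, collinear_thresh)
            if len(g) == expected_per_edge]
```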
Wherein, the step 5 comprises the following steps:
since the correspondence between the pixel coordinates of the posterior feature points and their physical coordinates is known, the following homography solution equations are established from each posterior feature point's pixel coordinates $(u_p, v_p)$ and its physical coordinates $(X_w, Y_w)$ in the world coordinate system:

$$u_p = \frac{h_{11} X_w + h_{12} Y_w + h_{13}}{h_{31} X_w + h_{32} Y_w + 1}, \qquad v_p = \frac{h_{21} X_w + h_{22} Y_w + h_{23}}{h_{31} X_w + h_{32} Y_w + 1}$$

wherein H is the homography matrix of the current checkerboard cooperation icon imaging image and $h_{ij}$ is the element in the ith row and jth column of the H matrix (with $h_{33}$ normalized to 1);
from the above formula, it can be seen that the homography matrix H has 8 unknown elements in total, two equations can be established for each pair of posterior feature point pixel coordinates and the physical coordinates thereof, and only 4 pairs of posterior feature point pixel coordinates and the physical coordinates thereof are needed to solve the 8 unknown elements of the homography matrix H.
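As a concrete illustration of this solve, the following sketch sets up the eight linear equations from four posterior point pairs under the $h_{33}=1$ normalization stated above; the function name is illustrative only.

```python
# Illustrative sketch of the 8-unknown solve from four posterior point pairs
# (function name assumed; uses the h33 = 1 normalization).
import numpy as np

def solve_homography(phys, pix):
    """phys, pix: (4, 2) arrays of (Xw, Yw) and (up, vp) for the posterior feature points."""
    A, b = [], []
    for (X, Y), (u, v) in zip(phys, pix):
        # u = (h11*X + h12*Y + h13) / (h31*X + h32*Y + 1)
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); b.append(u)
        # v = (h21*X + h22*Y + h23) / (h31*X + h32*Y + 1)
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)   # full 3x3 homography H
```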
Referring to fig. 7, step 6 is implemented as the following steps (an illustrative sketch is given after the list):
step 61: According to the homography matrix H of the current checkerboard cooperation icon image, backward-position the pixel coordinates $(u_d, v_d)$ of each unknown feature point with the following formulas, solving for the estimated physical coordinates $(\hat{X}_d, \hat{Y}_d)$ of the unknown feature point:

$$\hat{X}_d = \frac{h'_{11} u_d + h'_{12} v_d + h'_{13}}{h'_{31} u_d + h'_{32} v_d + h'_{33}}, \qquad \hat{Y}_d = \frac{h'_{21} u_d + h'_{22} v_d + h'_{23}}{h'_{31} u_d + h'_{32} v_d + h'_{33}}$$

wherein $h'_{ij}$ is the element in the ith row and jth column of the inverse matrix of the homography matrix H;
step 62: According to the estimated physical coordinates of the unknown feature point, calculate its distance $d_{ij}$ to the physical coordinates of every checkerboard feature point:

$$d_{ij} = \sqrt{\left(\hat{X}_d - X_{ij}\right)^2 + \left(\hat{Y}_d - Y_{ij}\right)^2}$$

wherein $(X_{ij}, Y_{ij})$ are the physical coordinates, in the world coordinate system, of the checkerboard feature point in the ith row and jth column;
step 63: Take the physical coordinates of the checkerboard feature point with the minimum distance value $d_{min}$ among the distances $d_{ij}$ as the physical coordinates, in the world coordinate system, corresponding to the pixel coordinates of the unknown feature point;
step 64: Traverse all unknown feature points in the unknown feature point set until the physical coordinates corresponding to the pixel coordinates of every unknown feature point are obtained, thereby completing the backward position matching of all unknown feature points.
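A minimal sketch of the backward positioning and position matching in steps 61-64; the grid spacing and dimensions are example values, and the function name is illustrative only.

```python
# Illustrative sketch of steps 61-64: back-project each unknown feature point
# through the inverse homography and match it to the nearest physical grid node.
import numpy as np

def match_unknown_points(H, unknown_pix, grid_phys):
    """H: physical->pixel homography; unknown_pix: (K, 2) pixel coordinates;
    grid_phys: (R*C, 2) physical coordinates of all checkerboard feature points."""
    H_inv = np.linalg.inv(H)
    matches = []
    for u, v in np.asarray(unknown_pix, float):
        p = H_inv @ np.array([u, v, 1.0])
        est = p[:2] / p[2]                           # estimated physical coordinates (step 61)
        d = np.linalg.norm(grid_phys - est, axis=1)  # distances d_ij to every grid node (step 62)
        matches.append(grid_phys[np.argmin(d)])      # minimum-distance node d_min (step 63)
    return np.array(matches)                         # one match per unknown point (step 64)

# example grid: 6 x 6 feature points with 20 mm spacing (assumed layout)
grid = np.array([[20.0 * j, 20.0 * i] for i in range(6) for j in range(6)])
```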
The method establishes the one-to-one correspondence between pixel coordinates and world coordinates when feature points are lost, can detect the feature points of both complete and incomplete checkerboards, and has strong robustness and strong resistance to environmental interference.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, it is possible to make various improvements and modifications without departing from the technical principle of the present invention, and those improvements and modifications should be considered as the protection scope of the present invention.

Claims (3)

1. A rapid feature point detection method for an incomplete checkerboard cooperation icon, characterized by comprising the following steps:
step 1: Carrying out initial feature point detection on the checkerboard cooperation icon by using a Harris or Hessian feature operator, and forming the initial feature points of the checkerboard cooperation icon from the pixel coordinates of the detected feature points;
step 2: Screening the initial feature points of the checkerboard cooperation icon, and filtering out falsely detected non-checkerboard feature points;
step 3: Using the remaining filtered initial feature points of the checkerboard cooperation icon as the checkerboard feature points to form a checkerboard feature point set, and detecting all checkerboard edge feature points;
step 4: Forming an edge feature point set from the detected checkerboard edge feature points, detecting the feature points located on the same checkerboard edge as checkerboard posterior feature points, and forming an unknown feature point set from the remaining feature points in the checkerboard feature point set as unknown feature points;
step 5: Establishing an equation set according to the one-to-one correspondence between the pixel coordinates of the posterior feature points and their physical coordinates in the world coordinate system, and solving the homography matrix of the current checkerboard cooperation icon imaging image;
step 6: according to the solved homography matrix of the imaging image of the current checkerboard cooperation icon, backward positioning and position matching are carried out on all unknown feature points in the unknown feature point set, and the one-to-one correspondence between the pixel coordinates of all the unknown feature points and the physical coordinates of the unknown feature points is realized;
in the step 2, screening the initial feature points of the checkerboard cooperation icons by using a checkerboard cooperation icon feature point template matching verification method, and filtering the false-detection non-checkerboard feature points;
the step 2 specifically comprises the following steps:
step 21: Establishing two checkerboard cooperation icon feature point templates, each 10 × 10 pixels in size and evenly divided into 4 regions; the gray values of the upper left and lower right regions of the first template are 255 and those of its upper right and lower left regions are 1, while the gray values of the upper left and lower right regions of the second template are 1 and those of its upper right and lower left regions are 255;
step 22: extracting a feature point subgraph with the same size as the feature point template by taking the initial feature point as a center, and performing convolution operation with the two checkerboard cooperation icon feature point templates respectively to obtain two similarity values between the feature point subgraph and the feature point template;
step 23: if one of the two similarity values is greater than the similarity threshold of the checkerboard feature points, the initial feature points are the checkerboard feature points, otherwise, the initial feature points are not the checkerboard feature points, and the initial feature points are deleted;
step 24: Traversing all initial feature points, and filtering out all initial feature points that are not checkerboard feature points;
in the step 3, all the feature points on the edges of the checkerboards are detected and obtained by using a vector outer product detection method aiming at the feature point set of the checkerboards;
the step 3 comprises the following steps:
step 31: taking the remaining filtered initial feature points of the checkerboard cooperation icons as the checkerboard feature points to form a checkerboard feature point set;
step 32: Establishing a vector to be tested with the first feature point in the checkerboard feature point set as the starting point and the second feature point as the end point, and sequentially calculating the outer products between the vector to be tested and the vectors from the first feature point to each of the other feature points;
step 33: If the outer product results all have the same sign (all positive or all negative), the two feature points forming the vector to be tested are checkerboard edge feature points; otherwise, selecting the next feature point as the end point to establish a new vector to be tested, again calculating the outer products between it and the vectors from the first feature point to each of the other feature points, and judging again, until all feature points have been traversed;
step 34: Taking the second feature point as the starting point, repeating step 32 and step 33, and so on, until all feature points have been traversed and all checkerboard edge feature points are obtained;
in the step 4, the detected feature points on the edges of the checkerboards form an edge feature point set, and feature points on the edges of the same checkerboards are detected by using a vector outer product detection method and are used as the posterior feature points of the checkerboards;
the step 4 comprises the following steps:
step 41: forming an edge feature point set by the detected checkerboard edge feature points;
step 42: Establishing a vector to be tested with the first edge feature point in the set as the starting point and the second edge feature point as the end point, and sequentially calculating the outer products between the vector to be tested and the vectors from the first edge feature point to each of the other edge feature points;
step 43: If such an outer product is smaller than a given collinearity threshold, the corresponding edge feature point is collinear with the two edge feature points forming the vector to be tested, i.e. they lie on the same checkerboard edge; otherwise, selecting the next edge feature point as the end point to establish a new vector to be tested, again calculating the outer products between it and the vectors from the first edge feature point to each of the other edge feature points, and judging again, until all edge feature points have been traversed;
step 44: Taking the second edge feature point as the starting point, repeating step 42 and step 43, and so on, until all edge feature points have been traversed; the feature points on an edge whose number of detected feature points equals the known number of edge feature points are taken as checkerboard posterior feature points, and the remaining feature points in the checkerboard feature point set are taken as unknown feature points to form the unknown feature point set.
2. The method for fast feature point detection of an incomplete checkerboard cooperative icon as claimed in claim 1, wherein said step 5 is implemented as the following steps:
since the correspondence between the pixel coordinates of the posterior feature points and their physical coordinates is known, the following homography solution equations are established from each posterior feature point's pixel coordinates $(u_p, v_p)$ and its physical coordinates $(X_w, Y_w)$ in the world coordinate system:

$$u_p = \frac{h_{11} X_w + h_{12} Y_w + h_{13}}{h_{31} X_w + h_{32} Y_w + 1}, \qquad v_p = \frac{h_{21} X_w + h_{22} Y_w + h_{23}}{h_{31} X_w + h_{32} Y_w + 1}$$

wherein H is the homography matrix of the current checkerboard cooperation icon imaging image and $h_{ij}$ is the element in the ith row and jth column of the H matrix (with $h_{33}$ normalized to 1);
from the above formula, it can be seen that the homography matrix H has 8 unknown elements in total, two equations can be established for each pair of posterior feature point pixel coordinates and the physical coordinates thereof, and only 4 pairs of posterior feature point pixel coordinates and the physical coordinates thereof are needed to solve the 8 unknown elements of the homography matrix H.
3. The method for fast detecting the feature points of the incomplete checkerboard cooperative icon as claimed in claim 2, wherein said step 6 is implemented as the following steps:
step 61: According to the homography matrix H of the current checkerboard cooperation icon image, backward-position the pixel coordinates $(u_d, v_d)$ of each unknown feature point with the following formulas, solving for the estimated physical coordinates $(\hat{X}_d, \hat{Y}_d)$ of the unknown feature point:

$$\hat{X}_d = \frac{h'_{11} u_d + h'_{12} v_d + h'_{13}}{h'_{31} u_d + h'_{32} v_d + h'_{33}}, \qquad \hat{Y}_d = \frac{h'_{21} u_d + h'_{22} v_d + h'_{23}}{h'_{31} u_d + h'_{32} v_d + h'_{33}}$$

wherein $h'_{ij}$ is the element in the ith row and jth column of the inverse matrix of the homography matrix H;
step 62: According to the estimated physical coordinates of the unknown feature point, calculate its distance $d_{ij}$ to the physical coordinates of every checkerboard feature point:

$$d_{ij} = \sqrt{\left(\hat{X}_d - X_{ij}\right)^2 + \left(\hat{Y}_d - Y_{ij}\right)^2}$$

wherein $(X_{ij}, Y_{ij})$ are the physical coordinates, in the world coordinate system, of the checkerboard feature point in the ith row and jth column;
step 63: Take the physical coordinates of the checkerboard feature point with the minimum distance value $d_{min}$ among the distances $d_{ij}$ as the physical coordinates, in the world coordinate system, corresponding to the pixel coordinates of the unknown feature point;
step 64: Traverse all unknown feature points in the unknown feature point set until the physical coordinates corresponding to the pixel coordinates of every unknown feature point are obtained, thereby completing the backward position matching of all unknown feature points.
CN202110827007.0A 2021-07-21 2021-07-21 Rapid feature point detection method for incomplete checkerboard cooperation icon Active CN113487675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110827007.0A CN113487675B (en) 2021-07-21 2021-07-21 Rapid feature point detection method for incomplete checkerboard cooperation icon

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110827007.0A CN113487675B (en) 2021-07-21 2021-07-21 Rapid feature point detection method for incomplete checkerboard cooperation icon

Publications (2)

Publication Number Publication Date
CN113487675A CN113487675A (en) 2021-10-08
CN113487675B true CN113487675B (en) 2023-04-07

Family

ID=77942850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110827007.0A Active CN113487675B (en) 2021-07-21 2021-07-21 Rapid feature point detection method for incomplete checkerboard cooperation icon

Country Status (1)

Country Link
CN (1) CN113487675B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222825B (en) * 2022-09-15 2022-12-16 湖南视比特机器人有限公司 Calibration method, computer storage medium and calibration system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111243032A (en) * 2020-01-10 2020-06-05 大连理工大学 Full-automatic checkerboard angular point detection method
CN111260731A (en) * 2020-01-10 2020-06-09 大连理工大学 Checkerboard sub-pixel level corner point self-adaptive detection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504290B (en) * 2016-10-20 2019-10-18 北京化工大学 A kind of high-precision video camera dynamic calibrating method
JP7326720B2 (en) * 2018-10-26 2023-08-16 富士通株式会社 Mobile position estimation system and mobile position estimation method
CN111260732A (en) * 2020-01-10 2020-06-09 大连理工大学 Corner point detection method based on visible light and infrared universal calibration board


Also Published As

Publication number Publication date
CN113487675A (en) 2021-10-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant