CN113269207B - Image feature point extraction method for grid structured light vision measurement

Image feature point extraction method for grid structured light vision measurement

Info

Publication number
CN113269207B
Authority
CN
China
Prior art keywords
grid
extracting
roi
pattern
feature point
Prior art date
Legal status
Active
Application number
CN202110589316.9A
Other languages
Chinese (zh)
Other versions
CN113269207A (en)
Inventor
刘斌 (Liu Bin)
杨帆 (Yang Fan)
初录 (Chu Lu)
Current Assignee
Tianjin University of Technology
Original Assignee
Tianjin University of Technology
Priority date
Filing date
Publication date
Application filed by Tianjin University of Technology
Priority to CN202110589316.9A
Publication of CN113269207A
Application granted
Publication of CN113269207B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30008 - Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to an image feature point extraction method for use in grid structured light vision measurement, which comprises the following steps: performing skeleton extraction on the grid structured light pattern to obtain a single-pixel grid structured light skeleton pattern; extracting a plurality of corner points from the single-pixel skeleton pattern to obtain their position coordinates; determining an ROI (region of interest) in the grid structured light pattern from each corner position, the feature points of the pattern being the intersection points of the crossing grid lines within each ROI; and extracting the centers of the horizontal and vertical grid lines in each ROI along the horizontal and vertical directions, fitting a straight-line equation to each, and thereby obtaining the pixel coordinates of each feature point. The extraction method provided by the invention better preserves the image feature extraction accuracy in grid structured light vision measurement, uses a simple image processing algorithm, and executes efficiently.

Description

Image feature point extraction method for grid structured light vision measurement
Technical Field
The invention belongs to the technical field of image data analysis, and particularly relates to an image feature point extraction method for grid structured light vision measurement.
Background
In recent years, three-dimensional measurement technology has developed rapidly as an important means of obtaining object morphology and is widely applied across different industries. With the increasing maturity of photoelectric sensing devices and computer technology, three-dimensional vision measurement technology has been enriched and developed. Structured light vision measurement is widely used because of its high precision, easily extracted light-stripe image information, strong real-time performance, and active controllability.
According to the form of the projected structured light, measurement can be divided into point structured light, line structured light, and surface structured light measurement. Point structured light can recover the spatial coordinates of a point once the laser spot is accurately located, but its measurement efficiency is low. Line structured light replaces the point with a line; although scanning efficiency improves, it still cannot meet the requirements of real-time dynamic measurement. Surface structured light measures the object with a complete pattern, offers high measurement efficiency compared with point and line structured light, and can satisfy the requirements of dynamic real-time measurement.
Encoding the structured light projection pattern allows the feature points to be identified more reliably. Common coding schemes are generally classified into time-based coding and space-based coding. Time-based coding continuously projects a group of patterns onto the measured object and determines the feature points from the multiple projected images, so the measurement precision is high. Because several patterns must be projected in succession, the measured object cannot move, and this coding method is therefore unsuitable for dynamic measurement. Spatial coding makes each local code in the image unique; it measures faster and is more advantageous for dynamic measurement, the most common variant being color coding. However, when the albedo of the measured object, the nonlinearity of the projector spectrum, the spectral response of the camera, and the complex influence of colors in the measurement environment are considered, the extracted colors deviate from the projected colors, and misclassified colors reduce the matching precision. Moreover, all of the above coding modes require carefully designed patterns before extraction and matching of the image feature points can be completed. There is therefore a need for a structured light pattern that does not require elaborate coding and that enables feature point extraction and matching from a single projected pattern, so that both dynamic measurement and high measurement precision can be achieved.
A search found a patent document related to grid structured light measurement: Chinese patent publication CN110285831A discloses a calibration method for a grid structured light projector, comprising the following steps: 1) constructing a grid structured light vision measurement system; 2) establishing the coordinate systems; 3) acquiring the pixel coordinates of the three calibration points A, P, B from step 1; 4) calculating the coordinates of the three calibration points in the auxiliary coordinate system and the lengths of their position vectors; 5) solving the coordinates of the three calibration points in the camera coordinate system using the pinhole imaging principle; 6) repeating the above steps to obtain the coordinates of the remaining two groups of calibration points in the camera coordinate system; 7) fitting all target coordinates with the least-squares method to obtain the structured light plane equation; 8) repeating the above steps to complete the calibration of all light planes of the grid structured light projector.
Analysis and comparison show that the technical problem addressed by the above patent document is the calibration of the grid structured light projector, and its solution differs greatly from the present case. The design idea of this application is as follows: feature point extraction is an important link in active vision measurement based on grid structured light, so the extraction accuracy of the feature points underpins the accuracy of the measurement system. This scheme therefore focuses its design on feature point extraction in grid structured light active vision measurement and provides a measurement scheme for it.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image feature point extraction method for grid structured light vision measurement that is convenient to operate and combines dynamic measurement capability with high measurement precision.
The technical scheme adopted by the invention is as follows:
An image feature point extraction method for grid structured light vision measurement comprises the following steps:
(1) Performing skeleton extraction on the grid structured light pattern to obtain a single-pixel grid structured light skeleton pattern;
(2) Extracting a plurality of corner points from the single-pixel grid structured light skeleton pattern to obtain a plurality of corner position coordinates;
(3) Determining an ROI (region of interest) in the grid structured light pattern from each corner position coordinate, the feature points of the grid structured light pattern being the intersection points of the crossing grid lines within each ROI;
(4) Extracting the centers of the horizontal and vertical grid lines in each ROI along the horizontal and vertical directions, fitting a straight-line equation to each, and thereby obtaining the pixel coordinates of each feature point.
In step (1), the skeleton extraction consists of capturing the grid structured light pattern I_o with a camera and extracting its skeleton to obtain a pattern I_bone in which the width of each grid light stripe is a single pixel.
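As an illustration of step (1), the following minimal sketch obtains I_bone from I_o using a fixed binarization threshold and skimage's skeletonize; the patent does not prescribe a particular binarization or thinning algorithm, so both the threshold value and the choice of skeletonization routine are assumptions.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def extract_skeleton(I_o: np.ndarray, thresh: int = 60) -> np.ndarray:
    """Return I_bone: a binary image whose grid light stripes are one pixel wide.

    I_o is assumed to be an 8-bit grayscale capture of the grid structured
    light pattern; the threshold value is illustrative only.
    """
    _, binary = cv2.threshold(I_o, thresh, 255, cv2.THRESH_BINARY)
    I_bone = skeletonize(binary > 0)          # boolean single-pixel skeleton
    return I_bone.astype(np.uint8) * 255
```

Where the opencv-contrib package is available, cv2.ximgproc.thinning would be a possible alternative to skimage's skeletonize.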
In step (2), the corner extraction of the skeleton pattern consists of extracting the corner points of the skeleton image to obtain corner coordinates P_cor(u_cor, v_cor).
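One possible way to obtain the corner coordinates P_cor(u_cor, v_cor) of step (2) is sketched below; the patent does not name a specific detector, so the Shi-Tomasi detector (cv2.goodFeaturesToTrack) and its parameter values are assumptions made for illustration.

```python
import cv2
import numpy as np

def extract_corners(I_bone: np.ndarray, max_corners: int = 200) -> np.ndarray:
    """Detect corner points on the single-pixel skeleton image I_bone.

    Returns an (N, 2) array whose rows are (u_cor, v_cor) pixel coordinates.
    """
    corners = cv2.goodFeaturesToTrack(I_bone, maxCorners=max_corners,
                                      qualityLevel=0.1, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)
```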
In step (3), the ROI is determined as follows: each corner coordinate P_cor(u_cor, v_cor) is located in the original image I_o to determine a rectangular region R_cor; R_cor is then enlarged to form a new rectangular region R_new, and the region ROI = R_new - R_cor is the light-stripe center extraction area.
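The construction of R_cor, R_new and the ROI band between them could look like the sketch below; the half-widths of the two rectangles are illustrative values, since the patent does not fix the amount by which R_cor is enlarged.

```python
def build_roi(u_cor: float, v_cor: float, half_cor: int = 5, half_new: int = 15):
    """Return R_cor and the enlarged R_new as (u_min, v_min, u_max, v_max).

    The band ROI = R_new - R_cor between the two rectangles is the area of
    the original image I_o used for light-stripe center extraction.
    """
    R_cor = (int(u_cor) - half_cor, int(v_cor) - half_cor,
             int(u_cor) + half_cor, int(v_cor) + half_cor)
    R_new = (int(u_cor) - half_new, int(v_cor) - half_new,
             int(u_cor) + half_new, int(v_cor) + half_new)
    return R_cor, R_new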
In step (4), the horizontal and vertical center extraction of the horizontal and vertical grid lines in the ROI consists of extracting the light-stripe center coordinates P_vcen(u_vcen, v_vcen) along the vertical direction in the ROI and the light-stripe center coordinates P_hcen(u_hcen, v_hcen) along the horizontal direction in the ROI.
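The advantages section later in this description mentions the gray-scale gravity center method for the stripe centers; a minimal sketch for the vertical stripe is given here, taking one intensity-weighted centroid per image row inside the ROI. The horizontal stripe is handled analogously with one centroid per column; function and variable names are illustrative.

```python
import numpy as np

def stripe_centers_vertical(I_o: np.ndarray, roi) -> np.ndarray:
    """Gray-scale gravity center of the vertical light stripe inside roi.

    roi = (u_min, v_min, u_max, v_max); returns rows of (u_vcen, v_vcen)
    in full-image pixel coordinates. Bounds clipping and exclusion of the
    inner R_cor rows (ROI = R_new - R_cor) are omitted for brevity.
    """
    u0, v0, u1, v1 = roi
    patch = I_o[v0:v1, u0:u1].astype(np.float64)
    centers = []
    for r in range(patch.shape[0]):           # one centroid per row
        w = patch[r]
        if w.sum() > 0:
            u_c = (w * np.arange(patch.shape[1])).sum() / w.sum()
            centers.append((u0 + u_c, v0 + r))
    return np.array(centers)
```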
In step (4), the horizontal light-stripe centers extracted in the ROI are fitted to determine the straight-line equation l_h: a_h x + b_h y + c_h = 0.
In step (4), the vertical light-stripe centers extracted in the ROI are fitted to determine the straight-line equation l_v: a_v x + b_v y + c_v = 0.
Furthermore, in step (4), the intersection coordinates P_f(u_f, v_f) of the straight lines l_v: a_v x + b_v y + c_v = 0 and l_h: a_h x + b_h y + c_h = 0 are calculated; this intersection coordinate is the coordinate of the feature point to be extracted from the grid pattern.
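A sketch of this final part of step (4) follows: a total-least-squares line fit (cv2.fitLine, an assumed choice, since an ordinary y-on-x fit degenerates for near-vertical stripes) gives l_h and l_v in the form a x + b y + c = 0, and the 2-by-2 linear system a_h u + b_h v = -c_h, a_v u + b_v v = -c_v is solved for P_f(u_f, v_f).

```python
import cv2
import numpy as np

def fit_line(points: np.ndarray):
    """Fit a*x + b*y + c = 0 to an (N, 2) array of stripe-center points."""
    vx, vy, x0, y0 = cv2.fitLine(points.astype(np.float32),
                                 cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    a, b = vy, -vx                 # normal to the fitted direction (vx, vy)
    c = -(a * x0 + b * y0)
    return float(a), float(b), float(c)

def intersect(l_h, l_v):
    """Solve the two line equations for the feature point P_f(u_f, v_f)."""
    A = np.array([[l_h[0], l_h[1]],
                  [l_v[0], l_v[1]]], dtype=np.float64)
    rhs = np.array([-l_h[2], -l_v[2]], dtype=np.float64)
    u_f, v_f = np.linalg.solve(A, rhs)
    return float(u_f), float(v_f)
```

Solving this system is equivalent to the closed form u_f = (b_h c_v - b_v c_h)/(a_h b_v - a_v b_h), v_f = (a_v c_h - a_h c_v)/(a_h b_v - a_v b_h), which is well defined whenever the two lines are not parallel.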
The invention has the advantages and positive effects that:
according to the method, grid-structured light frameworks are adopted for extraction, angular point position information is extracted from the frameworks, the region where each feature point is located can be simply determined on an original image by means of the angular point information of the frameworks, the light strip region ROI in the feature point local range can be determined only by expanding the region in a small range, and the extraction of the light strip center can be completed by means of a gray scale gravity center method in each local ROI. And (4) performing straight line fitting on the centers of the light bars, wherein the intersection point of the horizontal straight line and the vertical straight line is the position of the characteristic point.
Compared with the common method, the grid structured light skeleton approach to image feature point extraction works on the original image throughout, so the true gray distribution of the light stripes in the original image is better preserved and errors introduced by repeated image processing, such as shifts of the light-stripe center positions, are avoided. By fitting only local light stripes, the whole light stripe is never extracted; only the light stripes in a small area around each feature point are extracted and fitted to straight lines whose intersection is computed, so the accuracy is better guaranteed, the image processing algorithm is simple, and the execution efficiency is high.
Drawings
FIG. 1 is a flow chart of an image feature point extraction algorithm of the present invention for use in grid structured light vision measurement;
FIG. 2 is a grid structured light pattern to which the present invention relates;
FIG. 3 is a schematic diagram of the skeleton extraction process applied to the original image according to the present invention;
FIG. 4 is a schematic diagram of the process of extracting corner points from the skeleton image according to the present invention;
FIG. 5 is a schematic diagram of a process for determining a central extraction area of a light bar in accordance with the present invention;
FIG. 6 is a schematic diagram of a process of extracting and fitting a straight line from the center of a light bar according to the present invention;
FIG. 7 is a schematic diagram of feature point locations and topological coordinates according to the present invention;
FIG. 8 is a schematic diagram of a process of extracting feature points by projecting light with a grid structure onto a measured object with a planar surface according to the present invention;
FIG. 9 is a schematic diagram of a process of extracting feature points by projecting light with a grid structure onto a measured object with a curved surface according to the present invention;
FIG. 10 is an enlarged view of the feature point extraction effect obtained with gray-level top-hat transformation, binary segmentation and binary morphological opening;
FIG. 11 is an enlarged view of the image feature point extraction effect obtained with the present invention for grid structured light vision measurement.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments, which are illustrative only and not limiting, and the scope of the present invention is not limited thereby.
An image feature point extraction method for grid-structured light vision measurement, as shown in fig. 1 and fig. 2, includes the following steps:
Skeleton extraction is performed on the grid structured light pattern: as shown in FIG. 3, a camera captures the grid structured light pattern I_o and its skeleton is extracted to obtain a pattern I_bone in which each grid light stripe is one pixel wide, i.e., the single-pixel grid structured light skeleton pattern;
Referring to FIG. 4, a plurality of corner points are extracted from the single-pixel grid structured light skeleton pattern to obtain a plurality of corner position coordinates P_cor(u_cor, v_cor). An ROI is determined in the grid structured light pattern from each corner position: each P_cor(u_cor, v_cor) is located in the original image I_o to determine a rectangular region R_cor, R_cor is enlarged to form a new rectangular region R_new, and the region ROI = R_new - R_cor is the light-stripe center extraction area; the feature points of the grid structured light pattern are the intersection points of the crossing grid lines within each ROI;
Referring to FIG. 5 and FIG. 6, horizontal and vertical center extraction is performed on the horizontal and vertical grid lines in each ROI: the light-stripe center coordinates P_hcen(u_hcen, v_hcen) and P_vcen(u_vcen, v_vcen) are extracted along the horizontal and vertical directions in the ROI, respectively. A straight-line equation is then fitted to each set of centers: the extracted horizontal light-stripe centers in the ROI determine the line l_h: a_h x + b_h y + c_h = 0, and the extracted vertical light-stripe centers determine the line l_v: a_v x + b_v y + c_v = 0. The intersection coordinates P_f(u_f, v_f) of l_v and l_h are then calculated; this intersection coordinate is the coordinate of the feature point to be extracted from the grid pattern, and in this way the pixel coordinates of each feature point are obtained.
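For orientation only, the helper functions sketched above (extract_skeleton, extract_corners, build_roi, stripe_centers_vertical together with an analogous stripe_centers_horizontal, fit_line, intersect) could be chained as follows; this is a hypothetical composition under the stated assumptions, not the reference implementation of the patent.

```python
def extract_feature_points(I_o):
    """Return a list of feature point coordinates P_f(u_f, v_f) for image I_o."""
    I_bone = extract_skeleton(I_o)
    feature_points = []
    for u_cor, v_cor in extract_corners(I_bone):
        R_cor, R_new = build_roi(u_cor, v_cor)
        pv = stripe_centers_vertical(I_o, R_new)
        ph = stripe_centers_horizontal(I_o, R_new)   # analogous to the vertical case
        if len(pv) < 2 or len(ph) < 2:
            continue                                 # not enough centers to fit lines
        l_v, l_h = fit_line(pv), fit_line(ph)
        feature_points.append(intersect(l_h, l_v))
    return feature_points
```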
See FIG. 8 and FIG. 9, which are schematic diagrams of the image feature point extraction effect provided by the present invention for grid structured light projected onto a measured object with a planar surface and with a curved surface, respectively.
The starting point of the method is that the common approach to extracting grid feature points first roughly locates the position of each line with classical digital image processing methods such as gray-level top-hat transformation, binary segmentation and binary morphological opening, then establishes a fine-extraction energy function model and computes the accurate positions of all horizontal and vertical lines by minimizing the energy function, and finally takes the intersections of the horizontal and vertical lines as the feature point positions. After so many image processing stages the procedure is cumbersome: the grid light stripes may be deformed during adaptive binarization, they may break during the binary morphological opening, and directly computing the intersection of a whole horizontal line with a whole vertical line as the feature point position introduces a certain error. Referring to FIG. 11, by adopting the idea of fitting local light stripes, the whole light stripe is not extracted; only the light stripes in a small area around each feature point are extracted and fitted to straight lines whose intersection is taken, so the accuracy is better guaranteed, the image processing algorithm is simple, and the execution efficiency is high.
Although the embodiments of the present invention and the accompanying drawings are disclosed for illustrative purposes, those skilled in the art will appreciate that: various substitutions, changes and modifications are possible without departing from the spirit and scope of the invention and appended claims, and therefore, the scope of the invention is not limited to the disclosure of the embodiments and drawings.

Claims (8)

1. An image feature point extraction method for grid structured light vision measurement, characterized in that the method comprises the following steps:
(1) Performing skeleton extraction on the grid structured light pattern to obtain a single-pixel grid structured light skeleton pattern;
(2) Extracting a plurality of corner points from the single-pixel grid structured light skeleton pattern to obtain a plurality of corner position coordinates;
(3) Determining an ROI (region of interest) in the grid structured light pattern from each corner position coordinate, the feature points of the grid structured light pattern being the intersection points of the crossing grid lines within each ROI;
(4) Extracting the centers of the horizontal and vertical grid lines in each ROI along the horizontal and vertical directions, fitting a straight-line equation to each, and thereby obtaining the pixel coordinates of each feature point.
2. The image feature point extraction method for use in grid structured light vision measurement according to claim 1, characterized in that: the skeleton extraction of the grid structured light pattern consists of capturing the grid structured light pattern I_o with a camera and extracting its skeleton to obtain a pattern I_bone in which each grid light stripe is a single pixel wide.
3. The image feature point extraction method for use in grid structured light vision measurement according to claim 2, characterized in that: the corner extraction of the skeleton pattern consists of extracting the corner points of the skeleton image to obtain the corner coordinate positions P_cor(u_cor, v_cor).
4. The image feature point extraction method for use in grid structured light vision measurement according to claim 3, characterized in that: the ROI is determined as follows: each corner coordinate P_cor(u_cor, v_cor) is located in the original image I_o to determine a rectangular region R_cor; R_cor is enlarged to form a new rectangular region R_new, and the region ROI = R_new - R_cor is the light-stripe center extraction area.
5. The image feature point extraction method for use in grid structured light vision measurement according to claim 4, characterized in that: the horizontal and vertical center extraction of the horizontal and vertical grid lines in the ROI in step (4) consists of extracting the light-stripe center coordinates P_vcen(u_vcen, v_vcen) along the vertical direction in the ROI and the light-stripe center coordinates P_hcen(u_hcen, v_hcen) along the horizontal direction in the ROI.
6. The image feature point extraction method for use in grid structured light vision measurement according to claim 5, characterized in that: in step (4), the horizontal light-stripe centers extracted in the ROI are fitted to determine the straight-line equation l_h: a_h x + b_h y + c_h = 0.
7. The image feature point extraction method for use in grid structured light vision measurement according to claim 6, characterized in that: in step (4), the vertical light-stripe centers extracted in the ROI are fitted to determine the straight-line equation l_v: a_v x + b_v y + c_v = 0.
8. The image feature point extraction method for use in grid structured light vision measurement according to claim 7, characterized in that: in step (4), the intersection coordinates P_f(u_f, v_f) of the straight lines l_v: a_v x + b_v y + c_v = 0 and l_h: a_h x + b_h y + c_h = 0 are calculated; this intersection coordinate is the coordinate of the feature point to be extracted from the grid pattern.
CN202110589316.9A 2021-05-28 2021-05-28 Image feature point extraction method for grid structure light vision measurement Active CN113269207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110589316.9A CN113269207B (en) 2021-05-28 2021-05-28 Image feature point extraction method for grid structure light vision measurement


Publications (2)

Publication Number Publication Date
CN113269207A (en) 2021-08-17
CN113269207B (en) 2023-04-18

Family

ID=77233521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110589316.9A Active CN113269207B (en) 2021-05-28 2021-05-28 Image feature point extraction method for grid structure light vision measurement

Country Status (1)

Country Link
CN (1) CN113269207B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063087A (en) * 2017-03-13 2017-08-18 浙江优迈德智能装备有限公司 It is a kind of based on hand-held teaching machine paint central point information measuring method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104400265B (en) * 2014-10-08 2017-06-06 吴长兴 A kind of extracting method of the welding robot corner connection characteristics of weld seam of laser vision guiding
CN106023247B (en) * 2016-05-05 2019-06-14 南通职业大学 A kind of Light stripes center extraction tracking based on space-time tracking
CN106228549B (en) * 2016-07-14 2019-04-19 嘉兴学院 A kind of triangle gridding tooth dividing method based on path planning
CN106846462B (en) * 2016-12-30 2019-12-17 北京农业信息技术研究中心 insect recognition device and method based on three-dimensional simulation
CN109483018A (en) * 2018-11-06 2019-03-19 湖北书豪智能科技有限公司 The active vision bootstrap technique of weld seam in automatic welding of pipelines
CN111784643B (en) * 2020-06-10 2024-01-09 武汉珈悦科技有限公司 Cross line structured light-based tire tangent plane acquisition method and system

Also Published As

Publication number Publication date
CN113269207A (en) 2021-08-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared