CN111307069A - Light three-dimensional scanning method and system for dense parallel line structure - Google Patents
Light three-dimensional scanning method and system for dense parallel line structure
- Publication number
- CN111307069A (application CN202010282081.4A)
- Authority
- CN
- China
- Prior art keywords
- point
- line
- line segment
- points
- projector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/254—Projection of a pattern, viewing through a pattern, e.g. moiré
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2545—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with one projection direction and several detection directions, e.g. stereo
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention provides a dense parallel line structured light three-dimensional scanning method and system. The method comprises: designing a single-frame dense parallel line pattern; projecting the pre-designed single-frame dense parallel lines onto the surface of the measured object through a projector; shooting the modulated dense parallel line pattern with a binocular camera and performing high-precision line extraction and segmentation; determining candidate projector column number values for each extracted point on the segmented line segments by using the relative relation between the binocular camera and the spatial planes formed by the dense parallel lines; pruning the candidate projector column number values by utilizing the consistency of the projector column numbers of the extracted points on the same line segment; verifying the validity of the candidate projector column number values by using the continuity of adjacent line segment column numbers; and reconstructing the point cloud point by point according to the image point coordinates on each line segment and the corresponding projector column number. The method solves the problems of traditional multi-line structured light scanning: sparse and unevenly distributed reconstruction results, the need for marker points to assist positioning, and the influence of multi-line intersections on the reconstruction result.
Description
Technical Field
The invention relates to the field of three-dimensional measurement, and in particular to a dense parallel line structured light three-dimensional scanning method and system.
Background
In ordinary image-based modeling, the light source is an uncoded one such as ambient light or white light, and image recognition depends entirely on the feature points of the photographed object, so matching has always been a difficulty. The structured light method differs in that the projected light source is coded: what is photographed is the coded pattern after it has been projected onto the object and modulated by the depth of the object surface. Because the structured light source carries many coding features, feature point matching becomes convenient; that is, the structured light method actively supplies feature points for matching rather than relying on the feature points of the photographed object, and therefore provides better matching results. In addition, the objects photographed in ordinary image modeling vary widely, every matching task faces different images, and feature points must be extracted anew each time; the structured light method projects the same pattern, whose features are fixed and need not change with the scene, which reduces the matching difficulty and improves matching efficiency.
The core of the structured light method is the design and identification of the coding. As shown in fig. 1, structured light techniques can be roughly classified into six types according to the coding mode: spatial coding, temporal coding, space-time coding, multi-wavelength composite coding, direct coding, and line structured light scanning. Spatial coding hides the coded information in the space of a single-frame projection image. Temporal coding projects a number of different coding patterns in time sequence and decodes by combining the corresponding coded image sequence groups. Space-time coding integrates the temporal and spatial schemes: the coding capacity problem is solved by the temporal technique and the reconstruction problem by the spatial technique. Multi-wavelength composite coding acquires different coding information on different wave bands at the same moment for reconstruction. Direct coding assigns a code word to each pixel of the coding pattern directly through the characteristics of the projected light (such as color change or gray-scale change). Line structured light scanning projects several lines of laser onto the surface of the measured object and extracts the line structured features in the image by an algorithm to achieve matching and reconstruction.
Multi-line structured light scanning performs no coding at all. At present it mostly uses two mutually crossing groups of parallel structured light lines, distinguishes the lines through a dual camera, and thereby completes matching, as shown in fig. 2. The multi-line structured light method has the following advantages: 1) it is not easily affected by the surface shape, color and material of the object; 2) reconstruction requires only a very small image area, so the ability to recover geometric detail is strong; 3) the structured light lines are not easily disturbed by noise and basically do not interfere with each other; 4) no decoding is needed, dual-camera verification is adopted, and the structured light lends itself to high-precision extraction; 5) the connection relations of the points inside a structured light line can be used to improve reconstruction quality. However, traditional multi-line structured light scanning still has the following defects: 1) only sparse multi-line structured light can be adopted, yielding sparse and very unevenly distributed structural information; 2) marker points must be pasted on the object for auxiliary positioning, which is inconvenient for using the equipment; 3) the reconstruction result near multi-line crossing positions is inaccurate and must be judged and deleted, which wastes reconstruction information.
Disclosure of Invention
Aiming at the problems of traditional multi-line structured light scanning, namely sparse and unevenly distributed reconstruction results, the need for marker points to assist positioning, and the influence of multi-line intersections on the reconstruction result, the invention provides a dense parallel line structured light three-dimensional scanning method that retains the advantages of traditional multi-line structured light while solving its problems. The method comprises the following steps:
S1, designing a dense parallel line single-frame pattern;
S2, projecting the pre-designed single-frame dense parallel lines onto the surface of the measured object through a projector;
S3, shooting the modulated dense parallel line pattern with a binocular camera, and performing high-precision line extraction and segmentation;
S4, determining the candidate projector column number values of each extracted point on the segmented line segments by using the relative relation between the binocular camera and the spatial planes formed by the dense parallel lines;
S5, pruning the candidate projector column number values by utilizing the consistency of the projector column numbers of the extracted points on the same line segment;
S6, verifying the validity of the candidate projector column number values by using the continuity of adjacent line segment column numbers;
S7, reconstructing the point cloud point by point according to the image point coordinates on each line segment and the corresponding projector column number.
Based on the above technical solution, a preferred embodiment of S1 is: in the dense parallel line pattern, the parallel lines are arranged in parallel along the column direction, the parallel line column pitch Grid is 8 pixels, and a projection pattern of 1800 × 1360 pixels contains num = 224 parallel lines in total.
Based on the above technical solution, a preferred embodiment of S2 is: the final pattern is produced by photolithography and projected onto the object surface through an LED projection system; a multi-line laser can also be adopted as the light source to reduce the influence of background light on the acquisition of the projected pattern.
On the basis of the above technical solution, the preferable S3 specifically includes the following steps:
S31, the binocular camera photographs the object-modulated parallel line pattern from different angles. One camera serves as the left camera and the other as the right camera, and the modulated parallel line images they respectively acquire are denoted $I_1$ and $I_2$.
S32, Gaussian filtering. $I_1$ and $I_2$ are each filtered with a Gaussian filtering kernel template of 3×3 window size, giving the filtered images $I_1'$ and $I_2'$.
S33, generating the convolution kernel template of the column-direction first-order partial derivative of the two-dimensional Gaussian function. The two-dimensional Gaussian function is $G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$, where $(x,y)$ are the row and column coordinate values of an image point and $\sigma$ is the standard deviation of the Gaussian kernel, set to 0.75. Taking the first-order partial derivative of the two-dimensional Gaussian function with respect to $x$ gives the column-direction first-order partial derivative convolution kernel template $\frac{\partial G}{\partial x}=-\frac{x}{2\pi\sigma^{4}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$.
S34, convolving the images with the first-order partial derivative convolution kernel template to obtain gradient images. The template $\frac{\partial G}{\partial x}$ is applied to $I_1'$ and $I_2'$ by convolution, giving the gradient images $G_1$ and $G_2$.
S35, performing peak detection and fine extraction of row-column coordinates. A mark matrix $M_1$ of the same size as $I_1'$ is allocated, with all entries initialized to False. Searching pixel by pixel along each row, consider an integer pixel point $P_1(x,y)$ of $I_1'$, where $(x,y)$ are its integer pixel coordinates. From the filtered image $I_1'$ and the gradient image $G_1$, obtain the gray value $i_1$ and gradient value $g_1$ of $P_1$, the gray values $i_-$ and $i_+$ of the points 2 pixels to its left and right, and the gradient values $g_-$ and $g_+$ of the points 1 pixel to its left and right. When the following three conditions hold simultaneously, peak detection succeeds and the mark matrix element $M_1(x,y)$ is set to True: ① $i_1 > 50$; ② $i_1 > i_-$ and $i_1 > i_+$; ③ $g_+ \times g_- < 0$. For a point $P_1(x,y)$ at which peak detection succeeds, the $x$ coordinate is corrected to obtain $u$: when $g_+ \times g_1 < 0$, $u = x + \frac{g_1}{g_1 - g_+}$; when $g_- \times g_1 < 0$, $u = x - \frac{g_1}{g_1 - g_-}$. The $y$ coordinate needs no correction and is denoted $v$, so a peak point detected on $I_1'$ can be written $P_1(x,y,u,v)$, where $(u,v)$ are its sub-pixel coordinates. Suppose $p_1$ peak points are detected on $I_1'$ in total, denoted $P_i^1(x,y,u,v)$, $0 \le i \le p_1$. Performing the same operation on the right camera image $I_2'$ gives the corresponding peak points $P_i^2(x,y,u,v)$, $0 \le i \le p_2$, and its mark matrix $M_2$, where $p_2$ is the number of peak points detected by the right camera.
S36, detecting the eight-neighborhood relations of the peak points, adjacent peak points forming line segments. For $P_i^1(x,y,u,v)$, $0 \le i \le p_1$, allocate a tag array $B_1$ of size $p_1$, with all entries initialized to True. Take the first element $P_1^1$, set its tag $B_1^1$ to False, and search the eight-neighborhood of its integer pixel coordinates $(x,y)$ in the mark matrix $M_1$; if the $M_1$ value of an eight-neighborhood point is True, a connection relation exists, so store the neighborhood point information in the line segment linked list $L_1^1$, set the corresponding $M_1$ value to False, and continue searching from it as a new search point until no search point remains. The first line segment $L_1^1$ is thereby obtained; its $j$-th point is denoted $Q_{1j}^1(x,y,u,v)$, $0 \le j \le l_1^1$, where $l_1^1$ is the number of points on segment $L_1^1$. Then return to $P_i^1(x,y,u,v)$, $0 \le i \le p_1$, take the next element whose tag in $B_1$ is True, set its tag to False, and grow a new segment from it by the same eight-neighborhood procedure; repeat until all elements of $B_1$ are False. Record the number of segments obtained as $l_1$ and the segment set as $L_1$; the $j$-th point of the $i$-th segment $L_i^1$ is $Q_{ij}^1(x,y,u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$. In the same way, the peak points $P_i^2(x,y,u,v)$, $0 \le i \le p_2$, yield the segment set $L_2$, whose $i$-th segment $L_i^2$ has $j$-th point $Q_{ij}^2(x,y,u,v)$, $0 \le i \le l_2$, $0 \le j \le l_i^2$.
On the basis of the above technical solution, the preferable S4 specifically includes the following steps:
and S41, solving the coordinates of the object intersection points of the left camera point and all the parallel lines according to the calibration parameters of the parallel lines of the left camera and the projector. For left cameraExtracted line segment points Qij 1(x,y,u,v),0≤i≤l1,According to the calibration parameters of the parallel lines of the left camera and the projector which are calibrated in advance, the line segment point Q is intersected in a line-surface intersection modeij 1Intersecting planes formed by parallel lines of all projectors to obtain a series of object coordinate points marked as Sijk 1(X,Y,Z),0≤i≤l1,K is more than or equal to 0 and less than or equal to num, wherein (X, Y and Z) are corresponding object space point coordinates, and k is the column number of the parallel lines of the projector. And performing the operation on all the line segment points of the left camera to obtain the object space point coordinates of the line segment points of all the left cameras.
And S42, projecting the object space intersection point coordinates to the right camera according to the right camera calibration parameters to obtain the back projection coordinates of the right camera. According to the calibrated parameters of the right camera calibrated in advance, the S is calculatedijk 1(X,Y,Z),0≤i≤l1,K is more than or equal to 0 and less than or equal to num, and the image point coordinate on the right camera is obtained and is marked as Tijk 1(u,v),0≤i≤l1,K is more than or equal to 0 and less than or equal to num, wherein (u, v) is the coordinate value of the image space of the right camera obtained by back projection. And performing the operation on all the line segment points of the left camera to obtain the back projection coordinates of the line segment points of all the left camera on the right camera.
And S43, the difference between the back projection coordinates and the nearest distance of the right camera line segment is within a certain threshold range, and the corresponding column number is used as the candidate projector column number of the left camera point. For Tijk 1At right camera line segment point Qij 2(x,y,u,v),0≤i≤l2,0≤j≤li 2Find the nearest line segment point, andfinding Tijk 1The shortest pixel distance value Dis to the line segment where the line segment point is located, when Dis < Thre, the k value is used as Qij 1The invention is provided with Thre which is 0.5 and Q which is stored as one of the alternative projector column numbersij 1All candidate projector column numbers of dots are CANDij(c1,…,cm) Sequence in which m is Qij 1Number of candidate projector column numbers for a point. And performing the operation on all the line segment points of the left camera to obtain the candidate projector column number sequence of the line segment points of all the left cameras.
Based on the above technical solution, a preferred embodiment of S5 is: traverse the left camera segment sequence; for the $i$-th segment $L_i^1$, merge the candidate projector column number sequences $CAND_{ij}(c_1,\dots,c_m)$ of its segment points into one sequence $UCAND_i$; merge and count identical projector column numbers in $UCAND_i$ and sort by count; if the count of the most frequent column number is more than twice the count of the second most frequent one, assign that column number value $c$ to the segment and to the points associated with it, recorded respectively as $L_i^1(c)$ and $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$; otherwise delete the current segment. Perform this operation on all left camera segments to obtain the column number values of all left camera segments.
Based on the above technical solution, a preferred embodiment of S6 is: a line scanning method is adopted, and a statistical array $R$ of size $l_1$ is allocated, one entry per segment. For the points $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, crossed on the same image row, sort them by column coordinate and judge whether their column number sequence is arranged in order; if a mutation point is found, the column number value of the segment $L_i^1(c)$ on which it lies is erroneous, and the corresponding statistical array value in $R$ is incremented by 1. After all rows of the left image have been scanned, judge for the $i$-th segment the ratio of its number of segment points $l_i^1$ to its $R$ value; if the ratio is less than 4, the segment is discarded, otherwise it is retained.
Based on the above technical solution, a preferred embodiment of S7 is: for an extracted left image line segment point $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, using the pre-calibrated parameters of the left camera and the projector, the object-space point coordinates are intersected by the line-plane intersection method from the sub-pixel image point coordinates $(u,v)$ and the projector column number $c$, completing the point cloud reconstruction of the single-frame image.
Moreover, the invention also provides a dense parallel line structured light three-dimensional scanning system, which is used for implementing the above dense parallel line structured light three-dimensional scanning method.
Compared with the prior art, the dense parallel line structured light three-dimensional scanning method and system of the invention have the following beneficial effects:
1) dense parallel line structured light is adopted, obtaining dense and uniformly distributed structural information;
2) positioning can be performed using geometric features, solving the traditional method's reliance on marker point positioning;
3) parallel line structured light is adopted, avoiding the judgment of line intersections and their influence on the reconstruction result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic diagram of structured light method classification;
FIG. 2 is a schematic view of a multi-line structured light scanning pattern;
FIG. 3 is a flow chart of a light three-dimensional scanning method with dense parallel line structure according to the present invention;
FIG. 4 is a partial schematic view of the dense parallel lines;
FIG. 5 is a schematic diagram of a single frame pattern of the design;
FIG. 6 is a flow chart of parallel line pattern high accuracy extraction and segmentation;
fig. 7 is a flow chart of line segment point candidate projector column number acquisition.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the drawings and examples; the technical solutions in the embodiments of the present invention are described clearly and completely. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without any inventive step fall within the scope of the present invention.
Aiming at the problems of traditional multi-line structured light scanning, namely sparse and unevenly distributed reconstruction results, the need for marker points to assist positioning, and the influence of multi-line intersections on the reconstruction result, the invention provides a dense parallel line structured light three-dimensional scanning method that retains the advantages of traditional multi-line structured light while solving its problems. As shown in fig. 3, the method comprises the following steps:
S1, designing a dense parallel line single-frame pattern. Compared with the traditional pattern mode with multi-line intersections, the invention adopts a dense parallel line pattern in which the parallel lines are arranged in parallel along the column direction, avoiding the problem of inaccurate reconstruction near multi-line crossing positions. When the dense parallel lines are designed, the parallel line column pitch Grid is 8 pixels, so a projection pattern of 1800 × 1360 pixels contains num = 224 parallel lines in total, 16 times the capacity of the traditional multi-line cross structured light 2 × 7 = 14 line mode. Fig. 4 is a partial schematic view of the dense parallel line single-frame pattern.
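As an illustration only, the following minimal Python sketch generates such a pattern under the parameters stated above (the starting column and the single-pixel line width are assumptions; the patent fixes only the 8-pixel pitch and the total of num = 224 lines):

```python
import numpy as np

def make_parallel_line_pattern(width=1800, height=1360, grid=8):
    # Single-frame dense parallel line pattern of S1: one-pixel-wide bright
    # columns on a dark background, one column every `grid` pixels.
    pattern = np.zeros((height, width), dtype=np.uint8)
    cols = np.arange(grid, width, grid)  # assumed phase: first line at column 8
    pattern[:, cols] = 255
    return pattern, len(cols)

pattern, num = make_parallel_line_pattern()
print(num)  # 224 parallel lines for the 1800 x 1360 pattern
```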
S2, projecting the pre-designed single-frame dense parallel lines onto the surface of the measured object through the projector. As shown in fig. 5, the final pattern is produced by photolithography and then projected onto the object surface through an LED projection system; alternatively, a multi-line laser can be used as the light source to reduce the influence of background light on the acquisition of the projected pattern.
S3, shooting the modulated dense parallel line pattern by the binocular camera, and carrying out high-precision line extraction and segmentation. The specific flow is shown in fig. 6.
S31, the binocular camera photographs the object-modulated parallel line patterns from different angles. One camera is used as a left camera, the other camera is used as a right camera, and modulation parallel line images acquired by the cameras respectively are marked as I1,I2。
S32, Gaussian filtering. $I_1$ and $I_2$ are each filtered with a Gaussian filtering kernel template of 3×3 window size, giving the filtered images $I_1'$ and $I_2'$.
S33, generating the convolution kernel template of the column-direction first-order partial derivative of the two-dimensional Gaussian function. The two-dimensional Gaussian function is $G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$, where $(x,y)$ are the row and column coordinate values of an image point and $\sigma$ is the standard deviation of the Gaussian kernel, set to 0.75. Taking the first-order partial derivative of the two-dimensional Gaussian function with respect to $x$ gives the column-direction first-order partial derivative convolution kernel template $\frac{\partial G}{\partial x}=-\frac{x}{2\pi\sigma^{4}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$.
S34, convolving the images with the first-order partial derivative convolution kernel template to obtain gradient images. The template $\frac{\partial G}{\partial x}$ is applied to $I_1'$ and $I_2'$ by convolution, giving the gradient images $G_1$ and $G_2$.
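By way of illustration, a minimal NumPy/SciPy sketch of S32 through S34 follows (the kernel sampling and normalization choices are assumptions; the patent fixes only the 3×3 window and σ = 0.75):

```python
import numpy as np
from scipy.signal import convolve2d

SIGMA = 0.75  # standard deviation of the Gaussian kernel (S33)

def gaussian_kernel(size=3, sigma=SIGMA):
    # Sampled 2D Gaussian G(x,y) = exp(-(x^2+y^2)/(2 sigma^2)) / (2 pi sigma^2),
    # normalized to unit sum, used for the S32 filtering.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)  # xx varies along the column (x) axis
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()

def dgauss_x_kernel(size=3, sigma=SIGMA):
    # Column-direction first-order partial derivative template of S33:
    # dG/dx = -x/(2 pi sigma^4) * exp(-(x^2+y^2)/(2 sigma^2)).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return -xx / (2 * np.pi * sigma**4) * np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def filter_and_gradient(img):
    # S32: Gaussian filtering; S34: convolution with dG/dx gives the gradient image.
    img_f = convolve2d(img.astype(float), gaussian_kernel(), mode='same')
    grad = convolve2d(img_f, dgauss_x_kernel(), mode='same')
    return img_f, grad
```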
S35, performing peak detection and fine extraction of row-column coordinates. A mark matrix $M_1$ of the same size as $I_1'$ is allocated, with all entries initialized to False. Searching pixel by pixel along each row, consider an integer pixel point $P_1(x,y)$ of $I_1'$, where $(x,y)$ are its integer pixel coordinates. From the filtered image $I_1'$ and the gradient image $G_1$, obtain the gray value $i_1$ and gradient value $g_1$ of $P_1$, the gray values $i_-$ and $i_+$ of the points 2 pixels to its left and right, and the gradient values $g_-$ and $g_+$ of the points 1 pixel to its left and right. When the following three conditions hold simultaneously, peak detection succeeds and the mark matrix element $M_1(x,y)$ is set to True: ① $i_1 > 50$; ② $i_1 > i_-$ and $i_1 > i_+$; ③ $g_+ \times g_- < 0$. For a point $P_1(x,y)$ at which peak detection succeeds, the $x$ coordinate is corrected to obtain $u$: when $g_+ \times g_1 < 0$, $u = x + \frac{g_1}{g_1 - g_+}$; when $g_- \times g_1 < 0$, $u = x - \frac{g_1}{g_1 - g_-}$. The $y$ coordinate needs no correction and is denoted $v$, so a peak point detected on $I_1'$ can be written $P_1(x,y,u,v)$, where $(u,v)$ are its sub-pixel coordinates. Suppose $p_1$ peak points are detected on $I_1'$ in total, denoted $P_i^1(x,y,u,v)$, $0 \le i \le p_1$. Performing the same operation on the right camera image $I_2'$ gives the corresponding peak points $P_i^2(x,y,u,v)$, $0 \le i \le p_2$, and its mark matrix $M_2$, where $p_2$ is the number of peak points detected by the right camera.
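A sketch of the S35 peak test and sub-pixel correction, under the reconstruction of conditions ① to ③ given above (helper names hypothetical):

```python
def detect_peaks(img_f, grad, i_thresh=50.0):
    # Scan each row pixel by pixel; a peak needs (1) gray value above the
    # threshold, (2) gray value above both 2-pixel neighbors, and
    # (3) a gradient sign change across the 1-pixel neighbors.
    peaks = []
    h, w = img_f.shape
    for y in range(h):
        for x in range(2, w - 2):
            i1, g1 = img_f[y, x], grad[y, x]
            im, ip = img_f[y, x - 2], img_f[y, x + 2]
            gm, gp = grad[y, x - 1], grad[y, x + 1]
            if not (i1 > i_thresh and i1 > im and i1 > ip and gp * gm < 0):
                continue
            if gp * g1 < 0:      # zero crossing of the gradient in [x, x+1]
                u = x + g1 / (g1 - gp)
            elif gm * g1 < 0:    # zero crossing of the gradient in [x-1, x]
                u = x - g1 / (g1 - gm)
            else:
                u = float(x)
            peaks.append((x, y, u, float(y)))  # P(x, y, u, v) with v = y
    return peaks
```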
S36, detecting the eight-neighborhood relations of the peak points, adjacent peak points forming line segments. For $P_i^1(x,y,u,v)$, $0 \le i \le p_1$, allocate a tag array $B_1$ of size $p_1$, with all entries initialized to True. Take the first element $P_1^1$, set its tag $B_1^1$ to False, and search the eight-neighborhood of its integer pixel coordinates $(x,y)$ in the mark matrix $M_1$; if the $M_1$ value of an eight-neighborhood point is True, a connection relation exists, so store the neighborhood point information in the line segment linked list $L_1^1$, set the corresponding $M_1$ value to False, and continue searching from it as a new search point until no search point remains. The first line segment $L_1^1$ is thereby obtained; its $j$-th point is denoted $Q_{1j}^1(x,y,u,v)$, $0 \le j \le l_1^1$, where $l_1^1$ is the number of points on segment $L_1^1$. Then return to $P_i^1(x,y,u,v)$, $0 \le i \le p_1$, take the next element whose tag in $B_1$ is True, set its tag to False, and grow a new segment from it by the same eight-neighborhood procedure; repeat until all elements of $B_1$ are False. Record the number of segments obtained as $l_1$ and the segment set as $L_1$; the $j$-th point of the $i$-th segment $L_i^1$ is $Q_{ij}^1(x,y,u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$. In the same way, the peak points $P_i^2(x,y,u,v)$, $0 \le i \le p_2$, yield the segment set $L_2$, whose $i$-th segment $L_i^2$ has $j$-th point $Q_{ij}^2(x,y,u,v)$, $0 \le i \le l_2$, $0 \le j \le l_i^2$.
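The eight-neighborhood growing of S36 can be sketched as a stack-based flood fill over the mark matrix (a sketch only, not the patent's exact linked-list bookkeeping):

```python
import numpy as np

def grow_segments(peaks, shape):
    # M plays the role of the mark matrix M1: it stores the peak index at each
    # peak pixel and -1 (False) elsewhere; B is the tag array B1.
    M = -np.ones(shape, dtype=int)
    for idx, (x, y, u, v) in enumerate(peaks):
        M[y, x] = idx
    B = np.ones(len(peaks), dtype=bool)
    segments = []
    for start in range(len(peaks)):
        if not B[start]:
            continue
        B[start] = False
        M[peaks[start][1], peaks[start][0]] = -1
        stack, segment = [start], [peaks[start]]
        while stack:
            x, y = peaks[stack.pop()][:2]
            for dy in (-1, 0, 1):          # search the eight neighbors
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < shape[0] and 0 <= nx < shape[1] and M[ny, nx] >= 0:
                        j = M[ny, nx]
                        M[ny, nx] = -1     # consume the mark, as in S36
                        B[j] = False
                        stack.append(j)
                        segment.append(peaks[j])
        segments.append(segment)
    return segments
```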
S4, determining the candidate projector column number values of each extracted point on the segmented line segments by using the relative relation between the binocular camera and the spatial planes formed by the dense parallel lines. The specific flow is shown in fig. 7.
S41, solving the object-space intersection point coordinates of each left camera point with all parallel lines according to the calibration parameters of the left camera and the projector parallel lines. For a line segment point $Q_{ij}^1(x,y,u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, extracted on the left camera, using the pre-calibrated parameters of the left camera and the projector parallel lines, intersect the point $Q_{ij}^1$, by way of line-plane intersection, with the planes formed by all projector parallel lines, obtaining a series of object-space coordinate points denoted $S_{ijk}^1(X,Y,Z)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, $0 \le k \le num$, where $(X,Y,Z)$ are the corresponding object-space point coordinates and $k$ is the projector parallel line column number. Perform this operation on all left camera line segment points to obtain the object-space point coordinates of all left camera line segment points.
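A minimal sketch of the S41 line-plane intersection, assuming each calibrated projector plane is stored as a pair (n, d) with n·X + d = 0 and the calibrated camera ray as an origin plus a direction (this parameterization is an assumption; the patent only states that the parameters are pre-calibrated):

```python
import numpy as np

def intersect_ray_plane(cam_center, ray_dir, plane):
    # Camera ray X(t) = C + t * dir intersected with plane n . X + d = 0.
    n, d = plane
    t = -(np.dot(n, cam_center) + d) / np.dot(n, ray_dir)
    return cam_center + t * ray_dir

def candidate_object_points(cam_center, ray_dir, projector_planes):
    # One candidate object point S_ijk per projector column k, 0 <= k < num.
    return [intersect_ray_plane(cam_center, ray_dir, pl) for pl in projector_planes]
```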
S42, projecting the object-space intersection point coordinates onto the right camera according to the right camera calibration parameters to obtain right camera back-projection coordinates. Using the pre-calibrated right camera parameters, project $S_{ijk}^1(X,Y,Z)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, $0 \le k \le num$, onto the right camera, obtaining the image point coordinates denoted $T_{ijk}^1(u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, $0 \le k \le num$, where $(u,v)$ are the right camera image-space coordinate values obtained by back projection. Perform this operation on all left camera line segment points to obtain the back-projection coordinates of all left camera line segment points on the right camera.
S43, when the distance from a back-projection coordinate to the nearest right camera line segment is within a threshold range, taking the corresponding column number as a candidate projector column number of the left camera point. For $T_{ijk}^1$, find the nearest line segment point among the right camera line segment points $Q_{ij}^2(x,y,u,v)$, $0 \le i \le l_2$, $0 \le j \le l_i^2$, and compute the shortest pixel distance value Dis from $T_{ijk}^1$ to the line segment on which that point lies. When Dis < Thre (the invention sets Thre = 0.5), the value $k$ is stored as one of the candidate projector column numbers of $Q_{ij}^1$; all candidate projector column numbers of the point $Q_{ij}^1$ form the sequence $CAND_{ij}(c_1,\dots,c_m)$, where $m$ is the number of candidate projector column numbers of the point. Perform this operation on all left camera line segment points to obtain the candidate projector column number sequence of all left camera line segment points.
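S42 and S43 together can be sketched as follows, assuming a 3×4 pinhole projection matrix for the right camera and using point-to-point distance as a simplification of the patent's point-to-segment distance:

```python
import numpy as np

def back_project(P2, X):
    # S42: project object point X with the right camera's calibrated 3x4 matrix.
    x = P2 @ np.append(X, 1.0)
    return x[:2] / x[2]  # back-projected (u, v) on the right image

def candidate_columns(T_k, right_segments, thre=0.5):
    # S43: keep column k when its back-projection lies within `thre` pixels of
    # some right-camera segment; right segments hold points q = (x, y, u, v).
    cand = []
    for k, t in enumerate(T_k):
        dis = min(np.hypot(q[2] - t[0], q[3] - t[1])
                  for seg in right_segments for q in seg)
        if dis < thre:
            cand.append(k)
    return cand
```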
S5, pruning the candidate projector column number values by utilizing the consistency of the projector column numbers of the extracted points on the same line segment. Since the projector column numbers of the extracted points on the same segment are consistent, traverse the left camera segment sequence: for the $i$-th segment $L_i^1$, merge the candidate projector column number sequences $CAND_{ij}(c_1,\dots,c_m)$ of its segment points into one sequence $UCAND_i$; merge and count identical projector column numbers in $UCAND_i$ and sort by count; if the count of the most frequent column number is more than twice the count of the second most frequent one, assign that column number value $c$ to the segment and to the points associated with it, recorded respectively as $L_i^1(c)$ and $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$; otherwise delete the current segment. Perform this operation on all left camera segments to obtain the column number values of all left camera segments.
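The S5 vote can be sketched with a counter (a sketch of the two-to-one majority rule stated above; names hypothetical):

```python
from collections import Counter

def assign_segment_column(cand_per_point):
    # cand_per_point: candidate column lists CAND_ij of all points on one segment.
    votes = Counter(c for cands in cand_per_point for c in cands)
    if not votes:
        return None
    ranked = votes.most_common()
    if len(ranked) == 1 or ranked[0][1] > 2 * ranked[1][1]:
        return ranked[0][0]  # column number c assigned to the whole segment
    return None              # ambiguous vote: the segment is deleted
```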
S6, verifying the validity of the candidate projector column number values by using the continuity of adjacent line segment column numbers. When adjacent segments lie side by side, their column numbers are continuous, and this continuity is not destroyed by differences in parallax. Therefore a line scanning method is adopted: allocate a statistical array $R$ of size $l_1$, one entry per segment; for the points $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, crossed on the same image row, sort them by column coordinate and judge whether their column number sequence is arranged in order; if a mutation point is found, the column number value of the segment $L_i^1(c)$ on which it lies is erroneous, and the corresponding statistical array value in $R$ is incremented by 1. After all rows of the left image have been scanned, judge for the $i$-th segment the ratio of its number of segment points $l_i^1$ to its $R$ value; if the ratio is less than 4, the segment is discarded, otherwise it is retained.
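A sketch of the S6 scanline continuity statistic (the per-point record layout is hypothetical, and "arranged in order" is read here as non-decreasing column numbers):

```python
def scanline_continuity(row_points, R):
    # One image row: points as {'u': column coordinate, 'c': column number,
    # 'seg': segment index}. After sorting by column coordinate the column
    # numbers must not decrease; a mutation point charges R of its segment.
    row_points.sort(key=lambda p: p['u'])
    for a, b in zip(row_points, row_points[1:]):
        if b['c'] < a['c']:
            R[b['seg']] += 1

def keep_segment(num_points, r, ratio=4):
    # After all rows: retain segment i only if l_i / R_i >= 4; a segment that
    # was never charged (R_i == 0) is always retained.
    return r == 0 or num_points / r >= ratio
```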
S7, reconstructing the point cloud point by point according to the image point coordinates on each line segment and the corresponding projector column number. For an extracted left image line segment point $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, using the pre-calibrated parameters of the left camera and the projector, the object-space point coordinates are intersected by the line-plane intersection method from the sub-pixel image point coordinates $(u,v)$ and the projector column number $c$, completing the point cloud reconstruction of the single-frame image.
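The final S7 triangulation reuses the same line-plane intersection as in the S41 sketch, now with the single verified plane c (the (n, d) plane parameterization is again an assumption):

```python
import numpy as np

def reconstruct_point(cam_center, ray_dir, projector_planes, c):
    # Intersect the calibrated left-camera ray of image point (u, v) with
    # projector plane number c to obtain one point of the single-frame cloud.
    n, d = projector_planes[c]
    t = -(np.dot(n, cam_center) + d) / np.dot(n, ray_dir)
    return cam_center + t * ray_dir
```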
As can be seen from the implementation steps, compared with the traditional method the invention has the following remarkable advantages:
1) dense parallel line structured light is adopted, obtaining dense and uniformly distributed structural information;
2) positioning can be performed using geometric features, solving the traditional method's reliance on marker point positioning;
3) parallel line structured light is adopted, avoiding the judgment of line intersections and their influence on the reconstruction result.
In addition, the invention is applicable not only to a dual-camera structured light three-dimensional measurement system but also to a multi-camera structured light three-dimensional measurement system. In specific implementation, the above processes can be run automatically in the form of computer software, and a system device running the method also falls within the scope of protection.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope defined in the appended claims.
Claims (9)
1. A dense parallel line structured light three-dimensional scanning method, characterized by comprising the following steps:
S1, designing a dense parallel line single-frame pattern;
S2, projecting the pre-designed single-frame dense parallel lines onto the surface of the measured object through a projector;
S3, shooting the modulated dense parallel line pattern with a binocular camera, and performing high-precision line extraction and segmentation;
S4, determining the candidate projector column number values of each extracted point on the segmented line segments by using the relative relation between the binocular camera and the spatial planes formed by the dense parallel lines;
S5, pruning the candidate projector column number values by utilizing the consistency of the projector column numbers of the extracted points on the same line segment;
S6, verifying the validity of the candidate projector column number values by using the continuity of adjacent line segment column numbers;
S7, reconstructing the point cloud point by point according to the image point coordinates on each line segment and the corresponding projector column number.
2. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S1 is: in the dense parallel line pattern, the parallel lines are arranged in parallel along the column direction, the parallel line column pitch Grid is 8 pixels, and a projection pattern of 1800 × 1360 pixels contains num = 224 parallel lines in total.
3. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S2 is: the final pattern is produced by photolithography and projected onto the object surface through an LED projection system; a multi-line laser can also be adopted as the light source to reduce the influence of background light on the acquisition of the projected pattern.
4. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S3 is as follows:
S31, the binocular camera photographs the object-modulated parallel line pattern from different angles; one camera serves as the left camera and the other as the right camera, and the modulated parallel line images they respectively acquire are denoted $I_1$ and $I_2$;
S32, Gaussian filtering: $I_1$ and $I_2$ are each filtered with a Gaussian filtering kernel template of 3×3 window size, giving the filtered images $I_1'$ and $I_2'$;
S33, generating the convolution kernel template of the column-direction first-order partial derivative of the two-dimensional Gaussian function: the two-dimensional Gaussian function is $G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$, where $(x,y)$ are the row and column coordinate values of an image point and $\sigma$ is the standard deviation of the Gaussian kernel, set to 0.75; taking the first-order partial derivative with respect to $x$ gives the column-direction first-order partial derivative convolution kernel template $\frac{\partial G}{\partial x}=-\frac{x}{2\pi\sigma^{4}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$;
S34, convolving the images with the first-order partial derivative convolution kernel template to obtain gradient images: the template $\frac{\partial G}{\partial x}$ is applied to $I_1'$ and $I_2'$ by convolution, giving the gradient images $G_1$ and $G_2$;
S35, performing peak detection and fine extraction of row-column coordinates: a mark matrix $M_1$ of the same size as $I_1'$ is allocated, with all entries initialized to False; searching pixel by pixel along each row, consider an integer pixel point $P_1(x,y)$ of $I_1'$, where $(x,y)$ are its integer pixel coordinates; from the filtered image $I_1'$ and the gradient image $G_1$, obtain the gray value $i_1$ and gradient value $g_1$ of $P_1$, the gray values $i_-$ and $i_+$ of the points 2 pixels to its left and right, and the gradient values $g_-$ and $g_+$ of the points 1 pixel to its left and right; when the following three conditions hold simultaneously, peak detection succeeds and the mark matrix element $M_1(x,y)$ is set to True: ① $i_1 > 50$; ② $i_1 > i_-$ and $i_1 > i_+$; ③ $g_+ \times g_- < 0$; for a point $P_1(x,y)$ at which peak detection succeeds, the $x$ coordinate is corrected to obtain $u$: when $g_+ \times g_1 < 0$, $u = x + \frac{g_1}{g_1 - g_+}$; when $g_- \times g_1 < 0$, $u = x - \frac{g_1}{g_1 - g_-}$; the $y$ coordinate needs no correction and is denoted $v$, so a peak point detected on $I_1'$ is written $P_1(x,y,u,v)$, where $(u,v)$ are its sub-pixel coordinates; suppose $p_1$ peak points are detected on $I_1'$ in total, denoted $P_i^1(x,y,u,v)$, $0 \le i \le p_1$; the same operation on the right camera image $I_2'$ gives the corresponding peak points $P_i^2(x,y,u,v)$, $0 \le i \le p_2$, and its mark matrix $M_2$, where $p_2$ is the number of peak points detected by the right camera;
S36, detecting the eight-neighborhood relations of the peak points, adjacent peak points forming line segments: for $P_i^1(x,y,u,v)$, $0 \le i \le p_1$, allocate a tag array $B_1$ of size $p_1$, with all entries initialized to True; take the first element $P_1^1$, set its tag $B_1^1$ to False, and search the eight-neighborhood of its integer pixel coordinates $(x,y)$ in the mark matrix $M_1$; if the $M_1$ value of an eight-neighborhood point is True, store the neighborhood point information in the line segment linked list $L_1^1$, set the corresponding $M_1$ value to False, and continue searching from it as a new search point until no search point remains, obtaining the first line segment $L_1^1$, whose $j$-th point is denoted $Q_{1j}^1(x,y,u,v)$, $0 \le j \le l_1^1$, where $l_1^1$ is the number of points on segment $L_1^1$; then take the next element whose tag in $B_1$ is True, set its tag to False, and grow a new segment from it by the same eight-neighborhood procedure; repeat until all elements of $B_1$ are False; record the number of segments obtained as $l_1$ and the segment set as $L_1$, the $j$-th point of the $i$-th segment $L_i^1$ being $Q_{ij}^1(x,y,u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$; in the same way, the right camera peak points $P_i^2(x,y,u,v)$, $0 \le i \le p_2$, yield the segment set $L_2$, whose $i$-th segment $L_i^2$ has $j$-th point $Q_{ij}^2(x,y,u,v)$, $0 \le i \le l_2$, $0 \le j \le l_i^2$.
5. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S4 is as follows:
S41, solving the object-space intersection point coordinates of each left camera point with all parallel lines according to the calibration parameters of the left camera and the projector parallel lines: for a line segment point $Q_{ij}^1(x,y,u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, extracted on the left camera, using the pre-calibrated parameters of the left camera and the projector parallel lines, intersect the point $Q_{ij}^1$, by way of line-plane intersection, with the planes formed by all projector parallel lines, obtaining a series of object-space coordinate points denoted $S_{ijk}^1(X,Y,Z)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, $0 \le k \le num$, where $(X,Y,Z)$ are the corresponding object-space point coordinates and $k$ is the projector parallel line column number; perform this operation on all left camera line segment points to obtain the object-space point coordinates of all left camera line segment points;
S42, projecting the object-space intersection point coordinates onto the right camera according to the right camera calibration parameters to obtain right camera back-projection coordinates: using the pre-calibrated right camera parameters, project $S_{ijk}^1(X,Y,Z)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, $0 \le k \le num$, onto the right camera, obtaining the image point coordinates denoted $T_{ijk}^1(u,v)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, $0 \le k \le num$, where $(u,v)$ are the right camera image-space coordinate values obtained by back projection; perform this operation on all left camera line segment points to obtain the back-projection coordinates of all left camera line segment points on the right camera;
S43, when the distance from a back-projection coordinate to the nearest right camera line segment is within a threshold range, taking the corresponding column number as a candidate projector column number of the left camera point: for $T_{ijk}^1$, find the nearest line segment point among the right camera line segment points $Q_{ij}^2(x,y,u,v)$, $0 \le i \le l_2$, $0 \le j \le l_i^2$, and compute the shortest pixel distance value Dis from $T_{ijk}^1$ to the line segment on which that point lies; when Dis < Thre (the invention sets Thre = 0.5), the value $k$ is stored as one of the candidate projector column numbers of $Q_{ij}^1$; all candidate projector column numbers of the point $Q_{ij}^1$ form the sequence $CAND_{ij}(c_1,\dots,c_m)$, where $m$ is the number of candidate projector column numbers of the point; perform this operation on all left camera line segment points to obtain the candidate projector column number sequence of all left camera line segment points.
6. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S5 is: traverse the left camera segment sequence; for the $i$-th segment $L_i^1$, merge the candidate projector column number sequences $CAND_{ij}(c_1,\dots,c_m)$ of its segment points into one sequence $UCAND_i$; merge and count identical projector column numbers in $UCAND_i$ and sort by count; if the count of the most frequent column number is more than twice the count of the second most frequent one, assign that column number value $c$ to the segment and to the points associated with it, recorded respectively as $L_i^1(c)$ and $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$; otherwise delete the current segment; perform this operation on all left camera segments to obtain the column number values of all left camera segments.
7. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S6 is: a line scanning method is adopted, and a statistical array $R$ of size $l_1$ is allocated, one entry per segment; for the points $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, crossed on the same image row, sort them by column coordinate and judge whether their column number sequence is arranged in order; if a mutation point is found, the column number value of the segment $L_i^1(c)$ on which it lies is erroneous, and the corresponding statistical array value in $R$ is incremented by 1; after all rows of the left image have been scanned, judge for the $i$-th segment the ratio of its number of segment points $l_i^1$ to its $R$ value; if the ratio is less than 4, the segment is discarded, otherwise it is retained.
8. The dense parallel line structured light three-dimensional scanning method according to claim 1, characterized in that the implementation manner of S7 is: for an extracted left image line segment point $Q_{ij}^1(x,y,u,v,c)$, $0 \le i \le l_1$, $0 \le j \le l_i^1$, using the pre-calibrated parameters of the left camera and the projector, the object-space point coordinates are intersected by the line-plane intersection method from the sub-pixel image point coordinates $(u,v)$ and the projector column number $c$, completing the point cloud reconstruction of the single-frame image.
9. A dense parallel line structured light three-dimensional scanning system, characterized in that: the system is used for implementing the dense parallel line structured light three-dimensional scanning method according to any one of claims 1 to 8.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010282081.4A | 2020-04-11 | 2020-04-11 | Compact parallel line structured light three-dimensional scanning method and system
Publications (2)

Publication Number | Publication Date
---|---
CN111307069A | 2020-06-19
CN111307069B | 2023-06-02
Family

ID=71152050

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010282081.4A | Compact parallel line structured light three-dimensional scanning method and system | 2020-04-11 | 2020-04-11

Country Status (1)

Country | Link
---|---
CN | CN111307069B
Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112833816A * | 2020-12-31 | 2021-05-25 | | Positioning method and system with mixed landmark positioning and intelligent reverse positioning
Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20150054946A1 * | 2013-08-21 | 2015-02-26 | Faro Technologies, Inc. | Real-time inspection guidance of triangulation scanner
US20150229911A1 * | 2014-02-13 | 2015-08-13 | Chenyang Ge | One method of binocular depth perception based on active structured light
CN106524943A * | 2016-11-10 | 2017-03-22 | 华南理工大学 | Three-dimensional reconstruction device and method of dual-rotation laser
CA3022442A1 * | 2017-10-24 | 2019-01-02 | Shining 3D Tech Co., Ltd. | Three-dimensional reconstruction method and device based on monocular three-dimensional scanning system
Also Published As

Publication number | Publication date
---|---
CN111307069B | 2023-06-02
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |