CN111336949B - Space coding structured light three-dimensional scanning method and system - Google Patents
- Publication number
- CN111336949B (application CN202010281877.8A)
- Authority
- CN
- China
- Prior art keywords
- decoding
- point
- projector
- image
- points
- Prior art date
- Legal status: Active
Classifications
- G01B11/25 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object
- G01B11/254 — Projection of a pattern, viewing through a pattern, e.g. moiré
- G01B11/2509 — Color coding
- G01B11/2518 — Projection by scanning of the object
- G01B11/2433 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures, for measuring outlines by shadow casting

(All within G — Physics; G01 — Measuring; testing; G01B — Measuring length, thickness or similar linear dimensions; measuring angles; measuring areas; measuring irregularities of surfaces or contours; G01B11/00 — Measuring arrangements characterised by the use of optical techniques; G01B11/24 — for measuring contours or curvatures.)
Abstract
The invention provides a spatially coded structured light three-dimensional scanning method and system, comprising: designing a single-frame spatial coding pattern that contains decoding points used for decoding and encryption points used for densifying the reconstruction; projecting the pre-coded single-frame pattern onto the surface of the measured object through a projector; shooting the object-modulated coding pattern with a camera and extracting the image decoding points and encryption points; determining candidate projector column numbers of the decoding points using the horizontal epipolar constraint together with the neighbourhood relative-displacement relation; constructing and updating a statistical weight array for each decoding point and pruning its candidate projector column numbers accordingly; deducing the projector column numbers of the encryption points from those of the decoding points; and reconstructing a point cloud from the image coordinates of the decoding and encryption points and their corresponding projector column numbers. The invention addresses the low robustness, poor precision, large image space required for decoding, and poor extensibility of traditional spatial coding structured light.
Description
Technical Field
The invention relates to the field of three-dimensional measurement, in particular to a space coding structured light three-dimensional scanning method and system.
Background
In conventional image-based modeling, the light source is uncoded ambient or white light, and image recognition depends entirely on feature points of the photographed object, so matching has always been a difficulty. The structured light method differs in that the projected light source is coded: the camera captures the image of the coded light source after it has been projected onto the object and modulated by the depth of the object surface, as shown in Fig. 1. Because the structured light source carries many coding features, feature-point matching becomes straightforward; the structured light method actively supplies feature points for matching instead of relying on those of the photographed object, and therefore yields better matching results. In addition, in ordinary image modeling the photographed objects vary, each matching task faces different images, and feature points must be extracted anew each time; the structured light method projects the same pattern, so the features are fixed and need not change with the scene, which reduces matching difficulty and improves matching efficiency.
The core of the structured light method is the design and identification of the code. As shown in Fig. 2, coding schemes fall into six types: spatial coding, temporal coding, spatio-temporal coding, multi-wavelength composite coding, direct coding, and line structured light scanning. Spatial coding hides the coding information in the image space of a single projected frame. Temporal coding projects several different coding patterns in time sequence and decodes by combining the corresponding sequence of coded images. Spatio-temporal coding combines the two schemes, solving the coding-capacity problem through the temporal coding and the reconstruction problem through the spatial coding. Multi-wavelength composite coding acquires different coding information on different wavebands at the same instant for reconstruction. Direct coding assigns a codeword to each pixel of the coding pattern directly through properties of the projected light (such as color or gray-level changes). Line structured light scanning projects multiple line lasers onto the surface of the measured object, and the algorithm extracts the line features in the image to achieve matching and reconstruction.
Compared with other structured light methods, the spatial coding method is better suited to dynamic and handheld measurement because it encodes and decodes within a single frame. Document US7768656B2 uses large rectangular code elements in which the rectangles along the column direction have a direct connection relationship; a special projector and camera mounting arrangement produces a row-coordinate difference that approximates an epipolar constraint. The large rectangles are extended into 6 distinct symbol types, and a one-dimensional De Bruijn pseudo-random sequence over the 6 symbols yields 216 unique code values; decoding requires the symbols of three adjacent columns. The drawbacks are severe demands on the hardware installation conditions and a coding space that cannot be extended. Document CN101627280B adopts diagonal square blocks, coding in a 2×3 row-column pattern through the diagonal direction and the black/white color of the corner squares; it adds symbol verification and thus some reliability guarantee, but its 2-ary order-6 De Bruijn pseudo-random sequence gives only a 64-column coding space, which is likewise inextensible. Document CN103400366B describes the coded value with lines of different thickness; the drawback is that line thickness is easily corrupted by noise, the extraction method, and depth modulation of the object surface, producing erroneous coding results.
Document CN104457607A adopts a symmetric hourglass shape, forming different symbols by 0°/45°/90°/135° rotation and coding 9 consecutive symbols in a specific 3×3 row-column order; this is essentially an M-array coding method that does not exploit the epipolar constraint relationship, and its coding capacity is very limited. Document WO2018219227A1 uses a pattern coding scheme in which different patterns are placed in the white squares of a checkerboard; it does not use the epipolar relationship and is a two-dimensional coding method with very limited capacity. Moreover, such pattern coding is particularly vulnerable to noise, which hinders the extraction of high-precision matching points. Document CN109242957A uses a coding principle similar to that of US7768656B2 and the same basic symbols, adding only a 2×3 M-array spatial constraint. Document CN109540023A uses binary thin-line coding, encoding by the presence or absence of thin lines on the four diagonals; the drawbacks are that the symbols are too thin and easily disturbed by noise, that diagonal thin lines hinder high-precision center extraction, and that thin lines of adjacent symbols tend to interfere with each other.
In summary, the currently existing spatial coding methods have the following drawbacks:
1) The coding pattern is easily corrupted by the shape, color and material of the object surface;
2) Reconstruction requires a large image space and the recovery of geometric detail is weak: De Bruijn pseudo-random sequences and M-arrays need a large image space for stable decoding, so complex objects are measured poorly;
3) The coding patterns are particularly susceptible to noise, and in some methods the code elements even interfere with one another;
4) The coding considers only the uniqueness of the decoded pattern and neglects the influence of the pattern design on extracting high-precision matching points;
5) The coding schemes are inflexible, with fixed coding capacity and no extensibility.
Disclosure of Invention
To address the low robustness, poor precision, large image space required for decoding, and poor extensibility of traditional spatial coding structured light, the invention provides a spatially coded structured light three-dimensional scanning method comprising the following steps:
s1, designing a single-frame space coding pattern, wherein the single-frame space coding pattern comprises decoding points used for decoding and encryption points used for encrypting;
s2, projecting the pre-coded single-frame space coding pattern to the surface of the object to be measured through a projector;
s3, the camera shoots the space coding pattern modulated by the object, and extracts image decoding points and encryption points;
s4, determining the candidate projector column numbers of the decoding points by utilizing the relation between the horizontal epipolar line constraint and the neighborhood relative displacement;
s5, constructing and updating a statistic weight array of the decoding point, and deleting the column numbers of the candidate projectors of the decoding point according to the statistic weight array;
s6, deducing the projector column number of the encryption point according to the projector column number of the decoding point;
s7, reconstructing the point cloud according to the image coordinates of the decoding point and the encryption point and the corresponding projector column numbers.
On the basis of the technical scheme, the preferable S1 specifically comprises the following steps:
S11, designing the basic code element. To overcome the low robustness, poor precision and large image space required for decoding of traditional spatial coding patterns, the decoding-point code element adopts a simple dot. Assuming each code element of the single-frame spatial coding occupies a projector image space of size Grid, the invention sets Grid = 8 pixels; the spacing between code elements along the row and column directions is Grid, and the radius of the decoding-point dot in projector image coordinates is denoted r_d.
S12, designing the adjacent code element arrangement. The horizontal epipolar constraint is combined with the relative-displacement relation of the code element neighbourhood for coding. The horizontal epipolar constraint is determined by the installation of the camera and projector: using calibration parameters of the projector and camera calibrated in advance, the projector image coordinates of every coding point are converted into projector epipolar image coordinates, and the row coordinate value of the epipolar coordinates is recorded as the E value. The neighbourhood relative-displacement relation is determined by the vertical misalignment of adjacent code elements. Taking the current code element as center, an adjacent code element may be offset up or down by a fixed distance: a neighbour located vertically above the current code element is marked -1, one flush with it in the vertical direction is marked 0, and one located vertically below it is marked 1. The up/down misalignments of the neighbourhood give 9 cases in total, and the case index is recorded as the D value. When the spatial coding pattern is generated, the vertical misalignment of each column is generated randomly, the corresponding D value is determined, and the D value is stored per column. The i-th projector decoding point is recorded as M_i(D(COL), E), 0 < i < m, where m is the total number of projector decoding points, COL is the projector column number of the decoding point, D(COL) is the D value of column COL, and E is the E value of the decoding point.
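The per-column D-value bookkeeping described above can be sketched as follows. This is a hedged illustration: the offset distance, the `sign` reduction of neighbour offsets, and the mapping of the nine (left, right) offset pairs to case indices are assumptions made for the sketch; the case numbering follows the nine combinations enumerated in the detailed description.

```python
import random

# Assumed mapping from the (left neighbour, right neighbour) vertical
# offsets (-1 = above, 0 = flush, 1 = below) to the 9 D-value cases.
CASES = {(0, 0): 1, (0, 1): 2, (-1, 1): 3, (-1, -1): 4, (1, -1): 5,
         (1, 0): 6, (0, -1): 7, (1, 1): 8, (-1, 0): 9}

def generate_column_d_values(num_cols, seed=0):
    """Randomly assign a vertical dislocation to every column and derive
    each interior column's D value from its left/right neighbour offsets."""
    def sign(v):
        return (v > 0) - (v < 0)
    rng = random.Random(seed)
    shift = [rng.choice((-1, 0, 1)) for _ in range(num_cols)]
    d = {}
    for col in range(1, num_cols - 1):  # interior columns have two neighbours
        left = sign(shift[col - 1] - shift[col])
        right = sign(shift[col + 1] - shift[col])
        d[col] = CASES[(left, right)]
    return shift, d
```

Stored per column, these D values are what step S42 later compares against the misalignment observed in the camera image.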
S13, designing the decoding points and encryption points. Comparatively small dots are designed as the projector encryption points (densification points), with a dot radius r_e smaller than that of the decoding points. The projector decoding points take part in the decoding operation of the post-processing, which determines the column number of each point; the projector encryption points do not take part in decoding, and their column numbers are judged from the decoding-point information of their upper and lower neighbourhoods.
Based on the above technical solution, a preferred embodiment of S2 is: the final coding pattern is produced by photolithography and projected onto the object surface through a projection system; the projection system may use a common white-light LED or a light source in a specific waveband to reduce the influence of background light on acquisition of the projected pattern.
On the basis of the technical scheme, the preferable S3 specifically comprises the following steps:
s31, the camera shoots the spatial coding pattern I modulated by the object from different angles.
S32, roughly extracting decoding points. By usingThe convolution operation is carried out on the image I by using a central dot convolution template with the window size to obtain a convolution image C; performing local non-maximum suppression search on the convolution image C in a Grid multiplied by 2Grid window, wherein the obtained local maximum point is an image decoding point; assuming that the number of the detected image decoding points is S, recording the integral pixel coordinates S of the image decoding points i (x, y) 0.ltoreq.i.ltoreq.s, where (x, y) is an integer pixel coordinate value;
S33, fine extraction of decoding points. The integer pixel coordinates of each image decoding point are refined with a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v); the image decoding point is then recorded as S_i(x, y, u, v), 0 ≤ i ≤ s;
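The sub-pixel refinement of S33 can be illustrated with the classic three-point Gaussian fit applied independently per axis. This is a simple stand-in for the Gaussian ellipse fitting named in the text, not the patent's actual fitting procedure; it is exact when the peak of the convolution image is locally a separable Gaussian.

```python
from math import log

def gaussian_subpixel(conv, x, y):
    """Refine an integer peak (x, y) of convolution image `conv` to
    sub-pixel (u, v) via a three-point Gaussian fit per axis.
    Requires positive values around the peak."""
    def offset(l, c, r):
        denom = log(l) + log(r) - 2.0 * log(c)
        return 0.0 if denom == 0 else 0.5 * (log(l) - log(r)) / denom
    u = x + offset(conv[y][x - 1], conv[y][x], conv[y][x + 1])
    v = y + offset(conv[y - 1][x], conv[y][x], conv[y + 1][x])
    return u, v
```

A full ellipse fit would additionally recover the dot's orientation and eccentricity, which helps under strong perspective distortion of the projected dots.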
S34, coarse extraction of encryption points. For each image decoding point S_i(x, y, u, v), a local non-maximum-suppression search of the convolution image C is performed with a smaller window inside a rectangular image region around the point; the resulting local maxima are the image encryption points. Assuming t image encryption points are detected, their integer pixel coordinates are recorded as T_i(x, y), 0 ≤ i ≤ t, where (x, y) is the integer pixel coordinate value.
S35, fine extraction of encryption points. The integer pixel coordinates of each image encryption point are refined with a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v); the image encryption point is then recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
On the basis of the technical scheme, the preferable S4 specifically comprises the following steps:
S41, determining the candidate projector column numbers of the decoding points using the horizontal epipolar constraint. According to the calibration parameters of the projector and camera, the sub-pixel image coordinates (u, v) of image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar coordinates is compared one by one with the E_j values of the projector code points M_j(D(COL), E_j); whenever |e_i − E_j| < ε is satisfied (ε = 0.3 pixel in the present invention), the projector column number COL_j corresponding to M_j is assigned to decoding point S_i as a candidate. The candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k), where k is the number of candidates and (c_1, …, c_k) is the sequence of candidate column values. The process continues until the candidates of all image decoding points have been determined; if no candidate column can be found under the horizontal epipolar constraint, the current decoding point is deleted directly.
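The candidate search of S41 reduces to a tolerance comparison along the epipolar row coordinate. A minimal sketch (the list representation of the projector code points is an assumption of the sketch):

```python
EPS = 0.3  # pixel tolerance, as given in the text

def candidate_columns(e_i, projector_points):
    """Return every projector column COL_j whose epipolar row coordinate
    E_j lies within EPS of the decoding point's row coordinate e_i.
    projector_points: list of (COL, E) pairs."""
    return [col for col, E in projector_points if abs(e_i - E) < EPS]
```

With calibrated epipolar geometry this comparison can be accelerated by sorting the projector points by E and binary-searching the tolerance interval.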
S42, pruning the candidate projector column numbers with the neighbourhood relative-displacement relation. For the i-th decoding point S_i(x, y, u, v), the left and right neighbouring decoding points are searched and the corresponding D value, denoted d_i, is obtained from the neighbourhood relative-displacement relation. The candidate projector column numbers CAND_i(c_1, …, c_k) are taken out one by one; for each candidate column c_k the stored D value, denoted D_k, is looked up, and the candidate is kept when d_i = D_k, otherwise deleted. The pruned candidates of the i-th decoding point are recorded as CAND_i(c_1, …, c_n), where n is the number of remaining candidates and (c_1, …, c_n) is the sequence of candidate column values. The process continues until the candidates of all image decoding points have been pruned; if the i-th decoding point yields no D value, this step is skipped and all its candidate columns are kept directly.
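The pruning rule of S42 is a straightforward filter; a sketch (the dictionary representation of the per-column D values is an assumption of the sketch):

```python
def prune_by_d(cands, d_i, d_of_col):
    """Keep only candidate columns whose stored D value matches the D value
    d_i observed from the point's left/right image neighbourhood.
    If no D value could be observed (d_i is None), keep every candidate,
    as S42 prescribes."""
    if d_i is None:
        return list(cands)
    return [c for c in cands if d_of_col.get(c) == d_i]
```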
On the basis of the technical scheme, the preferable S5 specifically comprises the following steps:
S51, constructing the reference projector column numbers of a decoding point. An 8-neighbour search finds the 8 decoding points nearest to the current decoding point S_i(x, y, u, v) in image space, denoted U_1, …, U_8; each of these neighbours has its own candidate projector column numbers, denoted CAND_u1, …, CAND_u8. The candidate column numbers of the neighbouring decoding points are merged into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate column elements of the neighbours; this sequence is called the reference projector column numbers of the current decoding point;
S52, constructing a statistical weight array for the candidate projector column numbers from the reference projector column numbers. For the candidate columns CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n is created with initial value 0. The j-th candidate column c_j is compared in turn with each element u_w of the reference columns UCAND_i(u_1, …, u_w), and the weight V_j is accumulated according to the result of the comparison. This continues until all candidate columns have been traversed, giving the statistical array V; all decoding points are traversed to construct their respective weight arrays.
S53, updating the statistical weight array of the candidate projector column numbers with the reference projector column numbers. For the candidate columns CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), the j-th candidate column c_j is again compared in turn with each element u_w of UCAND_i(u_1, …, u_w), and V_j is updated with V_w, where V_w is the weight corresponding to element u_w. This continues until all candidate columns have been traversed and the statistical array V updated; all decoding points are traversed to update their respective weight arrays.
S54, step S53 is iterated 5 times to obtain a stable statistical weight array.
S55, selecting among the candidate projector column numbers by the statistical weight array. The weight array of each decoding point is sorted, the column number with the largest weight is selected as the column value of the current decoding point, and the decoding point is then recorded as S_i(x, y, u, v, c), where c is the resulting column number of the decoding point.
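Steps S51-S55 amount to an iterative neighbourhood-voting scheme. The sketch below is a hedged reconstruction: the exact comparison and accumulation formulas were not reproduced in this text, so the sketch assumes a candidate's weight grows when a neighbouring point carries a nearby column number (|c − u| ≤ 1, since horizontally adjacent decoding points differ by one column and vertically adjacent ones share a column).

```python
def resolve_columns(cands, neighbours, iters=5):
    """Pick one projector column per decoding point by neighbourhood voting.
    cands: {point_id: [candidate columns]}; neighbours: {point_id: [ids]}.
    The |c - u| <= 1 support rule is an assumption of this sketch."""
    # S52: initial weights - count supporting neighbour candidates
    w = {p: [0.0] * len(cs) for p, cs in cands.items()}
    for p, cs in cands.items():
        ref = [u for q in neighbours.get(p, []) for u in cands[q]]
        for j, c in enumerate(cs):
            w[p][j] = sum(1.0 for u in ref if abs(c - u) <= 1)
    # S53/S54: propagate neighbour weights for a few iterations
    for _ in range(iters):
        new = {p: ws[:] for p, ws in w.items()}
        for p, cs in cands.items():
            for j, c in enumerate(cs):
                for q in neighbours.get(p, []):
                    for k, u in enumerate(cands[q]):
                        if abs(c - u) <= 1:
                            new[p][j] += w[q][k]
        w = new
    # S55: keep the highest-weight candidate per point
    return {p: cs[max(range(len(cs)), key=lambda j: w[p][j])]
            for p, cs in cands.items() if cs}
```

The iteration lets mutually consistent column hypotheses reinforce one another, so isolated wrong candidates (such as the spurious columns 50 and 80 in the usage below) are voted out.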
Based on the above technical solution, a preferred embodiment of S6 is: traverse the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t, one by one; for each encryption point search its upper, lower, left and right neighbouring decoding points, and denote the column numbers of the four neighbours c_up, c_down, c_left, c_right. If c_up = c_down = c_left + 1 = c_right − 1 is satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up, and the encryption point is then recorded as T_i(x, y, u, v, c), where c is the resulting column number of the encryption point; otherwise the current encryption point is deleted. This step continues until all encryption points have been judged.
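The consistency check of S6 translates directly into code:

```python
def encryption_column(c_up, c_down, c_left, c_right):
    """Infer an encryption point's projector column from its four nearest
    decoding points' columns; return None when the consistency check of S6
    fails and the point should be discarded."""
    if c_up == c_down == c_left + 1 == c_right - 1:
        return c_up
    return None
```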
Based on the above technical solution, a preferred embodiment of S7 is: for each image decoding point S_i(x, y, u, v, c) and image encryption point T_i(x, y, u, v, c), the corresponding object point coordinates are obtained by the line-plane intersection method from the image point coordinates (u, v) and the projector column number c, using the pre-calibrated parameters of the camera and projector, completing the point cloud reconstruction of the single-frame coded image.
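The geometric core of S7 is intersecting the camera ray through (u, v) with the projector light plane of column c. A sketch of that intersection; the plane-per-column and ray parameterisation here are illustrative, and the patent's actual calibration model (intrinsics, extrinsics) is not reproduced:

```python
def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a camera ray (origin + t * direction) with a projector
    light plane given by a point on it and its normal; returns the 3-D
    object point, or None when the ray is parallel to the plane."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-12:
        return None
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, origin, plane_normal)) / dot
    return tuple(o + t * d for o, d in zip(origin, direction))
```

In a calibrated system the ray direction comes from back-projecting (u, v) through the camera intrinsics, and the plane is derived from the projector's calibration for column c.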
The invention also provides a space coding structured light three-dimensional scanning system which is used for the space coding structured light three-dimensional scanning method.
Compared with the prior art, the space coding structured light three-dimensional scanning method and system have the following beneficial effects:
1) Because dots are used as the basic code element, the coding pattern is not easily corrupted by the shape, color or material of the object surface;
2) Because the dot code element is small, reconstruction does not require a large image space and the recovery of geometric detail is strong;
3) Because dots are used as the basic code element and an effective set of decoding rules is designed, the coding is not easily disturbed by noise and the code elements do not interfere with one another;
4) Because dots are used as the basic code element, the design of the coding pattern favours the extraction of high-precision matching points;
5) Because an effective set of decoding rules is designed and dots are used as the basic code element, the coding scheme is flexible and the coding capacity space is extensible.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the structured light principle;
FIG. 2 is a schematic diagram of structured light classification;
FIG. 3 is a flow chart of a method for three-dimensionally scanning spatially encoded structured light according to the present invention;
FIG. 4 is a flow chart of a single frame spatial coding pattern design;
FIG. 5 is a graph of classification of relative position of neighbors of encoding points;
FIG. 6 is a schematic diagram of decoding points and encryption points;
FIG. 7 is a schematic diagram of a designed single frame spatial coding pattern;
FIG. 8 is a decoding point and encryption point extraction flow chart;
FIG. 9 is a schematic diagram of a center dot convolution template;
FIG. 10 is a flow chart of decoding point candidate projector column number puncturing;
FIG. 11 is a schematic diagram of a three-dimensional object to be measured;
FIG. 12 is a diagram of a camera capturing a spatially encoded pattern modulated by a three-dimensional object to be measured;
fig. 13 is a schematic diagram of a single frame network structure after decoding and reconstructing the acquired spatial encoding pattern.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in detail with reference to the accompanying drawings and examples, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
To address the low robustness, poor precision, large image space required for decoding, and poor extensibility of traditional spatial coding structured light, the invention provides a spatially coded structured light three-dimensional scanning method which, as shown in Fig. 3, comprises the following steps:
s1, designing a single-frame space coding pattern, wherein the single-frame space coding pattern comprises decoding points for decoding and encryption points for encrypting. The specific flow is shown in fig. 4.
S11, designing the basic code element. To overcome the low robustness, poor precision and large image space required for decoding of traditional spatial coding patterns, the decoding-point code element adopts a simple dot. Assuming each code element of the single-frame spatial coding occupies a projector image space of size Grid, the invention sets Grid = 8 pixels; the spacing between code elements along the row and column directions is Grid, and the radius of the decoding-point dot in projector image coordinates is denoted r_d. The advantages of dots are, first, that their features are obvious and easy to extract; second, that dot extraction precision is high, which benefits high-precision measurement; and finally, that dots occupy little image space, are little affected by the shape, color and material of the object surface, and give strong recovery of geometric detail. This code element design resolves many of the problems of existing spatial coding patterns.
S12, designing the adjacent code element arrangement. Since the basic code element is only a dot, coding with a De Bruijn pseudo-random sequence or an M-array is impossible; the relation between the horizontal epipolar constraint and the relative displacement of the code element neighbourhood is therefore used for coding. The horizontal epipolar constraint is determined by the installation of the camera and projector: using calibration parameters of the projector and camera calibrated in advance, the projector image coordinates of every coding point are converted into projector epipolar image coordinates, the row coordinate value of the epipolar coordinates is recorded as the E value, and the E values of the coding points differ from one another. The neighbourhood relative-displacement relation is determined by the vertical misalignment of adjacent code elements. As shown in Fig. 5, taking the current code element as center, an adjacent code element may be offset up or down by a fixed distance: a neighbour located vertically above the current code element is marked -1, one flush with it in the vertical direction is marked 0, and one located vertically below it is marked 1. The neighbourhood relative displacements give 9 cases in total, namely case 1 (0, 0), case 2 (0, 1), case 3 (-1, 1), case 4 (-1, -1), case 5 (1, -1), case 6 (1, 0), case 7 (0, -1), case 8 (1, 1) and case 9 (-1, 0), and the vertical-misalignment case index is recorded as the D value.
When the space coding pattern is generated, the vertical dislocation condition of each column is randomly generated; after generation, the D value corresponding to each column is determined and stored per column, with the final local dislocation situation shown in fig. 6. From the above description, the i-th projector decoding point is recorded as M_i(D(COL), E), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the projector decoding point, D(COL) is the D value of column COL of the projector decoding point, and E is the E value of the projector decoding point.
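The column-wise generation just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the mapping from random per-column vertical offsets to the nine (left, right) case labels is our own reading of the description, and the function and variable names are invented.

```python
import random

# The nine neighborhood displacement cases from the description: each key is
# the pair (left-neighbor offset, right-neighbor offset) relative to the
# current column, with -1 = above, 0 = flush, 1 = below.
CASES = {
    (0, 0): 1, (0, 1): 2, (-1, 1): 3, (-1, -1): 4, (1, -1): 5,
    (1, 0): 6, (0, -1): 7, (1, 1): 8, (-1, 0): 9,
}

def generate_column_offsets(num_cols, seed=0):
    """Randomly pick a vertical dislocation (-1, 0 or 1, in units of the
    fixed offset distance) for every column of the coding pattern."""
    rng = random.Random(seed)
    return [rng.choice((-1, 0, 1)) for _ in range(num_cols)]

def d_value(offsets, col):
    """D value of an interior column: the case number of its (left, right)
    neighbor displacements, clamped to the -1/0/1 labels of the text."""
    clamp = lambda v: max(-1, min(1, v))
    left = clamp(offsets[col - 1] - offsets[col])
    right = clamp(offsets[col + 1] - offsets[col])
    return CASES[(left, right)]
```

Storing `d_value(offsets, col)` per column then gives the D(COL) lookup used during decoding.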
S13, designing the decoding points and encryption points. If all points were projector decoding points, the computation amount would be relatively large; considering continuity along the column direction, relatively small dots are designed as projector encryption points, with the encryption point dot radius set smaller than the decoding point radius r. As shown in fig. 6, the decoding points and the encryption points are staggered in the row direction. The projector decoding points are used for the decoding operation of post-processing, in which the column number information corresponding to the point is judged; the projector encryption points do not participate in the decoding operation of post-processing, and their column numbers are judged from the decoding point information of the upper and lower neighborhoods. The design of decoding and encryption points effectively reduces the computation amount of decoding while reducing code element confusion.
S2, projecting the pre-coded single-frame space coding pattern onto the surface of the object to be measured through a projector. As shown in fig. 7, the final coding pattern is obtained by photolithography and then projected onto the object surface by a projection system; the projection system may use common LED white light or a light source of a specific wavelength band to reduce the influence of background light on acquisition of the projected pattern.
S3, the camera shoots the space coding pattern modulated by the object, and extracts the image decoding point and the encryption point. The specific flow is shown in fig. 8.
S31, the camera shoots a space coding pattern I modulated by an object from different angles;
S32, roughly extracting decoding points. A dot-shaped convolution template centered in its window is used to convolve image I, obtaining convolution image C; local non-maximum suppression search is then performed on C with a Grid × 2Grid window, and the resulting local maximum points are the image decoding points. FIG. 9 shows a typical 5×5 window. Assuming the number of detected image decoding points is s, the integer pixel coordinates of the image decoding points are recorded as S_i(x, y), 0 ≤ i ≤ s, where (x, y) are the integer pixel coordinate values.
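The coarse extraction step above can be sketched with standard array operations. This is a hedged illustration: the Gaussian-weighted 5×5 dot template is a hypothetical stand-in for the patent's template (the source only shows a typical 5×5 window), and the `threshold` parameter is our own addition to suppress the flat background.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

GRID = 8  # per-symbol spacing in pixels, as assumed in the description

def coarse_extract(image, grid=GRID, threshold=0.0):
    """Convolve the captured image with a small centered dot template,
    then keep local maxima of the response inside a grid x 2*grid
    window (local non-maximum suppression)."""
    yy, xx = np.mgrid[-2:3, -2:3]
    template = np.exp(-(xx ** 2 + yy ** 2) / 2.0)  # 5x5 dot-shaped weights
    c = convolve(image.astype(float), template, mode="constant")
    local_max = maximum_filter(c, size=(2 * grid, grid))
    ys, xs = np.nonzero((c == local_max) & (c > threshold))
    return list(zip(xs.tolist(), ys.tolist())), c
```

On a synthetic image with two isolated bright dots, both integer-pixel peaks are recovered, and the response image C is reused later for the encryption point search.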
S33, finely extracting decoding points. Coordinate fine extraction is performed on the integer pixel coordinates of the image decoding points by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v) of the image decoding points; the image decoding point is then recorded as S_i(x, y, u, v), 0 ≤ i ≤ s.
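A simplified version of the sub-pixel refinement can be sketched as follows. Note the hedge: this fits a 1-D parabola to log-intensities along the row and column through the peak, a common stand-in for the full Gaussian ellipse fit named in the text, which the patent does not spell out.

```python
import numpy as np

def subpixel_refine(c, x, y):
    """Refine the integer peak (x, y) of response image c to sub-pixel
    accuracy: a Gaussian profile is a parabola in log-intensity, so the
    vertex of the parabola through three log-samples gives the offset."""
    eps = 1e-12  # guard against log(0) in dark pixels
    lx = np.log(c[y, x - 1:x + 2] + eps)   # row through the peak
    ly = np.log(c[y - 1:y + 2, x] + eps)   # column through the peak
    dx = 0.5 * (lx[0] - lx[2]) / (lx[0] - 2 * lx[1] + lx[2])
    dy = 0.5 * (ly[0] - ly[2]) / (ly[0] - 2 * ly[1] + ly[2])
    return x + dx, y + dy
```

For an ideal Gaussian spot the log-intensity is exactly quadratic, so the recovered (u, v) matches the true center to well under a tenth of a pixel.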
S34, roughly extracting encryption points. For each image decoding point S_i(x, y, u, v), within a rectangular image region around it, local non-maximum suppression search is performed on the convolution image C with a smaller window, and the resulting local maximum points are the image encryption points. Assuming the number of detected image encryption points is t, the integer pixel coordinates of the image encryption points are recorded as T_i(x, y), 0 ≤ i ≤ t, where (x, y) are the integer pixel coordinate values.
S35, finely extracting encryption points. Coordinate fine extraction is performed on the integer pixel coordinates of the image encryption points by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v) of the image encryption points; the image encryption point is then recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
S4, determining the candidate projector column numbers of the decoding points by utilizing the relation between the horizontal epipolar line constraint and the neighborhood relative displacement.
S41, determining candidate projector column numbers of the decoding points using the horizontal epipolar constraint relation. According to the calibration parameters of the projector and the camera, the sub-pixel image coordinates (u, v) of image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar image coordinates is compared one by one with the E_j value of each projector code value M_j(D(COL), E). When the condition |e_i - E_j| ≤ ε is met (ε = 0.3 pixel in the invention), the projector column number COL_j corresponding to M_j is given to decoding point S_i. The candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k), where k is the number of candidate projector column numbers and (c_1, …, c_k) is the sequence of candidate projector column number values. The above process continues until the judgment of candidate projector column numbers of all image decoding points is completed. It is worth noting that if no candidate projector column number can be found according to the horizontal epipolar constraint relation, the currently extracted decoding point has a large error and is deleted directly.
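The epipolar candidate search reduces to a threshold comparison once the coordinates are converted. A minimal sketch, with invented names, assuming the projector points are given as (E value, column number) pairs:

```python
def epipolar_candidates(e_i, projector_points, eps=0.3):
    """Candidate projector columns for one image decoding point: every
    projector decoding point whose epipolar row value E lies within
    eps (0.3 pixel per the text) of the observed row value e_i
    contributes its column number. An empty result means the observed
    point is in error and should be deleted."""
    return [col for (E, col) in projector_points if abs(e_i - E) <= eps]
```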
S42, deleting candidate projector column numbers of the decoding points using the neighborhood relative displacement relation. For the i-th decoding point S_i(x, y, u, v), the left and right neighborhood decoding points are searched, and the corresponding D value is obtained according to the code element neighborhood relative displacement relation, denoted d_i. The candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one; according to candidate projector column number c_k, the corresponding D value, here denoted D_k, is obtained. When d_i = D_k, the candidate projector column number is retained; otherwise it is deleted. The candidate projector column numbers of the i-th decoding point after deletion by the neighborhood relative displacement relation are recorded as CAND_i(c_1, …, c_n), where n is the number of remaining candidate projector column numbers and (c_1, …, c_n) is the sequence of candidate projector column number values. The above process continues until the deletion of candidate projector column numbers of all image decoding points is completed. It should be noted that if the i-th decoding point does not obtain a D value, this step is skipped and all possible candidate projector column numbers are retained directly.
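The D-value pruning step can be sketched directly from the rule above; `d_of_col` is an assumed lookup from column number to the stored D value of that pattern column.

```python
def prune_by_displacement(d_i, candidates, d_of_col):
    """S42 sketch: keep only candidate columns whose stored D value
    matches the D value d_i observed from the left/right neighbors of
    the decoding point; when no D value could be observed (d_i is
    None), all candidates are retained, as the text prescribes."""
    if d_i is None:
        return list(candidates)
    return [c for c in candidates if d_of_col[c] == d_i]
```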
S5, constructing and updating a statistical weight array of the decoding points, and deleting candidate projector column numbers of the decoding points according to the statistical weight array. The specific flow is shown in fig. 10.
S51, constructing the reference projector column numbers of a decoding point. An 8-neighbor search method is adopted to find the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, denoted U_1, …, U_8. These 8 neighbor decoding points each have their own candidate projector column numbers, recorded as CAND_u1, …, CAND_u8. The candidate projector column numbers of the neighbor decoding points are combined into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the sum of the numbers of candidate projector column number elements of the neighbor decoding points; this sequence is called the reference projector column numbers of the current decoding point.
S52, constructing a statistical weight array for the candidate projector column numbers using the reference projector column numbers. For the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is established. The j-th candidate decoding column number c_j is compared in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w), and the weight V_j is accumulated whenever the column numbers match. This continues until all candidate decoding column numbers are traversed, obtaining the statistical weight array V. All decoding points are traversed to complete construction of their respective statistical weight arrays.
S53, updating the statistical weight array of the candidate projector column numbers using the reference projector column numbers. For the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), the j-th candidate decoding column number c_j is compared in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w); when the column numbers match, V_j is updated by accumulating V_w, where V_w is the weight value corresponding to element u_w. This continues until all candidate decoding column numbers are traversed, updating the statistical weight array V. All decoding points are traversed to complete updating of their respective statistical weight arrays.
S54, step S53 is iterated 5 times to obtain a stable statistical weight array result.
S55, deleting the candidate projector column number sequence according to the statistical weight array. The statistical weight array of each decoding point is sorted, and the column number with the largest weight is selected as the column number value of the current decoding point; at this time the decoding point is recorded as S_i(x, y, u, v, c), where c is the resulting column number value of the decoding point.
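Steps S51–S55 can be sketched for a single decoding point as below. This is a deliberate simplification under a stated assumption: the real method keeps one evolving weight array per point across the whole image, whereas here `reference` is a frozen list of (column, weight) pairs gathered from the 8 nearest neighbors.

```python
def vote_column(candidates, reference, iterations=5):
    """Statistical-weight voting sketch for one decoding point:
    initial weights are match counts against the neighbors' candidate
    columns (S52); the weights are then reinforced by the neighbors'
    weights over 5 iterations (S53/S54); the best-weighted candidate
    becomes the point's column number (S55)."""
    weights = [sum(1 for (u, _) in reference if u == c) for c in candidates]
    for _ in range(iterations):
        weights = [w + sum(vw for (u, vw) in reference if u == c)
                   for c, w in zip(candidates, weights)]
    return candidates[weights.index(max(weights))]
```

The effect is majority voting among neighbors: a candidate column supported by more (and more confident) neighbors accumulates weight faster and wins.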
S6, deducing the projector column numbers of the encryption points from the projector column numbers of the decoding points. The image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t, are traversed; for each, the adjacent decoding points above, below, left and right are searched by neighborhood, and the column numbers of the four adjacent decoding points are recorded as c_up, c_down, c_left and c_right respectively. If c_up = c_down = c_left + 1 = c_right - 1 is satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up, and the encryption point is recorded as T_i(x, y, u, v, c), where c is the obtained column number value of the encryption point; otherwise the current encryption point is deleted. This step continues until the judgment of all encryption points is completed.
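The consistency rule for encryption points is a single chained comparison; a minimal sketch with invented names:

```python
def encryption_point_column(c_up, c_down, c_left, c_right):
    """An encryption point inherits column c_up only when its four
    neighboring decoding points are mutually consistent, i.e.
    c_up == c_down == c_left + 1 == c_right - 1; otherwise the point
    is deleted (returned here as None)."""
    if c_up == c_down == c_left + 1 == c_right - 1:
        return c_up
    return None
```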
S7, reconstructing the point cloud according to the image coordinates of the decoding and encryption points and the corresponding projector column numbers. For image decoding point S_i(x, y, u, v, c) and image encryption point T_i(x, y, u, v, c), according to the calibration parameters of the camera and projector calibrated in advance, the corresponding object point coordinates are obtained by a line-plane intersection method using the image point coordinates (u, v) and the projector column number c, completing the point cloud reconstruction of the single-frame coded image.
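The geometric core of this step, line-plane intersection, can be sketched as follows. Deriving the camera ray from (u, v) and the light plane from column c via the calibration parameters is assumed done elsewhere; only the intersection itself is shown.

```python
import numpy as np

def line_plane_intersect(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the camera ray through an image point with the light
    plane of projector column c, yielding the object point."""
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    p = np.asarray(plane_point, float)
    n = np.asarray(plane_normal, float)
    denom = n.dot(d)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the light plane: no intersection
    t = n.dot(p - o) / denom
    return o + t * d  # object point coordinates
```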
According to the above flow, fig. 11 shows an original schematic diagram of an object to be measured; a device fixedly connecting a camera and a projector is used to project onto and photograph the object to be measured, obtaining the modulated coded image shown in fig. 12, and the decoding and reconstruction method of the invention yields the mesh structure result of the reconstructed point cloud shown in fig. 13.
As can be seen from the implementation steps, compared with the traditional method, the method has the following remarkable advantages:
1) Because dots are used as basic code elements, the coding pattern is not easily damaged by the shape, color and material of the object surface;
2) Because the dot code elements are small, reconstruction does not require a large image space, and the geometric detail recovery capability is strong;
3) Because dots are adopted as basic code elements and a set of effective decoding rules is designed, the coding mode is not easily interfered by noise, and mutual interference between code elements does not occur;
4) Because dots are used as basic code elements, the coding pattern design is favorable for extracting high-precision matching points;
5) Because a set of effective decoding rules is designed and dots are adopted as basic code elements, the coding mode is flexible and the coding capacity space is expandable.
In addition, the invention is suitable for a single-camera structured light three-dimensional measurement system and a multi-camera structured light three-dimensional measurement system. In specific implementation, the above flow can be automatically operated in a computer software mode, and a system device for operating the method should be within a protection range.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.
Claims (8)
1. The space coding structured light three-dimensional scanning method is characterized by comprising the following steps of:
s1, designing a single-frame space coding pattern, wherein the single-frame space coding pattern comprises decoding points used for decoding and encryption points used for encrypting; the code element of the decoding point adopts a simple dot, and coding uses the horizontal epipolar constraint together with the code element neighborhood relative displacement relation, the code element neighborhood relative displacement relation being determined by the dislocation pixels of adjacent code elements; dots smaller than the decoding points are designed as projector encryption points, and the encryption points do not participate in the decoding operation of post-processing;
s2, projecting the pre-coded single-frame space coding pattern to the surface of the object to be measured through a projector;
s3, the camera shoots the space coding pattern modulated by the object, and extracts image decoding points and encryption points;
s4, determining the candidate projector column numbers of the decoding points by utilizing the relation between the horizontal epipolar line constraint and the neighborhood relative displacement;
s5, constructing and updating a statistical weight array of the decoding points, and deleting candidate projector column numbers of the decoding points according to the statistical weight array;
s6, deducing the projector column number of the encryption point according to the projector column number of the decoding point;
s7, reconstructing the point cloud according to the image coordinates of the decoding point and the encryption point and the corresponding projector column numbers.
2. The method for three-dimensional scanning of spatially encoded structured light according to claim 1, wherein S1 comprises the steps of:
s11, designing basic code elements; aiming at the defects of the traditional space coding patterns, such as low robustness, poor precision and large image space required for decoding, a simple dot is adopted for the decoding point code element; assuming the projector image space occupied by each code element of the single-frame space coding is of size Grid, with Grid = 8 pixels, the inter-code-element spacing along the row and column directions is Grid, and the radius of the decoding point code element in projector image coordinates is denoted r;
S12, designing the adjacent code element arrangement; coding using the relation between the horizontal epipolar constraint and the code element neighborhood relative displacement; the horizontal epipolar constraint is determined by the installation of the camera and the projector, the projector image coordinates of each coding point are converted into projector epipolar image coordinates through calibration parameters of the projector and camera calibrated in advance, and the row coordinate value of the epipolar coordinates is recorded as the E value; the code element neighborhood relative displacement relation is determined by the dislocation pixels of adjacent code elements; adjacent code elements are shifted up or down by a fixed distance relative to the current code element, with the current code element as the center; if the neighborhood code element lies vertically above the current code element, it is marked as -1; if it is vertically flush with the current code element, it is marked as 0; if it lies vertically below the current code element, it is marked as 1; according to the code element neighborhood relative displacement relation there are 9 cases, and the up-down dislocation cases are recorded as D values; when the space coding pattern is generated, the vertical dislocation condition of each column is randomly generated, the corresponding D value is determined, and the D value is stored per column; the i-th projector decoding point is recorded as M_i(D(COL), E), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the projector decoding point, D(COL) is the D value of column COL of the projector decoding point, and E is the E value of the projector decoding point;
s13, designing decoding points and encryption points; dots smaller than the decoding points are designed as projector encryption points, with the encryption point dot radius set smaller than the decoding point radius r; the projector decoding points are used for the decoding operation of post-processing, in which the column number information corresponding to the point is judged; the projector encryption points do not participate in the decoding operation of post-processing.
3. The method of claim 1, wherein S2 is implemented as follows: the final coding pattern is obtained by photolithography and projected onto the object surface through a projection system; the projection system may adopt common LED white light or a light source of a specific wavelength band to reduce the influence of background light on acquisition of the projected pattern.
4. A spatially encoded structured light three-dimensional scanning method according to claim 1, wherein the implementation of S3 is as follows:
s31, the camera shoots a space coding pattern I modulated by an object from different angles;
s32, roughly extracting decoding points; a dot-shaped convolution template centered in its window is used to perform convolution on image I, obtaining convolution image C, and local non-maximum suppression search is performed on convolution image C with a Grid × 2Grid window, the resulting local maximum points being the image decoding points; assuming the number of detected image decoding points is s, the integer pixel coordinates of the image decoding points are recorded as S_i(x, y), 0 ≤ i ≤ s, where (x, y) are the integer pixel coordinate values;
s33, finely extracting decoding points; coordinate fine extraction is performed on the integer pixel coordinates of the image decoding points by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v) of the image decoding points, the image decoding point then being recorded as S_i(x, y, u, v), 0 ≤ i ≤ s;
S34, roughly extracting encryption points; for each image decoding point S_i(x, y, u, v), within a rectangular image region around it, local non-maximum suppression search is performed on convolution image C with a smaller window, the resulting local maximum points being the image encryption points; assuming the number of detected image encryption points is t, the integer pixel coordinates of the image encryption points are recorded as T_i(x, y), 0 ≤ i ≤ t, where (x, y) are the integer pixel coordinate values;
s35, finely extracting encryption points; coordinate fine extraction is performed on the integer pixel coordinates of the image encryption points by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v) of the image encryption points, the image encryption point then being recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
5. The spatially encoded structured light three-dimensional scanning method of claim 1, wherein S4 is implemented as follows:
s41, determining candidate projector column numbers of the decoding points using the horizontal epipolar constraint relation; according to the calibration parameters of the projector and the camera, the sub-pixel image coordinates (u, v) of image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar image coordinates is compared one by one with the E_j value of each projector code value M_j(D(COL), E); when the condition |e_i - E_j| ≤ ε is met (ε = 0.3 pixel), the projector column number COL_j corresponding to M_j is given to decoding point S_i; the candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k), where k is the number of candidate projector column numbers and (c_1, …, c_k) is the sequence of candidate projector column number values; the process continues until the judgment of candidate projector column numbers of all image decoding points is completed; if no candidate projector column number can be found according to the horizontal epipolar constraint relation, the current decoding point is deleted directly;
s42, deleting candidate projector column numbers of the decoding points using the neighborhood relative displacement relation; for the i-th decoding point S_i(x, y, u, v), the left and right neighborhood decoding points are searched, and the corresponding D value is obtained according to the code element neighborhood relative displacement relation, denoted d_i; the candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one, and according to candidate projector column number c_k the corresponding D value, here denoted D_k, is obtained; when d_i = D_k, the candidate projector column number is retained, otherwise it is deleted; the candidate projector column numbers of the i-th decoding point after deletion by the neighborhood relative displacement relation are recorded as CAND_i(c_1, …, c_n), where n is the number of remaining candidate projector column numbers and (c_1, …, c_n) is the sequence of candidate projector column number values; the above process continues until the deletion of candidate projector column numbers of all image decoding points is completed; if the i-th decoding point does not obtain a D value, this step is skipped and all possible candidate projector column numbers are retained directly.
6. The spatially encoded structured light three-dimensional scanning method of claim 1, wherein the S5 embodiment is as follows:
s51, constructing the reference projector column numbers of a decoding point; an 8-neighbor search method is adopted to find the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, denoted U_1, …, U_8; these 8 neighbor decoding points each have their own candidate projector column numbers, recorded as CAND_u1, …, CAND_u8; the candidate projector column numbers of the neighbor decoding points are combined into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the sum of the numbers of candidate projector column number elements of the neighbor decoding points, and this sequence is called the reference projector column numbers of the current decoding point;
s52, constructing a statistical weight array for the candidate projector column numbers using the reference projector column numbers; for the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is established; the j-th candidate decoding column number c_j is compared in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w), and the weight V_j is accumulated whenever the column numbers match; this continues until all candidate decoding column numbers are traversed, obtaining the statistical weight array V; all decoding points are traversed to complete construction of their respective statistical weight arrays;
s53, updating the statistical weight array of the candidate projector column numbers using the reference projector column numbers; for the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), the j-th candidate decoding column number c_j is compared in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w); when the column numbers match, V_j is updated by accumulating V_w, where V_w is the weight value corresponding to element u_w; this continues until all candidate decoding column numbers are traversed, updating the statistical weight array V; all decoding points are traversed to complete updating of their respective statistical weight arrays;
s54, step S53 is iterated 5 times to obtain a stable statistical weight array result;
s55, deleting the candidate projector column number sequence according to the statistical weight array; the statistical weight array of each decoding point is sorted, and the column number with the largest weight is selected as the column number value of the current decoding point, the decoding point then being recorded as S_i(x, y, u, v, c), where c is the resulting column number value of the decoding point.
7. The method of claim 1, wherein S6 is: the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t, are traversed; for each, the adjacent decoding points above, below, left and right are searched by neighborhood, and the column numbers of the four adjacent decoding points are recorded as c_up, c_down, c_left and c_right respectively; if c_up = c_down = c_left + 1 = c_right - 1, the column number of the current image encryption point T_i(x, y, u, v) is c_up, and the encryption point is recorded as T_i(x, y, u, v, c), where c is the obtained column number value of the encryption point; otherwise the current encryption point is deleted; this step continues until the judgment of all encryption points is completed.
8. The method of claim 1, wherein S7 is: for image decoding point S_i(x, y, u, v, c) and image encryption point T_i(x, y, u, v, c), according to the calibration parameters of the camera and projector calibrated in advance, the corresponding object point coordinates are obtained by a line-plane intersection method using the image point coordinates (u, v) and the projector column number c, completing the point cloud reconstruction of the single-frame coded image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010281877.8A CN111336949B (en) | 2020-04-11 | 2020-04-11 | Space coding structured light three-dimensional scanning method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111336949A CN111336949A (en) | 2020-06-26 |
CN111336949B true CN111336949B (en) | 2023-06-02 |
Family
ID=71180765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010281877.8A Active CN111336949B (en) | 2020-04-11 | 2020-04-11 | Space coding structured light three-dimensional scanning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111336949B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111707192B (en) * | 2020-07-08 | 2021-07-06 | 中国科学院长春光学精密机械与物理研究所 | Structured light coding and decoding method and device combining sine phase shift asymmetry with Gray code |
CN112985307B (en) * | 2021-04-13 | 2023-03-21 | 先临三维科技股份有限公司 | Three-dimensional scanner, system and three-dimensional reconstruction method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101697233B (en) * | 2009-10-16 | 2012-06-06 | 长春理工大学 | Structured light-based three-dimensional object surface reconstruction method |
JP2011237296A (en) * | 2010-05-11 | 2011-11-24 | Nippon Telegr & Teleph Corp <Ntt> | Three dimensional shape measuring method, three dimensional shape measuring device, and program |
JP5635218B1 (en) * | 2011-11-23 | 2014-12-03 | クレアフォーム・インコーポレイテッドCreaform Inc. | Pattern alignment method and system for spatially encoded slide images |
CN109242957A (en) * | 2018-08-27 | 2019-01-18 | 深圳积木易搭科技技术有限公司 | A kind of single frames coding structural light three-dimensional method for reconstructing based on multiple constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||