CN111336949A - Spatial coding structured light three-dimensional scanning method and system - Google Patents

Spatial coding structured light three-dimensional scanning method and system

Info

Publication number
CN111336949A
Authority
CN
China
Prior art keywords: decoding, point, projector, image, points
Prior art date
Legal status
Granted
Application number
CN202010281877.8A
Other languages
Chinese (zh)
Other versions
CN111336949B (en)
Inventor
黄文超
龚静
刘改
Current Assignee
Wuhan Xuanjing Technology Co ltd
Original Assignee
Wuhan Xuanjing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Xuanjing Technology Co ltd
Priority to CN202010281877.8A
Publication of CN111336949A
Application granted
Publication of CN111336949B
Legal status: Active


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25: Measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254: Projection of a pattern, viewing through a pattern, e.g. moiré
    • G01B11/2509: Color coding
    • G01B11/2518: Projection by scanning of the object
    • G01B11/2433: Measuring outlines by shadow casting

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a spatially coded structured light three-dimensional scanning method and system, comprising: designing a single-frame spatial coding pattern that contains decoding points used for decoding and encryption points used to densify the reconstruction; projecting the pre-coded single-frame spatial coding pattern onto the surface of the measured object through a projector; shooting the object-modulated spatial coding pattern with a camera and extracting the image decoding points and encryption points; determining candidate projector column numbers of the decoding points using the horizontal epipolar constraint together with the neighborhood relative-displacement relation; constructing and updating a statistical weight array of the decoding points, and pruning their candidate projector column numbers according to the statistical weight array; deducing the projector column numbers of the encryption points from the projector column numbers of the decoding points; and reconstructing a point cloud from the image coordinates of the decoding points and encryption points and the corresponding projector column numbers. The invention addresses the low robustness, poor precision, large image space required for decoding, and poor extensibility of conventional spatial coding structured light.

Description

Spatial coding structured light three-dimensional scanning method and system
Technical Field
The invention relates to the field of three-dimensional measurement, and in particular to a spatially coded structured light three-dimensional scanning method and system.
Background
In ordinary image-based modeling, the light source is uncoded, such as ambient light or white light, and image recognition depends entirely on the feature points of the photographed object, so matching has always been a difficulty. The structured light method differs in that the projected light source is coded: the camera photographs the image that the coded source projects onto the object, modulated by the depth of the object surface, as shown in fig. 1. Because the structured light source carries many coding features, feature points can be matched conveniently; that is, the structured light method actively supplies feature points for matching instead of relying on those of the photographed object, and can therefore deliver better matching results. In addition, since the objects photographed in ordinary image modeling vary widely, every matching round faces different images and the feature points must be extracted anew; the structured light method projects the same pattern, whose features are fixed and need not change with the scene, which reduces the matching difficulty and improves matching efficiency.
The core of the structured light method is the design and identification of the code. As shown in fig. 2, the methods can be roughly divided into six types according to the coding mode: spatial coding, temporal coding, spatio-temporal coding, multi-wavelength composite coding, direct coding, and line structured light scanning. Spatial coding hides the coded information in the space of a single projected frame. Temporal coding projects several different coding patterns in time sequence and decodes the corresponding group of coded images together. Spatio-temporal coding combines the two schemes, solving the coding-capacity problem with the temporal part and the reconstruction problem with the spatial part. Multi-wavelength composite coding acquires different coding information on different wavebands at the same moment for reconstruction. Direct coding assigns a code word directly to each pixel of the coding pattern using properties of the projected light (such as color or gray-scale variation). Line structured light scanning projects one or more laser lines onto the surface of the measured object and extracts the line features in the image algorithmically to achieve matching and reconstruction.
Compared with other structured light methods, spatial coding is better suited to dynamic and handheld measurement environments thanks to single-frame coding and decoding. Document US7768656B2 codes with large and small rectangular symbols; the rectangular symbols are directly connected in the column direction, and a special projector and camera mounting is used so that the row-coordinate difference approximates an epipolar constraint, expanding the large and small rectangles into 6 different symbol types. A one-dimensional De Bruijn pseudo-random sequence over the 6 symbols forms 216 columns of unique code values, and decoding requires the symbols of three adjacent columns. The drawbacks are strict hardware mounting requirements and a coding space that cannot be extended. Document CN101627280B uses diagonal square blocks, coded 2 × 3 by the diagonal direction and the black/white color of the corner squares; its advantage is an added symbol check giving some reliability, while its drawback is that the 2-ary, order-6 De Bruijn sequence offers only a 64-column coding space, which again is not extensible. Document CN103400366B expresses code values with lines of different thickness; its drawback is that line thickness is easily corrupted by noise, the extraction method, and depth modulation of the object surface, yielding erroneous decoding results.
Document CN104457607A adopts a symmetrical hourglass shape rotated by 0°/45°/90°/135° to form different symbols and codes 9 consecutive symbols in a specific 3 × 3 sequence; this is essentially M-array coding, does not exploit the epipolar constraint, and its coding capacity is very limited. Document WO2018219227A1 codes with different patterns placed in the white cells of a checkerboard; it uses no epipolar relation, is a two-dimensional coding scheme of very limited capacity, and its pattern symbols are especially vulnerable to noise and unfavorable for extracting high-precision matching points. Document CN109242957A uses a coding principle and basic symbols similar to US7768656B2 but adds a 2 × 3 row-column spatial constraint for M-array coding. Document CN109540023A adopts binary thin-line coding through the presence or absence of thin lines on the four diagonals; its drawbacks are that the symbols are too thin and easily disturbed by noise, the diagonal thin lines hinder high-precision center extraction, and thin lines of adjacent symbols tend to interfere with each other.
In summary, the existing spatial coding methods have the following disadvantages:
1) the coding pattern is easily damaged by the surface shape, color and material of the object;
2) reconstruction requires a large image space and the geometric-detail recovery capability is weak: both the De Bruijn pseudo-random sequence and the M-array need a large image space for stable decoding, so the ability to measure complex objects is relatively lacking;
3) the coding pattern is particularly susceptible to noise, and in some methods the code elements even interfere with one another;
4) the encoding considers only the uniqueness of the decoding pattern and neglects the influence of pattern design on the extraction of high-precision matching points;
5) the coding scheme has little flexibility: the coding capacity space is fixed and not extensible.
Disclosure of Invention
Aiming at the low robustness, poor precision, large image space required for decoding, and poor extensibility of conventional spatial coding structured light, the invention provides a spatially coded structured light three-dimensional scanning method whose process comprises the following steps:
S1, designing a single-frame spatial coding pattern, including decoding points used for decoding and encryption points used to densify the reconstruction;
S2, projecting the pre-coded single-frame spatial coding pattern onto the surface of the measured object through a projector;
S3, shooting the object-modulated spatial coding pattern with the camera, and extracting the image decoding points and encryption points;
S4, determining candidate projector column numbers of the decoding points using the horizontal epipolar constraint together with the neighborhood relative-displacement relation;
S5, constructing and updating a statistical weight array of the decoding points, and pruning the candidate projector column numbers of the decoding points according to the statistical weight array;
S6, deducing the projector column numbers of the encryption points from the projector column numbers of the decoding points;
S7, reconstructing the point cloud from the image coordinates of the decoding points and encryption points and the corresponding projector column numbers.
On the basis of the above technical solution, S1 preferably comprises the following steps:
S11, design of the basic code element. To counter the low robustness, poor precision, and large image space required for decoding of conventional spatial coding patterns, a simple round dot is used as the decoding-point code element. Assume the projector image space occupied by each code element of the single-frame spatial code is of size Grid (set to 8 pixels in the invention); the spacing between code elements along the row and column directions is Grid, and the radius of a decoding-point code element in projector image coordinates is denoted as
[formula image: radius of the decoding-point code element]
S12, design of the adjacent-symbol arrangement. Encoding uses the horizontal epipolar constraint together with the symbol-neighborhood relative-displacement relation. The horizontal epipolar constraint is determined by the mounting of the camera and the projector: using pre-calibrated parameters of the projector and camera, the projector image coordinate of each coding point is converted into projector epipolar image coordinates, and the row coordinate of the epipolar coordinate is recorded as the E value. The symbol-neighborhood relative-displacement relation is determined by the stagger in pixels between adjacent code elements: taking the current code element as the center, an adjacent code element is staggered up or down relative to it by a distance of
[formula image: vertical stagger distance]
A neighborhood code element above the current one in the vertical direction is marked -1; level with it, 0; below it, 1. The up/down stagger situations are recorded as the D value, with 9 cases in total for the symbol-neighborhood relative displacement. When the spatial coding pattern is generated, the vertical stagger of each column is generated at random, the corresponding D value is determined, and the D value is stored per column. Let the i-th projector decoding point be M_i(D(COL), E), 0 < i < m, where m is the total number of projector decoding points, COL is the projector column number of the decoding point, D(COL) is the D value of column COL of the projector decoding points, and E is the E value of the decoding point.
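The per-column stagger and D-value scheme of S12 can be sketched as follows. This is a minimal illustration under our own assumptions (the patent gives the stagger distance only as a formula image): each column sits at one of two vertical levels, and the D value of a column is taken as the pair of relative displacements (-1/0/1) of its left and right neighbors. All function names and the sign convention are hypothetical.

```python
import random

def generate_column_levels(n_cols, seed=42):
    """Randomly pick a vertical level per column: 0 = base row, 1 = staggered
    (in the pattern this would mean shifted by the stagger distance)."""
    rng = random.Random(seed)
    return [rng.choice((0, 1)) for _ in range(n_cols)]

def sign(v):
    return (v > 0) - (v < 0)

def d_value(levels, col):
    """D value of a column: the pair of relative vertical displacements
    (-1 = above, 0 = level, 1 = below) of its left and right neighbor
    columns; None at the pattern border, where a neighbor is missing."""
    if col <= 0 or col >= len(levels) - 1:
        return None
    left = sign(levels[col - 1] - levels[col])
    right = sign(levels[col + 1] - levels[col])
    return (left, right)
```

A decoding point in column col then carries the code pair (D(col), E); because the stagger is random, D values vary along the pattern, which is what step S42 later exploits to prune epipolar candidates.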
S13, design of decoding points and encryption points. Smaller dots are designed as projector encryption points, whose radius is set to
[formula image: radius of the encryption-point code element]
Projector decoding points take part in the decoding operation of post-processing, which judges the column-number information of each point; projector encryption points do not take part in the decoding operation, and their column numbers are judged from the decoding-point information of the upper and lower neighborhoods.
Based on the above technical solution, a preferred embodiment of S2 is: the final coding pattern is produced by lithography and projected onto the object surface through a projection system. The projection system may use ordinary white LED light, or a light source of a specific waveband to reduce the influence of background light on the acquisition of the projected pattern.
On the basis of the above technical solution, S3 preferably comprises the following steps:
s31, the camera shoots the object-modulated spatial coding pattern I from different angles.
S32, coarse extraction of decoding points. The image I is convolved with a center-dot convolution template of window size
[formula image: template window size]
to obtain the convolution image C. A local non-maximum suppression search over C in a Grid × 2Grid window yields the local maximum points, which are the image decoding points. Assuming the number of detected image decoding points is s, their integer pixel coordinates are recorded as S_i(x, y), 0 ≤ i ≤ s, where (x, y) is the integer pixel coordinate value;
and S33, fine extraction of the decoding points. Performing coordinate fine extraction on integer pixel coordinates of the image decoding point by adopting a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v) of the image decoding point, and then recording the image decoding point as Si(x,y,u,v),0≤i≤s;
S34, coarse extraction of encryption points. For each image decoding point S_i(x, y, u, v), within the image rectangle
[formula image: local search rectangle]
a local non-maximum suppression search is performed on the convolution image C with a window of size
[formula image: suppression window size]
and the local maximum points obtained are the image encryption points. Assuming the number of detected image encryption points is t, their integer pixel coordinates are recorded as T_i(x, y), 0 ≤ i ≤ t, where (x, y) is the integer pixel coordinate value.
S35, fine extraction of encryption points. Gaussian ellipse fitting refines the integer pixel coordinates of each image encryption point to sub-pixel coordinates (u, v); the image encryption point is then recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
On the basis of the above technical solution, S4 preferably comprises the following steps:
and S41, determining candidate projector column numbers of decoding points by using the horizontal epipolar constraint relation. Decoding the image to obtain a decoded image point S according to the calibration parameters of the projector and the cameraiConverting the (x, y, u, v) sub-pixel image coordinates (u, v) into camera epipolar image coordinates, and converting the line coordinate values e of the epipolar image coordinatesiValue and projector code value Mj(D), (COL), E) ofjComparing the values one by one, and when the value satisfies | ei-EjIf | ≦ epsilon (in the present invention, let epsilon equal to 0.3 pixel), M is addedjCorresponding projector column number COLjAssigning a decoding point SiAnd recording the serial number of the candidate projector of the ith decoding point as CANDi(c1,…,ck) Where k is the number of candidate projector column numbers, (c)1,…,ck) A column number value sequence of candidate projectors; the process is continuously carried out until the judgment of the candidate projector serial numbers of all the image decoding points is completed; and if the candidate projector column number cannot be found according to the horizontal epipolar constraint relation, directly deleting the current decoding point.
S42, prune the candidate projector column numbers of decoding points using the neighborhood relative-displacement relation. For the i-th decoding point S_i(x, y, u, v), its left and right neighboring decoding points are searched and the corresponding D value, denoted d_i, is obtained from the symbol-neighborhood relative-displacement relation. The candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one; for each candidate column number c_k the corresponding D value, denoted D_k, is obtained. If d_i = D_k the candidate column number is kept, otherwise it is deleted. The candidate projector column numbers of the i-th decoding point after this pruning are recorded as CAND_i(c_1, …, c_n), where n is the number of remaining candidates and (c_1, …, c_n) is the sequence of candidate column values. The process continues until the candidates of all image decoding points have been pruned. If the D value of the i-th decoding point cannot be obtained, this step is skipped for it and all its candidate projector column numbers are kept.
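The D-value pruning of S42 reduces to a filter. `d_of_col` stands for the per-column lookup of stored D values; the names are our own.

```python
def prune_by_d(cands, d_obs, d_of_col):
    """S42-style pruning: keep a candidate column only when its stored D value
    matches the D value observed around the image point; if no D value could
    be observed, keep all candidates (as the patent prescribes)."""
    if d_obs is None:
        return list(cands)
    return [c for c in cands if d_of_col(c) == d_obs]
```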
On the basis of the above technical solution, S5 preferably comprises the following steps:
S51, construct the reference projector column numbers of a decoding point. An 8-nearest-neighbor search finds the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, denoted U_1, …, U_8. Each of the 8 neighboring decoding points has its own candidate projector column numbers, denoted CAND_u1, …, CAND_u8. The candidate projector column numbers of the neighboring decoding points are merged into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate column-number elements of the neighboring decoding points; this sequence is called the reference projector column numbers of the current decoding point;
S52, construct a statistical weight array for the candidate projector column numbers using the reference projector column numbers. For the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is established. The j-th candidate decoding column number c_j is compared in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w), and the weights are accumulated according to
[formula image: weight accumulation rule]
The operation continues until all candidate coding column numbers have been traversed, giving the statistical array V; all decoding points are traversed to complete the construction of their respective statistical weight arrays.
S53, update the statistical weight array of the candidate projector column numbers using the reference projector column numbers. For the j-th candidate decoding column number c_j of CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), compare in turn with each element u_w of UCAND_i(u_1, …, u_w) and update according to
[formula image: weight update rule]
where V_w is the weight corresponding to element u_w. The operation continues until all candidate coding column numbers have been traversed and the statistical array V is updated; all decoding points are traversed to complete the update of their respective statistical weight arrays.
S54, step S53 is iterated 5 times to obtain a stable statistical weight array result.
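Steps S51 to S55 amount to neighborhood voting. The patent's exact accumulation and update formulas appear only as images in the source, so the sketch below assumes the simplest rule: a reference column u (from the merged candidates of the 8 nearest decoding points) supports candidate c when |u - c| ≤ radius. A full implementation would iterate the update across all points 5 times, as in S54.

```python
def vote_columns(cands, reference, radius=1):
    """Statistical weight of each candidate column: the number of reference
    columns (merged neighbor candidates) lying within `radius` of it.
    The support rule is an assumption, not the patent's exact formula."""
    return [sum(1 for u in reference if abs(u - c) <= radius) for c in cands]

def best_column(cands, reference, radius=1):
    """Step S55: keep the candidate with the largest statistical weight."""
    w = vote_columns(cands, reference, radius)
    return cands[w.index(max(w))]
```

The effect is that a candidate consistent with its spatial neighborhood (here column 5, surrounded by neighbors near columns 4 to 6) beats an epipolar coincidence far away in the pattern.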
S55, prune the candidate projector column-number sequence according to the statistical weights. The statistical weight array of each coding point is sorted and the column number with the largest weight is selected as the column-number value of the current coding point; the decoding point is then recorded as S_i(x, y, u, v, c), where c is the column-number value obtained.
Based on the above technical solution, a preferred embodiment of S6 is: traverse the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t, one by one; search the neighboring decoding points above, below, left and right of each image encryption point, and record the column-number values of the four neighboring decoding points as c_up, c_down, c_left, c_right. If c_up = c_down = c_left + 1 = c_right - 1, the column number of the current image encryption point T_i(x, y, u, v) is c_up, and the encryption point is recorded as T_i(x, y, u, v, c), where c is the column-number value obtained; otherwise the current encryption point is deleted. This step continues until all encryption points have been judged.
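The consistency check for an encryption point's column number is a one-liner; the four arguments are the column values of the decoding points above, below, left, and right of it (function name ours).

```python
def encryption_column(c_up, c_down, c_left, c_right):
    """Column number of an encryption point inferred from its four neighboring
    decoding points; None means the consistency check fails and the point
    is deleted, as in step S6."""
    if c_up == c_down == c_left + 1 == c_right - 1:
        return c_up
    return None
```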
Based on the above technical solution, a preferred embodiment of S7 is: for each image decoding point S_i(x, y, u, v, c) and image encryption point T_i(x, y, u, v, c), using the pre-calibrated parameters of the camera and the projector, the corresponding object-point coordinates are obtained by line-plane intersection from the image-point coordinates (u, v) and the projector column number c, completing the point cloud reconstruction of the single-frame coded image.
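The line-plane intersection of S7 can be sketched as follows: the calibrated projector column c defines a light plane in space, and the camera pixel (u, v) defines a ray; the object point is their intersection. The plane is given here in implicit form normal·X + d = 0, and the calibration that produces the ray and the plane is assumed.

```python
def dot(a, b):
    """Dot product of two 3-D vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def intersect_ray_plane(origin, direction, normal, d):
    """Intersect the camera ray origin + t*direction with the projector-column
    light plane {X : normal . X + d = 0}; returns the 3-D object point, or
    None when the ray is parallel to the plane."""
    denom = dot(normal, direction)
    if abs(denom) < 1e-12:
        return None
    t = -(dot(normal, origin) + d) / denom
    return tuple(o + t * v for o, v in zip(origin, direction))
```

Running this once per decoded point and once per encryption point yields the single-frame point cloud.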
Moreover, the invention also provides a spatially coded structured light three-dimensional scanning system for carrying out the above spatial coding structured light three-dimensional scanning method.
Compared with the prior art, the spatially coded structured light three-dimensional scanning method and system have the following beneficial effects:
1) because round dots are used as the basic code elements, the coding pattern is not easily damaged by the surface shape, color and material of the object;
2) because the basic dot code element is small, reconstruction does not need a large image space, and the geometric-detail recovery capability is strong;
3) because round dots are used as the basic code elements and an effective set of decoding rules is designed, the coding scheme is not easily disturbed by noise and the code elements do not interfere with one another;
4) because round dots are used as the basic code elements, the design of the coding pattern favors the extraction of high-precision matching points;
5) because an effective set of decoding rules is designed and round dots are used as the basic code elements, the coding scheme is flexible and the coding capacity space is extensible.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic diagram of a structured light principle;
FIG. 2 is a schematic diagram of structured light method classification;
FIG. 3 is a flow chart of a method for three-dimensional scanning of spatially coded structured light according to the present invention;
FIG. 4 is a flow chart of a single frame spatial coding pattern design;
FIG. 5 is a classification diagram of relative position relationship of the neighborhood of the coding points;
FIG. 6 is a schematic diagram of a decode point and an encrypt point;
FIG. 7 is a schematic diagram of a designed single-frame spatial coding pattern;
FIG. 8 is a flowchart of decoding point and encryption point extraction;
FIG. 9 is a schematic diagram of a center dot convolution template;
FIG. 10 is a flowchart of candidate projector column number selection at decoding points;
FIG. 11 is a schematic diagram of a three-dimensional object to be measured;
FIG. 12 is a diagram showing a camera capturing a spatial encoding pattern modulated by a three-dimensional object to be measured;
fig. 13 is a schematic diagram of a single-frame networking result after decoding and reconstructing the acquired spatial coding pattern.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the drawings and examples, and the technical solutions in the embodiments of the present invention are described clearly and completely. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments of the present invention fall within the scope of the present invention.
Aiming at the low robustness, poor precision, large image space required for decoding, and poor extensibility of conventional spatial coding structured light, the invention provides a spatially coded structured light three-dimensional scanning method which, as shown in fig. 3, comprises the following steps:
s1, designing a single frame spatial coding pattern including decoding points for decoding and encryption points for encryption. The specific flow is shown in fig. 4.
S11, design of the basic code element. To counter the low robustness, poor precision, and large image space required for decoding of conventional spatial coding patterns, the decoding-point code element of the invention is a simple round dot. Assume the projector image space occupied by each code element of the single-frame spatial code is of size Grid (set to 8 pixels in the invention); the spacing between code elements along the row and column directions is Grid, and the radius of a decoding-point code element in projector image coordinates is denoted as
[formula image: radius of the decoding-point code element]
The dot has three advantages: first, its features are distinct and easy to extract; second, dot extraction precision is high, which benefits high-precision measurement; finally, a dot occupies little image space, is little affected by the surface shape, color, and material of the object, and gives strong geometric-detail recovery. With this code element design, many problems of existing spatial coding patterns are resolved.
S12, design of the adjacent-symbol arrangement. Because the basic symbol is a single round dot, a De Bruijn pseudo-random sequence or an M-array cannot be used for coding; coding therefore combines the horizontal epipolar constraint with the relative-displacement relation of the symbol neighborhood. The horizontal epipolar constraint is determined by the installation of the camera and the projector: using the pre-calibrated parameters of the projector and camera, the projector image coordinate of each coding point is converted into projector epipolar image coordinates, and the row coordinate of the epipolar coordinates is recorded as the E value; each coding point has a distinct E value. The symbol-neighborhood relative-displacement relation is determined by the vertical misalignment of adjacent symbols. As shown in Fig. 5, with the current symbol as center, an adjacent symbol is shifted up or down relative to it by the distance [given only as an image in the original]. A neighboring symbol above the current symbol in the vertical direction is marked -1; level with it, 0; below it, 1. The symbol-neighborhood relative displacement thus has 9 cases, namely case 1 (0, 0), case 2 (0, 1), case 3 (-1, 1), case 4 (-1, -1), case 5 (1, -1), case 6 (1, 0), case 7 (0, -1), case 8 (1, 1), and case 9 (-1, 0); the up/down misalignment case is recorded as the D value. When the spatial coding pattern is generated, the vertical misalignment of each column is produced at random; once generated, the D value of each column is determined and stored per column, giving the local misalignment shown in Fig. 6. With the above, let the i-th projector decoding point be M_i(D(COL), E), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the projector decoding point, D(COL) is the D value of column COL, and E is the E value of the projector decoding point.
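The D-value encoding of step S12 can be sketched as follows. This is a minimal illustration, not the patented implementation: the offset magnitude Grid/4 is a hypothetical choice, since the actual shift distance appears only as an equation image in the original; the sign convention follows the text (image y grows downward, so a neighbor above the current column has a smaller offset and is marked -1).

```python
import random

GRID = 8  # symbol pitch in projector pixels, the value stated in the text

def generate_column_offsets(num_cols, seed=0):
    # Hypothetical offset magnitude GRID // 4; the real distance is an
    # image-only formula in the original.
    rng = random.Random(seed)
    return [rng.choice((-GRID // 4, 0, GRID // 4)) for _ in range(num_cols)]

def d_value(offsets, col):
    """D value of a column: the vertical relation of its left and right
    neighbor columns, each coded -1 (above), 0 (level) or +1 (below)."""
    def rel(a, b):
        return (a > b) - (a < b)  # sign of a - b; y grows downward
    left = rel(offsets[col - 1], offsets[col]) if col > 0 else 0
    right = rel(offsets[col + 1], offsets[col]) if col + 1 < len(offsets) else 0
    return (left, right)
```

Each column stores its (left, right) pair, giving the 9 possible D values listed in the text; at decode time the pair observed in the camera image is matched against this table.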
S13, design of decoding points and encryption points. If every point were a projector decoding point, the computational load would be large; considering continuity along the column direction, relatively small round dots are designed as projector encryption points, with the encryption dot radius set to [given only as an image in the original]. As shown in Fig. 6, decoding dots and encryption dots are staggered along the row direction. A projector decoding point takes part in the decoding operation of post-processing and determines the column number information of the point; a projector encryption point does not take part in the decoding of post-processing, and its column number is judged from the decoding point information of its upper and lower neighborhoods. This decoding/encryption point design effectively reduces the decoding workload while reducing symbol confusability.
S2, projecting the pre-coded single-frame spatial coding pattern onto the surface of the measured object through the projector. As shown in Fig. 7, the final coding pattern is produced by lithography and then projected onto the object surface through a projection system; the projection system may use an ordinary white LED, or a light source of a specific waveband to reduce the influence of background light on capture of the projected pattern.
S3, photographing the spatial coding pattern modulated by the object with the camera, and extracting the image decoding points and image encryption points. The specific flow is shown in Fig. 8.
S31, the camera photographs the object-modulated spatial coding pattern I from different angles;
S32, coarse extraction of decoding points. Using a center-dot convolution template of window size [given only as an image in the original] (a typical 5 × 5 window is shown in Fig. 9), a convolution operation is performed on image I to obtain a convolution image C. A Grid × 2Grid window is then used to perform a local non-maximum-suppression search on C; the local maxima obtained are the image decoding points. Assuming s image decoding points are detected, record their integer pixel coordinates S_i(x, y), 0 ≤ i ≤ s, where (x, y) are integer pixel coordinate values.
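The coarse extraction in S32 can be sketched with numpy. The 5 × 5 template below is a hypothetical zero-mean Gaussian bump standing in for the template of Fig. 9 (which is given only as an image), and the Grid × 2Grid non-maximum suppression follows the text:

```python
import numpy as np

GRID = 8  # symbol pitch, as in the text

def detect_dots(image):
    """Convolve with a centered dot template, then keep local maxima inside
    a Grid x 2*Grid window (local non-maximum suppression), as in step S32."""
    h, w = image.shape
    # Hypothetical 5x5 zero-mean Gaussian template (the real one is Fig. 9):
    # positive center, negative surround, so flat regions score zero.
    yy, xx = np.mgrid[0:5, 0:5]
    t = np.exp(-((xx - 2) ** 2 + (yy - 2) ** 2) / 2.0)
    t -= t.mean()
    pad = np.pad(image.astype(float), 2)
    conv = np.zeros((h, w))
    for dy in range(5):
        for dx in range(5):
            conv += t[dy, dx] * pad[dy:dy + h, dx:dx + w]
    # Local non-maximum suppression over a Grid-wide, 2*Grid-tall window.
    dots = []
    for y in range(h):
        for x in range(w):
            if conv[y, x] <= 0:
                continue
            y0, y1 = max(0, y - GRID), min(h, y + GRID + 1)
            x0, x1 = max(0, x - GRID // 2), min(w, x + GRID // 2 + 1)
            if conv[y, x] == conv[y0:y1, x0:x1].max():
                dots.append((x, y))
    return dots, conv
```

On a synthetic image with two isolated bright pixels, the detector returns exactly those two positions as integer-pixel decoding point candidates.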
S33, fine extraction of decoding points. The integer pixel coordinates of each image decoding point are refined by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v); the image decoding point is then recorded as S_i(x, y, u, v), 0 ≤ i ≤ s.
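The text names "Gaussian ellipse fitting" without giving its formulation; as a stand-in, the sketch below refines a peak with two independent 1-D Gaussian fits (a parabola in log intensity through three samples per axis), a common sub-pixel localization technique. It assumes the response values around the peak are positive:

```python
import numpy as np

def subpixel_peak(conv, x, y):
    """Refine the integer peak (x, y) of a positive response image to
    sub-pixel (u, v) via a 1-D Gaussian fit along each axis. A simplified
    stand-in for the paper's Gaussian ellipse fitting, which is not
    specified in the text."""
    def offset(a, b, c):
        # a, b, c: responses at -1, 0, +1; all must be positive for the log fit
        la, lb, lc = np.log(a), np.log(b), np.log(c)
        denom = la - 2.0 * lb + lc
        return 0.0 if denom == 0.0 else 0.5 * (la - lc) / denom
    u = x + offset(conv[y, x - 1], conv[y, x], conv[y, x + 1])
    v = y + offset(conv[y - 1, x], conv[y, x], conv[y + 1, x])
    return u, v
```

For an exactly Gaussian response the fit recovers the true center, which is why this family of methods achieves sub-pixel precision on round-dot features.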
S34, coarse extraction of encryption points. For each image decoding point S_i(x, y, u, v), within the image rectangle [given only as an image in the original], a window of size [given only as an image in the original] is used to perform a local non-maximum-suppression search on the convolution image C; the local maxima obtained are the image encryption points. Assuming t image encryption points are detected, record their integer pixel coordinates T_i(x, y), 0 ≤ i ≤ t, where (x, y) are integer pixel coordinate values.
S35, fine extraction of encryption points. The integer pixel coordinates of each image encryption point are refined by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v); the image encryption point is then recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
S4, determining the candidate projector column numbers of the decoding points using the horizontal epipolar constraint and the neighborhood relative-displacement relation.
S41, determining candidate projector column numbers of decoding points from the horizontal epipolar constraint. Using the calibration parameters of the projector and camera, the sub-pixel image coordinates (u, v) of each image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar coordinates is compared one by one with the E_j values of the projector code points M_j(D(COL), E). When |e_i − E_j| ≤ ε (the invention sets ε = 0.3 pixel), the projector column number COL_j of M_j is assigned to decoding point S_i, and the candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k), where k is the number of candidate column numbers and (c_1, …, c_k) is the sequence of candidate column values. This process continues until the candidate projector column numbers of all image decoding points have been determined. Note that if no candidate projector column number can be found under the horizontal epipolar constraint, the currently extracted decoding point contains a large error and is deleted directly.
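Step S41 reduces to a thresholded comparison of epipolar row values. A minimal sketch, assuming the projector code points are given as (COL, E) pairs already in epipolar coordinates:

```python
EPSILON = 0.3  # pixels: the threshold the text gives for |e_i - E_j|

def candidate_columns(e_i, projector_points, eps=EPSILON):
    """Every projector column COL_j whose epipolar row value E_j lies within
    eps of the camera point's epipolar row value e_i becomes a candidate."""
    return [col for col, E in projector_points if abs(e_i - E) <= eps]
```

An empty result corresponds to the deletion case in the text: a point with no epipolar match is treated as a gross extraction error and discarded.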
S42, pruning the candidate projector column numbers of decoding points using the neighborhood relative-displacement relation. For the i-th decoding point S_i(x, y, u, v), its left and right neighborhood decoding points are searched, and the corresponding D value, denoted d_i, is obtained from the symbol-neighborhood relative displacement. The candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one; for each candidate column number c_k the corresponding stored D value, denoted D_k, is obtained. If d_i = D_k, the candidate column number is retained; otherwise it is deleted. After pruning by the neighborhood relative-displacement relation, the candidate column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_n), where n is the number of remaining candidates and (c_1, …, c_n) is the sequence of candidate column values. This process continues until pruning has been completed for all image decoding points. Note that if no D value could be obtained for the i-th decoding point, this step is skipped and all possible candidate projector column numbers are retained.
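The pruning rule of S42 is a simple table lookup. A sketch, assuming the per-column D values produced at pattern-generation time are available as a dictionary:

```python
def prune_by_d(candidates, d_observed, d_table):
    """Keep the candidate columns whose stored D value matches the D value
    observed from the point's left/right neighbors; if no D value could be
    observed, keep all candidates, as step S42 prescribes."""
    if d_observed is None:
        return list(candidates)
    return [c for c in candidates if d_table[c] == d_observed]
```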
S5, constructing and updating the statistical weight array of the decoding points, and pruning the candidate projector column numbers of the decoding points according to the statistical weight array. The specific flow is shown in Fig. 10.
S51, constructing the reference projector column numbers of each decoding point. An 8-nearest-neighbor search finds the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, denoted U_1, …, U_8. Each of these neighbors has its own candidate projector column numbers, denoted CAND_u1, …, CAND_u8. The candidate column numbers of the neighbors are merged into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate column elements of the neighboring decoding points; this sequence is called the reference projector column numbers of the current decoding point.
S52, constructing a statistical weight array for the candidate projector column numbers from the reference projector column numbers. For the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n initialized to 0 is created; the j-th candidate column number c_j is compared in turn with each element u_w of the reference column numbers UCAND_i(u_1, …, u_w), and the weight is accumulated according to the rule [given only as an image in the original]. The operation continues until all candidate column numbers have been traversed, yielding the statistical weight array V. All decoding points are traversed so that each builds its own statistical weight array.
S53, updating the statistical weight array of the candidate projector column numbers from the reference projector column numbers. For the j-th candidate decoding column number c_j of CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), the column numbers are compared in turn with each element u_w of UCAND_i(u_1, …, u_w), and the update [given only as an image in the original] is performed, where V_w is the weight value corresponding to element u_w. The operation continues until all candidate column numbers have been traversed and the statistical weight array V has been updated. All decoding points are traversed so that each updates its own statistical weight array.
S54, iterating step S53 five times to obtain a stable statistical weight array result.
S55, pruning the candidate projector column number sequence according to the statistical weights. The statistical weight array of each coding point is sorted and the column number with the largest weight is selected as the column number of the current coding point; the decoding point is then recorded as S_i(x, y, u, v, c), where c is the column number value obtained for the decoding point.
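The exact accumulation and update formulas of S52/S53 survive only as equation images in the original, so the sketch below substitutes one plausible voting rule: a candidate column scores a point for every reference (neighbor) column within one column of it, and, as in S55, the candidate with the largest weight wins. The ±1 tolerance is a hypothetical choice reflecting that 8-neighborhood decoding points sit in the same or adjacent projector columns:

```python
def vote_column(candidates, reference_columns):
    """Plausible stand-in for the image-only weight rules of S52/S53:
    each candidate column scores one vote per reference column within one
    column of it; the highest-weighted candidate is selected (step S55)."""
    weights = [sum(1 for u in reference_columns if abs(c - u) <= 1)
               for c in candidates]
    return candidates[max(range(len(candidates)), key=weights.__getitem__)]
```

Neighborhood voting of this kind is what makes the decoding robust: an isolated epipolar ambiguity is outvoted by the consistent columns of the surrounding decoded points.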
S6, inferring the projector column numbers of the encryption points from those of the decoding points. Traverse the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t; for each, search for its neighboring decoding points above, below, to the left, and to the right, and record the column numbers of the four neighbors as c_up, c_down, c_left, c_right. If c_up = c_down = c_left + 1 = c_right − 1, the column number of the current encryption point T_i(x, y, u, v) is c_up, and the encryption point is recorded as T_i(x, y, u, v, c), where c is the column number value obtained for the encryption point; otherwise the current encryption point is deleted. This step continues until all encryption points have been judged.
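The consistency test of step S6 is a single chained comparison. A minimal sketch:

```python
def infer_encryption_column(c_up, c_down, c_left, c_right):
    """Column number of an encryption point from its four neighboring
    decoded points; None means the consistency test fails and the point
    is deleted, as step S6 prescribes."""
    if c_up == c_down == c_left + 1 == c_right - 1:
        return c_up
    return None
```

The test encodes the layout of the pattern: the encryption point shares a column with its vertical neighbors, while its left and right neighbors must sit exactly one column to either side.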
S7, reconstructing the point cloud from the image coordinates of the decoding and encryption points and their projector column numbers. For each image decoding point S_i(x, y, u, v, c) and image encryption point T_i(x, y, u, v, c), using the pre-calibrated parameters of the camera and projector, the image point coordinates (u, v), and the projector column number c, the corresponding object point coordinates are obtained by the line-plane intersection method, completing the point cloud reconstruction of the single-frame coded image.
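The geometric core of S7 can be sketched as a generic ray-plane intersection. In the real system the camera ray comes from back-projecting (u, v) through the calibrated camera model and the plane is the projector light plane of column c; both are assumed given here:

```python
import numpy as np

def ray_plane_intersect(ray_origin, ray_dir, plane_point, plane_normal):
    """Line-plane intersection: the camera ray through image point (u, v)
    meets the projector light plane of column c at the object point."""
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    p = np.asarray(plane_point, float)
    n = np.asarray(plane_normal, float)
    denom = n @ d
    if abs(denom) < 1e-12:
        return None  # ray parallel to the plane: no intersection
    t = n @ (p - o) / denom
    return o + t * d  # object point coordinates
```

Because each decoded column pins down one projector plane, a single camera view suffices: one ray and one plane intersect in exactly one 3-D point.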
Following the above process: Fig. 11 shows the original object to be measured; a device in which the camera and projector are rigidly connected projects the pattern onto and photographs the measured object, yielding the modulated coded image shown in Fig. 12; and through the decoding and reconstruction method set forth by the invention, the meshing result of the reconstructed point cloud shown in Fig. 13 is obtained.
As the implementation steps show, compared with traditional methods the invention has the following notable advantages:
1) because round dots are used as basic symbols, the coding pattern is not easily corrupted by the shape, color, or material of the object surface;
2) because the dot symbols are small, reconstruction does not require a large image space, and geometric detail is recovered well;
3) because round dots are used as basic symbols and an effective set of decoding rules is designed, the coding scheme is not easily disturbed by noise and symbols do not interfere with one another;
4) because round dots are used as basic symbols, the coding pattern design favors the extraction of high-precision matching points;
5) because an effective set of decoding rules is designed and round dots are used as basic symbols, the coding scheme is flexible and the coding capacity space is extensible.
In addition, the invention is applicable to both single-camera and multi-camera structured light three-dimensional measurement systems. In a specific implementation, the above processes can be run automatically as computer software, and a system device running the method also falls within the scope of protection.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications, additions, or substitutions to the described embodiments without departing from the spirit or scope of the invention as defined in the appended claims.

Claims (9)

1. A spatially coded structured light three-dimensional scanning method, characterized by comprising the following steps:
s1, designing a single-frame space coding pattern, including decoding points for decoding and encryption points for encryption;
s2, projecting the pre-coded single-frame space coding pattern to the surface of the measured object through a projector;
s3, shooting the space coding pattern modulated by the object by the camera, and extracting the image decoding point and the encryption point;
s4, determining candidate projector column numbers of decoding points by using the relation between horizontal epipolar line constraint and neighborhood relative displacement;
s5, constructing and updating a statistical weight array of the decoding points, and deleting the candidate projector serial numbers of the decoding points according to the statistical weight array;
s6, deducing the projector serial number of the encryption point according to the projector serial number of the decoding point;
and S7, reconstructing point cloud according to the image coordinates of the decoding point and the encryption point and the corresponding projector serial number.
2. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S1 specifically comprises the following steps:
S11, designing the basic symbol: to remedy the defects of traditional spatial coding patterns (low robustness, poor precision, large image space required for decoding), the decoding point symbol adopts a simple round dot; assuming the projector image space occupied by each symbol of the single-frame spatial code is Grid in size, Grid being set to 8 pixels, the spacing between symbols along the row and column directions is Grid, and the radius of the decoding point symbol in projector image coordinates is recorded as [given only as an image in the original];
S12, designing the adjacent-symbol arrangement: coding uses the horizontal epipolar constraint and the symbol-neighborhood relative-displacement relation; the horizontal epipolar constraint is determined by the installation of the camera and the projector, the projector image coordinates of each coding point are converted into projector epipolar image coordinates through the pre-calibrated parameters of the projector and camera, and the row coordinate value of the epipolar image coordinates is recorded as the E value; the symbol-neighborhood relative-displacement relation is determined by the misaligned pixels of adjacent symbols; with the current symbol as center, an adjacent symbol is shifted up or down relative to it by the distance [given only as an image in the original]; a neighboring symbol above the current symbol in the vertical direction is marked -1; level with it, 0; below it, 1; the symbol-neighborhood relative displacement thus has 9 cases, and the up/down misalignment case is recorded as the D value; when the spatial coding pattern is generated, the up/down misalignment of each column is generated at random, the corresponding D value is determined, and each column stores its D value; let the i-th projector decoding point be M_i(D(COL), E), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the projector decoding point, D(COL) is the D value of column COL, and E is the E value of the projector decoding point;
S13, designing decoding points and encryption points: round dots smaller than the decoding points are designed as projector encryption points, with the encryption dot radius set to [given only as an image in the original]; a projector decoding point takes part in the decoding operation of post-processing and determines the column number information of the point; a projector encryption point does not take part in the decoding operation of post-processing.
3. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S2 is implemented as follows: the final coding pattern is produced by lithography and projected onto the object surface through a projection system; the projection system may use an ordinary white LED, or a light source of a specific waveband to reduce the influence of background light on capture of the projected pattern.
4. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S3 is implemented as follows:
S31, the camera photographs the object-modulated spatial coding pattern I from different angles;
S32, coarse extraction of decoding points: using a center-dot convolution template of window size [given only as an image in the original], a convolution operation is performed on image I to obtain a convolution image C; a Grid × 2Grid window performs a local non-maximum-suppression search on C, the local maxima obtained being the image decoding points; assuming s image decoding points are detected, their integer pixel coordinates are recorded as S_i(x, y), 0 ≤ i ≤ s, where (x, y) are integer pixel coordinate values;
S33, fine extraction of decoding points: the integer pixel coordinates of each image decoding point are refined by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v), and the image decoding point is then recorded as S_i(x, y, u, v), 0 ≤ i ≤ s;
S34, coarse extraction of encryption points: for each image decoding point S_i(x, y, u, v), within the image rectangle [given only as an image in the original], a window of size [given only as an image in the original] performs a local non-maximum-suppression search on the convolution image C, the local maxima obtained being the image encryption points; assuming t image encryption points are detected, their integer pixel coordinates are recorded as T_i(x, y), 0 ≤ i ≤ t, where (x, y) are integer pixel coordinate values;
S35, fine extraction of encryption points: the integer pixel coordinates of each image encryption point are refined by a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v), and the image encryption point is then recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
5. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S4 is implemented as follows:
S41, determining candidate projector column numbers of decoding points from the horizontal epipolar constraint: using the calibration parameters of the projector and camera, the sub-pixel image coordinates (u, v) of each image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar coordinates is compared one by one with the E_j values of the projector code points M_j(D(COL), E); when |e_i − E_j| ≤ ε (ε being set to 0.3 pixel), the projector column number COL_j of M_j is assigned to decoding point S_i, and the candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k), where k is the number of candidate column numbers and (c_1, …, c_k) is the sequence of candidate column values; this process continues until the candidate projector column numbers of all image decoding points have been determined; if no candidate projector column number can be found under the horizontal epipolar constraint, the current decoding point is deleted directly;
S42, pruning the candidate projector column numbers of decoding points using the neighborhood relative-displacement relation: for the i-th decoding point S_i(x, y, u, v), its left and right neighborhood decoding points are searched and the corresponding D value, denoted d_i, is obtained from the symbol-neighborhood relative displacement; the candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one, and for each candidate column number c_k the corresponding stored D value, denoted D_k, is obtained; if d_i = D_k, the candidate column number is retained, otherwise it is deleted; after pruning, the candidate column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_n), where n is the number of remaining candidates and (c_1, …, c_n) is the sequence of candidate column values; this process continues until pruning has been completed for all image decoding points; if no D value could be obtained for the i-th decoding point, this step is skipped and all possible candidate projector column numbers are retained directly.
6. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S5 is implemented as follows:
S51, constructing the reference projector column numbers of each decoding point: an 8-nearest-neighbor search finds the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, denoted U_1, …, U_8; each neighbor has its own candidate projector column numbers, denoted CAND_u1, …, CAND_u8; the candidate column numbers of the neighbors are merged into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate column elements of the neighboring decoding points; this sequence is called the reference projector column numbers of the current decoding point;
S52, constructing a statistical weight array for the candidate projector column numbers from the reference projector column numbers: for the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n initialized to 0 is created; the j-th candidate column number c_j is compared in turn with each element u_w of UCAND_i(u_1, …, u_w), and the weight is accumulated according to the rule [given only as an image in the original]; the operation continues until all candidate column numbers have been traversed, yielding the statistical weight array V; all decoding points are traversed so that each builds its own statistical weight array;
S53, updating the statistical weight array of the candidate projector column numbers from the reference projector column numbers: for the j-th candidate decoding column number c_j of CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), the column numbers are compared in turn with each element u_w of UCAND_i(u_1, …, u_w), and the update [given only as an image in the original] is performed, where V_w is the weight value corresponding to element u_w; the operation continues until all candidate column numbers have been traversed and the statistical weight array V has been updated; all decoding points are traversed so that each updates its own statistical weight array;
S54, iterating step S53 five times to obtain a stable statistical weight array result;
S55, pruning the candidate projector column number sequence according to the statistical weights: the statistical weight array of each coding point is sorted and the column number with the largest weight is selected as the column number of the current coding point; the decoding point is then recorded as S_i(x, y, u, v, c), where c is the column number value obtained for the decoding point.
7. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S6 is: traverse the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t; for each, search for its neighboring decoding points above, below, to the left, and to the right, and record the column numbers of the four neighbors as c_up, c_down, c_left, c_right; if c_up = c_down = c_left + 1 = c_right − 1, the column number of the current encryption point T_i(x, y, u, v) is c_up, and the encryption point is recorded as T_i(x, y, u, v, c), where c is the column number value obtained for the encryption point; otherwise the current encryption point is deleted; this step continues until all encryption points have been judged.
8. The spatially coded structured light three-dimensional scanning method according to claim 1, wherein S7 is: for each image decoding point S_i(x, y, u, v, c) and image encryption point T_i(x, y, u, v, c), using the pre-calibrated parameters of the camera and projector, the image point coordinates (u, v), and the projector column number c, the corresponding object point coordinates are obtained by the line-plane intersection method, completing the point cloud reconstruction of the single-frame coded image.
9. A spatially coded structured light three-dimensional scanning system, characterized in that it is used to perform the spatially coded structured light three-dimensional scanning method according to any one of claims 1 to 8.
CN202010281877.8A 2020-04-11 2020-04-11 Space coding structured light three-dimensional scanning method and system Active CN111336949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010281877.8A CN111336949B (en) 2020-04-11 2020-04-11 Space coding structured light three-dimensional scanning method and system


Publications (2)

Publication Number Publication Date
CN111336949A true CN111336949A (en) 2020-06-26
CN111336949B CN111336949B (en) 2023-06-02

Family

ID=71180765


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111707192A (en) * 2020-07-08 2020-09-25 中国科学院长春光学精密机械与物理研究所 Structured light coding and decoding method and device combining sine phase shift asymmetry with Gray code
CN112985307A (en) * 2021-04-13 2021-06-18 先临三维科技股份有限公司 Three-dimensional scanner, system and three-dimensional reconstruction method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697233A (en) * 2009-10-16 2010-04-21 长春理工大学 Structured light-based three-dimensional object surface reconstruction method
JP2011237296A (en) * 2010-05-11 2011-11-24 Nippon Telegr & Teleph Corp <Ntt> Three dimensional shape measuring method, three dimensional shape measuring device, and program
US20140247326A1 (en) * 2011-11-23 2014-09-04 Creaform Inc. Method and system for alignment of a pattern on a spatial coded slide image
CN109242957A (en) * 2018-08-27 2019-01-18 深圳积木易搭科技技术有限公司 A kind of single frames coding structural light three-dimensional method for reconstructing based on multiple constraint

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111707192A (en) * 2020-07-08 2020-09-25 中国科学院长春光学精密机械与物理研究所 Structured light coding and decoding method and device combining sine phase shift asymmetry with Gray code
CN111707192B (en) * 2020-07-08 2021-07-06 中国科学院长春光学精密机械与物理研究所 Structured light coding and decoding method and device combining sine phase shift asymmetry with Gray code
CN112985307A (en) * 2021-04-13 2021-06-18 先临三维科技股份有限公司 Three-dimensional scanner, system and three-dimensional reconstruction method
CN112985307B (en) * 2021-04-13 2023-03-21 先临三维科技股份有限公司 Three-dimensional scanner, system and three-dimensional reconstruction method

Also Published As

Publication number Publication date
CN111336949B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
US9207070B2 (en) Transmission of affine-invariant spatial mask for active depth sensing
US9530215B2 (en) Systems and methods for enhanced depth map retrieval for moving objects using active sensing technology
US7103212B2 (en) Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
JP5791826B2 (en) 2D code
WO2018219156A1 (en) Structured light coding method and apparatus, and terminal device
CN111336949B (en) Space coding structured light three-dimensional scanning method and system
JP2011237296A (en) Three dimensional shape measuring method, three dimensional shape measuring device, and program
US10818030B2 (en) Three-dimensional measurement apparatus and three-dimensional measurement method
JP3384329B2 (en) 3D image capturing device
Kim et al. Antipodal gray codes for structured light
CN111336950B (en) Single frame measurement method and system combining space coding and line structured light
KR102442980B1 (en) Super-resolution method for multi-view 360-degree image based on equi-rectangular projection and image processing apparatus
CN111307069A (en) Light three-dimensional scanning method and system for dense parallel line structure
CN108038898B (en) Single-frame binary structure optical coding and decoding method
CN111783877A (en) Depth information measuring method based on single-frame grid composite coding template structured light
CN112183695A (en) Encoding method, encoding pattern reading method, and imaging device
JP2019219277A (en) Measuring system and measuring method
Hu et al. Robust 3D shape reconstruction from a single image based on color structured light
CN114363600B (en) Remote rapid 3D projection method and system based on structured light scanning
JP7243513B2 (en) Measuring system and measuring method
Chen et al. A light field sparse representation structure and its fast coding technique
Su et al. DOE-based Structured Light For Robust 3D Reconstruction
Sun et al. Orthogonal coded multi-view structured light for inter-view interference elimination
CN110926370A (en) Measurement method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant