CN111336950B - Single frame measurement method and system combining space coding and line structured light - Google Patents


Info

Publication number
CN111336950B
CN111336950B · Application CN202010281986.XA · CN202010281986A
Authority
CN
China
Prior art keywords: point, decoding, projector, encryption, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010281986.XA
Other languages
Chinese (zh)
Other versions
CN111336950A (en)
Inventor
黄文超
龚静
刘改
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Xuanjing Technology Co ltd
Original Assignee
Wuhan Xuanjing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Xuanjing Technology Co ltd filed Critical Wuhan Xuanjing Technology Co ltd
Priority to CN202010281986.XA
Publication of CN111336950A
Application granted
Publication of CN111336950B
Legal status: Active


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: … for measuring contours or curvatures
    • G01B 11/25: … by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/254: Projection of a pattern, viewing through a pattern, e.g. moiré
    • G01B 11/2433: … for measuring outlines by shadow casting
    • G01B 11/2509: Color coding
    • G01B 11/2518: Projection by scanning of the object

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a single-frame measurement method and system combining space coding and line structured light. The method comprises: designing a single-frame spatial coding pattern containing decoding points for decoding and encryption points for encrypting; projecting the pre-coded single-frame coding pattern onto the surface of the object to be measured through a projector; shooting the modulated coding pattern with a camera and extracting the decoding points, encryption points and encryption lines; determining the projector row and column numbers of the decoding points using the relation between the horizontal epipolar constraint and the neighborhood relative displacement; deducing the projector row and column numbers of the encryption points from those of the decoding points; deducing the projector column numbers of the encryption lines from the projector column numbers of the decoding points and encryption points; and reconstructing the point cloud point by point from the pixel coordinates of the decoding points, encryption points and encryption lines and the corresponding projector column numbers. The invention combines the advantages of space coding and multi-line structured light, overcomes their respective problems, and effectively improves the efficiency and quality of single-frame structured light three-dimensional reconstruction.

Description

Single frame measurement method and system combining space coding and line structured light
Technical Field
The invention relates to the field of three-dimensional measurement, in particular to a single-frame measurement method and a single-frame measurement system combining space coding with line structured light.
Background
In ordinary image-based modeling, the light source is uncoded ambient or white light, and image matching depends entirely on the feature points of the photographed object, so matching has always been a difficulty. The structured light method differs in that the projected light source is coded: the camera captures the image formed when the coded light source is projected onto the object and modulated by the depth of its surface. Because the structured light source carries many coded features, feature points can be matched conveniently; that is, the structured light method actively supplies many feature points for matching and does not rely on the object's own features, so it yields better matching results. Moreover, since the objects photographed in ordinary image modeling vary widely, every matching run faces different images and feature points must be extracted anew, whereas the structured light method always projects the same pattern, whose features are fixed and need not change with the scene, reducing the difficulty of matching and improving its efficiency.
The core of the structured light method is the design and identification of the code. As shown in FIG. 1, structured light can be divided into six types according to the coding mode: spatial coding, temporal coding, spatio-temporal coding, multi-wavelength composite coding, direct coding, and line structured light scanning. Spatial coding hides the code information in the space of a single projected frame. Temporal coding projects several different coding patterns in time sequence and decodes by combining the corresponding image sequence. Spatio-temporal coding combines the temporal and spatial schemes, solving the coding-capacity problem with the temporal technique and the reconstruction problem with the spatial technique. Multi-wavelength composite coding acquires different code information in different wavebands simultaneously for reconstruction. Direct coding assigns a codeword directly to each pixel of the coding pattern using characteristics of the projected light (such as color or gray-level changes). Line structured light scanning projects multiple line lasers onto the surface of the measured object, and an algorithm extracts the line features in the image to achieve matching and reconstruction.
Because it encodes and decodes within a single frame, the spatial coding method is better suited to dynamic and handheld measurement environments. Document US7768656B2 uses large rectangular symbols, with the symbols along the column direction directly connected; special projector and camera mounting conditions produce a row-coordinate difference that approximates an epipolar constraint. The large rectangular symbols are expanded into 6 different symbol types, and a one-dimensional De Bruijn pseudo-random sequence over the 6 symbols yields 216 unique code values; decoding requires the symbols of three adjacent columns. The drawbacks of this approach are the severe demands it places on hardware installation and its non-extensible coding space. Document CN101627280B adopts diagonal square blocks, coding in a 2×3 row-column pattern according to the diagonal direction and the black-and-white colors of the corner squares; its advantage is added symbol verification, giving some guarantee of reliability, while its drawbacks are that the 2-symbol, 6th-order De Bruijn pseudo-random sequence offers only a 64-column coding space and that the coding space cannot be extended. Document CN103400366B describes code values with lines of different thicknesses; its drawback is that line thickness is easily corrupted by noise, the extraction method, and depth modulation by the object surface, yielding erroneous decoding results.
Document CN104457607A adopts a symmetric hourglass shape, forming different symbols by 0°/45°/90°/135° rotations and coding 9 consecutive symbols in a specific 3×3 row-column order; this is essentially an M-array coding method that does not fully exploit the epipolar constraint, and its coding capacity is very limited. Document WO2018219227A1 uses pattern coding, placing different patterns in the white squares of a checkerboard; its drawbacks are that it does not use the epipolar relation and, as a two-dimensional coding method, has very limited capacity, while the pattern coding is also particularly susceptible to noise and unfavorable for extracting high-precision matching points. Document CN109242957A uses a coding principle and basic symbols similar to US7768656B2, differing only in the additional 2×3 row-column spatial constraint added for M-array coding. Document CN109540023A uses binary thin-line coding, coding by the presence or absence of thin lines on the four diagonals; its drawbacks are that the symbols are too thin and easily disturbed by noise, that thin lines on the diagonals are unfavorable for high-precision center extraction, and that thin lines of adjacent symbols easily interfere with each other. In summary, the advantages of spatial coding are: 1) a reconstruction result with even distribution; 2) multi-frame positioning using geometric features, without attaching marker points to the object for auxiliary positioning; 3) reconstruction with only a single camera, with a larger overlapping field of view.
However, existing spatial coding methods still have disadvantages: 1) the coding pattern is easily corrupted by the shape, color and material of the object surface; 2) reconstruction requires a larger image space and the ability to recover geometric detail is weak, since De Bruijn pseudo-random sequences and M-arrays need a large image space for stable decoding, leaving the measurement of complex objects relatively lacking; 3) the coding is particularly susceptible to noise, and in some methods the symbols even interfere with one another; 4) the coding considers only the uniqueness of the decoding pattern and neglects the influence of pattern design on the extraction of high-precision matching points; 5) the coding is inflexible, with a fixed coding-capacity space and no extensibility.
Multi-line structured light scanning performs no coding at all. CN205960570A adopts two groups of mutually intersecting parallel structured light lines and distinguishes the lines with a dual camera to complete matching; the projection pattern is shown in FIG. 2. The advantages of the multi-line structured light method are: 1) it is not easily corrupted by the shape, color or material of the object surface; 2) reconstruction requires an extremely small image space, and geometric detail is recovered well; 3) the structured light lines are not easily disturbed by noise and essentially do not interfere with one another; 4) no decoding is needed, dual cameras provide verification, and the structured light lines favor high-precision extraction; 5) the connectivity of points within a light line can be used to improve reconstruction quality. Its drawbacks are: 1) only sparse structured light lines can be used, giving sparse and very unevenly distributed structure information; 2) marker points must be attached to the object to assist positioning, which hinders the use of the equipment; 3) at least two cameras are required to accomplish the reconstruction, with a small overlapping field of view.
Disclosure of Invention
Aiming at the problems of traditional space-coded structured light and multi-line structured light scanning, the invention provides a single-frame measurement method combining space coding with line structured light. It combines the advantages of the space coding scheme and the multi-line structured light scanning scheme, solving the coding-capacity problem with the space coding technique and the reconstruction problem with the multi-line structured light scanning technique, and comprises the following steps:
S1. Design a single-frame coding pattern comprising decoding points for decoding, encryption points for encrypting, and encryption lines;
S2. Project the pre-coded single-frame coding pattern onto the surface of the object to be measured through a projector;
S3. Shoot the modulated coding pattern with a camera, and extract the decoding points, encryption points and encryption lines;
S4. Determine the projector row and column numbers of the decoding points using the relation between the horizontal epipolar constraint and the neighborhood relative displacement;
S5. Deduce the projector row and column numbers of the encryption points from the projector row and column numbers of the decoding points;
S6. Deduce the projector column numbers of the encryption lines from the projector column numbers of the decoding points and encryption points;
S7. Reconstruct the point cloud point by point from the pixel coordinates of the decoding points, encryption points and encryption lines and the corresponding projector column numbers.
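Steps S4 to S7 reduce reconstruction to assigning each camera pixel a projector column number. As a rough illustration of why that suffices, the sketch below triangulates depth for an idealized, rectified camera-projector pair; the focal length, baseline and matched columns are invented for illustration, and the patent itself uses a general line-plane intersection with calibrated parameters.

```python
# Toy triangulation for a rectified camera-projector pair: once decoding
# assigns a projector column to a camera pixel, depth follows from the
# column disparity. Illustrative values only, not from the patent.

FOCAL_PX = 800.0     # shared focal length in pixels (assumed)
BASELINE_MM = 120.0  # camera-projector baseline in mm (assumed)

def depth_from_match(cam_col: float, proj_col: float) -> float:
    """Depth (mm) from a camera-column / projector-column correspondence."""
    disparity = cam_col - proj_col
    if disparity <= 0:
        raise ValueError("matched columns must yield positive disparity")
    return FOCAL_PX * BASELINE_MM / disparity

# Decoding points, encryption points and encryption-line points are all
# reconstructed the same way once their projector column is known (S7).
matches = [(512.0, 432.0), (300.5, 268.5), (700.25, 652.25)]
depths = [depth_from_match(c, p) for c, p in matches]
```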
On the basis of the above technical solution, preferably S1 specifically comprises the following steps:
S11. Design the basic symbols. The invention adopts simple dots. The projector image space occupied by each symbol of the single-frame spatial code is Grid in size (Grid = 8 pixels in the invention), the spacing between symbols along the row and column directions is Grid, and the radius of a decoding-point dot in projector image coordinates is [given as a formula image in the original].
S12. Design the arrangement of adjacent symbols. The horizontal epipolar constraint is determined by the installation of the camera and projector. Using the pre-calibrated parameters of the projector and camera, the projector image coordinates of each coding point are converted into projector epipolar image coordinates; the row coordinate value of the epipolar coordinates is recorded as the E value, and the E values of the coding points differ from one another. The relative-displacement relation of a symbol neighborhood is determined by the offset in pixels of the adjacent symbols: taking the current symbol as the center, an adjacent symbol is offset up or down by [an amount given as a formula image in the original]. A neighboring symbol vertically above the current symbol is marked -1; one vertically flush with it is marked 0; one vertically below it is marked 1. The neighborhood relative-displacement relation thus yields 9 cases in total. When the spatial coding pattern is generated, the up-down staggering of each column is generated randomly, the D value corresponding to each column is determined, and each column stores its D value. The i-th projector decoding point is recorded as M_i(D, E, COL, ROW), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the decoding point, ROW is its row number, D is the D value of column COL, and E is the E value of the decoding point.
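The 9 neighborhood cases arise from pairing the -1/0/1 marks of the left and right neighboring columns. The text does not spell out the numeric labeling of the 9 cases as D values, so the mapping in this sketch is one plausible assumption.

```python
# Each neighboring column is marked -1 (above), 0 (flush) or 1 (below)
# relative to the current column; the (left, right) pair gives 3 x 3 = 9
# cases. The concrete D labeling below is an assumption, not the patent's.
import random

OFFSETS = (-1, 0, 1)

def d_value(left_mark: int, right_mark: int) -> int:
    """Map the (left, right) vertical-offset marks to one of 9 D values."""
    assert left_mark in OFFSETS and right_mark in OFFSETS
    return (left_mark + 1) * 3 + (right_mark + 1)

# Per-column D values for a randomly generated up/down staggering, as in
# the pattern-generation step of S12:
random.seed(7)
marks = [(random.choice(OFFSETS), random.choice(OFFSETS)) for _ in range(8)]
d_per_col = [d_value(l, r) for l, r in marks]
```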
S13. Design the decoding points and encryption points. Relatively small dots serve as the projector encryption points, with dot radius [given as a formula image in the original]. The decoding points and encryption points are staggered along the row direction.
S14, designing an encryption line. The lines of the multi-line structured light and the columns where the decoding points and the encryption points are located are staggered along the column direction, and the line width of the multi-line structured light is set to 2 pixels.
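The layout of S11 to S14 can be sketched as a toy pattern renderer. The dot radii appear only as formula images in the original, so the values here (radius 3 for decoding dots, 2 for encryption dots) are assumptions; only Grid = 8 and the 2-pixel line width come from the text, and the exact staggering is simplified.

```python
# Toy single-frame pattern: decoding dots, smaller encryption dots, and
# 2-px vertical encryption lines staggered between the dot columns.
# Radii and the row-wise alternation of dot types are assumptions.
import numpy as np

GRID = 8                    # symbol cell size in projector pixels (S11)
R_DECODE, R_ENCRYPT = 3, 2  # dot radii (assumed; formula images in original)

def render_pattern(rows: int, cols: int) -> np.ndarray:
    h, w = rows * GRID, cols * GRID
    img = np.zeros((h, w), dtype=np.uint8)
    yy, xx = np.mgrid[0:h, 0:w]
    for r in range(rows):
        for c in range(0, cols, 2):           # dot columns
            cy, cx = r * GRID + GRID // 2, c * GRID + GRID // 2
            radius = R_DECODE if r % 2 == 0 else R_ENCRYPT
            img[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = 255
    for c in range(1, cols, 2):               # 2-px encryption lines (S14)
        x0 = c * GRID + GRID // 2 - 1
        img[:, x0:x0 + 2] = 255
    return img

pattern = render_pattern(rows=6, cols=8)
```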
Based on the above technical solution, a preferred embodiment of S2 is: the final coding pattern is fabricated by lithography and projected onto the object surface through a projection system; the projection system may use ordinary white LED light or a light source in a specific waveband to reduce the influence of background light on the acquisition of the projected pattern.
On the basis of the technical scheme, the preferable S3 specifically comprises the following steps:
s31, the camera shoots the coded pattern I modulated by the object from different angles.
S32. Extract the decoding points. Convolve image I with a centered-dot convolution template of window size [given as a formula image in the original] to obtain the convolution image C1; perform a local non-maximum-suppression search on C1 with a Grid × 2Grid window, and take the resulting local maxima as image decoding points. Refine the integer pixel coordinates (x, y) of each decoding point by Gaussian ellipse fitting to obtain its sub-pixel coordinates (u, v). Assuming s image decoding points are detected, record them as S_i(x, y, u, v), 0 ≤ i ≤ s.
S33. Extract the encryption points. For the i-th decoding point S_i(x, y, u, v) on the image, set a rectangular image search range around the current decoding point [given as a formula image in the original]; perform a local non-maximum-suppression search on the convolution image C1 with a window [given as a formula image in the original], and take the resulting local maxima as image encryption points. Refine the integer pixel coordinates (x, y) of each encryption point by Gaussian ellipse fitting to obtain its sub-pixel coordinates (u, v). Assuming t image encryption points are detected, record them as T_i(x, y, u, v), 0 ≤ i ≤ t.
S34. Extract the encryption lines. Convolve image I with a vertical-line convolution template of window size [given as a formula image in the original] to obtain the convolution image C2; perform a local non-maximum-suppression search on C2 with a 2Grid × Grid window, and take the resulting local maxima as points on the encryption lines. Then fit a Gaussian point by point along the row direction, refining the integer image coordinates (x, y) of the points on each encryption line into sub-pixel coordinates (u, v); finally, connect the points on the same segment according to the topological relation of the lines. Assuming l image encryption lines are detected, with p sub-pixel points on the i-th encryption line L_i, 0 ≤ i ≤ l, a point on an encryption line is written L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p.
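The dot-extraction idea of S32/S33 (convolution score followed by local non-maximum suppression) can be sketched as below. The exact template sizes appear only as formula images in the original, so this sketch substitutes a zero-mean disk template and SciPy's maximum filter, and omits the Gaussian sub-pixel refinement.

```python
# Dot detection by template convolution + local non-maximum suppression,
# in the spirit of S32/S33. Template shape and threshold are assumptions.
import numpy as np
from scipy import ndimage

GRID = 8

def disk_template(radius: int) -> np.ndarray:
    """Zero-mean disk template so a flat background scores about 0."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    tmpl = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return tmpl - tmpl.mean()

def detect_dots(img, radius, win, thresh):
    """Convolution score, then keep strict local maxima above a threshold."""
    score = ndimage.correlate(img.astype(float), disk_template(radius),
                              mode="constant")
    local_max = ndimage.maximum_filter(score, size=win)
    ys, xs = np.nonzero((score == local_max) & (score > thresh))
    return sorted(zip(xs.tolist(), ys.tolist()))

# Synthetic check: two dots of radius 3 on a dark background.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
for cx, cy in [(16, 16), (40, 32)]:
    img[(xx - cx) ** 2 + (yy - cy) ** 2 <= 9] = 1.0
dots = detect_dots(img, radius=3, win=(GRID, 2 * GRID), thresh=5.0)
```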
On the basis of the technical scheme, the preferable S4 specifically comprises the following steps:
s41, determining the row and column numbers of the candidate projector of the decoding point by using the horizontal epipolar constraint relation. According to the calibration parameters of the projector and the camera, the image decoding point S i Sub-pixel image coordinates (u, v) of (x, y, u, v) are converted into camera epipolar image coordinates, and row coordinate values e of the epipolar image coordinates are obtained i Value and projector code value M j E of (D, E, COL, ROW) j Comparing the values one by one, when the value e is satisfied i -E j When the condition +.epsilon (epsilon=0.3 pixel in the present invention), M is defined as j Corresponding projector ROW (COL, ROW) j Imparting a decoding point S i Record the ith solutionCandidate projector column number of code point is CAND i (c 1 ,…,c k ) The row number corresponding to the sequence is CAND i (r 1 ,…,r k ) Where k is the number of candidate projector column numbers, (c) 1 ,r 1 ),…(c k ,r k ) A sequence of rank values for the candidate projector; the above process is continuously carried out until the judgment of the candidate projector column numbers of all the image decoding points is completed; if the candidate projector column numbers cannot be found according to the horizontal epipolar constraint relation, the fact that the currently extracted decoding points have larger errors is indicated, and the decoding points are directly deleted.
S42. Prune the candidate projector row and column numbers of the decoding points using the neighborhood relative-displacement relation. For the i-th decoding point S_i(x, y, u, v), search its left and right neighboring decoding points and obtain the corresponding D value, denoted d_i, from the symbol-neighborhood relative-displacement relation. Take out the candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point one by one and look up the D value corresponding to candidate column number c_k, denoted here D_k; when d_i = D_k, keep the candidate column number and its corresponding row number, otherwise delete them. Record the pruned candidate projector column numbers of the i-th decoding point as CAND_i(c_1, …, c_n) and the corresponding row numbers as CAND_i(r_1, …, r_n), where n is the number of surviving candidate column numbers and (c_1, r_1), …, (c_n, r_n) is the candidate row-column value sequence. Continue until the candidates of all image decoding points have been pruned. If no D value could be obtained for the i-th decoding point, skip this step for it and keep all its possible candidate projector row and column numbers.
S43. Construct the reference projector column numbers of each decoding point, build and update a statistical weight array over its candidate projector column numbers, and complete the pruning of projector column numbers. Using a nearest-neighbor search, find the 8 decoding points closest in image space to the current decoding point S_i(x, y, u, v), and merge the candidate projector column numbers of these neighboring decoding points into one sequence, denoted UCAND_i(u_1, …, u_w), where w is the total number of candidate column number elements of the neighbors; this sequence is called the reference projector column numbers of the current decoding point. For the candidate column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), create a statistical weight array V of size n initialized to 0; compare the j-th candidate column number c_j in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w), and accumulate the weights according to the accumulation rule [given as a formula image in the original], traversing all candidate column numbers to obtain V. Then compare each candidate column number c_j again with each reference element u_w and update V according to the update rule [also given as a formula image in the original], where V_w is the weight value corresponding to element u_w; traverse all candidate column numbers and repeat this update of V 5 times. Finally, sort the statistical weight array of the decoding point, select the column number with the largest weight as the column number value of the current decoding point, and record the decoding point as S_i(x, y, u, v, c, r), where (c, r) are the resulting column and row number values.
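The neighborhood-voting idea of S43 can be sketched as follows. The patent's accumulation and update formulas appear only as images in the original, so this sketch substitutes a simple stand-in scheme: each candidate column earns a vote from every neighbor-proposed column within one symbol pitch of it.

```python
# Neighborhood voting for the final projector column (S43 sketch).
# The real accumulation/update rules are formula images in the original;
# the proximity vote below is a stand-in, not the patent's formula.

GRID = 8

def vote_column(candidates, reference_cols, n_rounds=5):
    """candidates: list of (col, row); reference_cols: columns proposed by
    the 8 nearest neighboring decoding points. Returns the winning pair."""
    weights = [0.0] * len(candidates)
    for _ in range(n_rounds):                  # patent repeats update 5x
        for j, (c, _r) in enumerate(candidates):
            for u in reference_cols:
                if abs(c - u) <= GRID:         # consistent with a neighbor
                    weights[j] += 1.0
    best = max(range(len(candidates)), key=lambda j: weights[j])
    return candidates[best]

cands = [(40, 2), (104, 2)]             # two surviving epipolar candidates
neighbour_cols = [32, 48, 40, 48, 32]   # columns voted by the neighbors
col, row = vote_column(cands, neighbour_cols)
```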
Based on the above technical solution, a preferred embodiment of S5 is: for the i-th image encryption point T_i(x, y, u, v), 0 ≤ i ≤ t, search for its vertically adjacent decoding points above and below, and denote the column-row values of the two adjacent decoding points as (c_up, r_up) and (c_down, r_down). If c_up = c_down and r_up + Grid = r_down - Grid are satisfied, the current image encryption point T_i(x, y, u, v) takes column number c_up and row number r_up + Grid (the row midway between the two decoding points; the exact expression appears as a formula image in the original), and the encryption point is recorded as T_i(x, y, u, v, c, r), with c its resulting column number value and r its resulting row number value; otherwise the current encryption point is deleted. Continue this step until all encryption points have been judged.
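The inference of S5 is small enough to sketch directly; the midpoint row used here follows from the consistency check c_up = c_down, r_up + Grid = r_down - Grid (the patent's exact row formula is an image in the original).

```python
# Inferring an encryption point's projector column/row from its two
# vertically adjacent decoding points (S5 sketch).

GRID = 8

def infer_encryption_point(up, down):
    """up/down: (col, row) of the adjacent decoding points above/below.
    Returns (col, row) of the encryption point, or None if inconsistent."""
    (c_up, r_up), (c_down, r_down) = up, down
    if c_up == c_down and r_up + GRID == r_down - GRID:
        return (c_up, r_up + GRID)   # row midway between the neighbors
    return None                      # inconsistent: delete the point

ok = infer_encryption_point((24, 8), (24, 24))
bad = infer_encryption_point((24, 8), (32, 24))
```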
Based on the above technical solution, a preferred embodiment of S6 is: for each point L_ij(x, y, u, v) on an encryption line, 0 ≤ i ≤ l, 0 ≤ j ≤ p, search for the decoding or encryption points adjacent on the left and right, denoted ST_left and ST_right, with corresponding projector column numbers c_left and c_right. If c_left + Grid = c_right - Grid is satisfied, the j-th point on the i-th encryption line becomes L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, where c is the resulting column number value of the point on the encryption line; otherwise the point on the encryption line is deleted. Carry out this projector column number judgment point by point along each encryption line. Finally, judge whether the column numbers of the points on the same encryption line are consistent according to the topological relation of the points on the line; if so, keep the encryption line, otherwise delete it.
Based on the above technical solution, a preferred embodiment of S7 is: for the image decoding points S_i(x, y, u, v, c, r), the image encryption points T_i(x, y, u, v, c, r), and the points L_ij(x, y, u, v, c) on the encryption lines, 0 ≤ i ≤ l, 0 ≤ j ≤ p, use the pre-calibrated camera and projector parameters, the image coordinates (u, v) of each point, and its projector column number c to obtain the corresponding object point coordinates by line-plane intersection, completing the point cloud reconstruction of the single-frame coded image.
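The line-plane intersection of S7 can be sketched for an idealized calibration: the camera sits at the origin looking down +Z, and each projector column defines a plane in camera coordinates. Real systems derive the plane from calibrated projector parameters; the intrinsics and the plane below are invented for illustration (a depth plane z = 1000 mm stands in for the plane of one projector column).

```python
# Line-plane intersection reconstruction (S7 sketch). The camera ray
# through pixel (u, v) is intersected with the plane associated with the
# decoded projector column. Intrinsics and plane are illustrative only.
import numpy as np

def backproject_ray(u, v, fx, fy, cx, cy):
    """Direction of the camera ray through pixel (u, v), camera at origin."""
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

def intersect_ray_plane(ray_dir, plane_point, plane_normal):
    """Intersect the ray X = t * ray_dir (t > 0) with a plane."""
    denom = plane_normal @ ray_dir
    if abs(denom) < 1e-12:
        return None                 # ray parallel to the plane
    t = (plane_normal @ plane_point) / denom
    return t * ray_dir if t > 0 else None

ray = backproject_ray(u=400.0, v=300.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
point = intersect_ray_plane(ray, np.array([0.0, 0.0, 1000.0]),
                            np.array([0.0, 0.0, 1.0]))
```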
The invention also provides a single-frame measurement system combining space coding and line structured light, which implements the above single-frame measurement method combining space coding and line structured light.
Compared with the prior art, the single-frame measurement method and system combining space coding and line structured light have the following beneficial effects:
1) By combining the advantages of space coding and line structured light scanning, the method overcomes the problems of space coding (heavy noise interference, the large image space required for reconstruction, weak geometric detail recovery, and poor extraction accuracy) and also overcomes the problems of line structured light scanning: only sparse structured light lines with small decoding capacity, sparse and unevenly distributed reconstructed structure information, the need to attach marker points for positioning, and the need for at least two cameras.
2) Because dots are used as the basic symbols and an effective set of spatial decoding rules is designed, the coding pattern is less susceptible to noise interference than in traditional spatial coding methods.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of structured light classification;
FIG. 2 is a schematic diagram of a multi-line structured light scan pattern;
FIG. 3 is a flow chart of a single frame measurement method combining spatial coding with line structured light according to the present invention;
FIG. 4 is a flow chart of a single frame coding pattern design;
FIG. 5 is a schematic diagram of a decoding point, an encryption point, and an encryption line;
FIG. 6 is a schematic diagram of a designed single frame coding pattern;
FIG. 7 is a flow chart of decoding point, encryption point and encryption line extraction;
FIG. 8 is a decoding point decoding flow chart;
FIG. 9 is a schematic diagram of a three-dimensional object to be measured;
FIG. 10 is a diagram of a camera capturing a coded pattern modulated by a three-dimensional object to be measured;
FIG. 11 is a single frame networking result after decoding and reconstructing the acquired encoding pattern.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in detail with reference to the accompanying drawings and examples, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Aiming at the problems of traditional space-coded structured light and multi-line structured light scanning, the invention provides a single frame measurement method combining space coding and line structured light, which combines the advantages of the space coding scheme and the multi-line structured light scanning scheme: the coding capacity problem is solved by the space coding technique, and the reconstruction problem is solved by the multi-line structured light scanning technique. As shown in FIG. 3, the method comprises the following steps:
S1, designing a single frame coding pattern, wherein the single frame coding pattern comprises decoding points for decoding, encryption points for encryption, and encryption lines. The specific flow is shown in FIG. 4.
S11, designing basic code elements. Aiming at the defects of the traditional space coding patterns, such as low robustness, poor precision and the large image space required for decoding, simple dots are adopted as the decoding point code elements. The projector image space occupied by each code element of the single-frame space coding is assumed to be of size Grid (Grid = 8 pixels in the invention), and the spacing between code elements along the row and column directions is Grid; the radius of the decoding point code element in projector image coordinates is recorded as a fixed fraction of Grid (the exact formula appears only as an image in the original).
The advantages of dots are: first, the dot features are obvious and easy to extract; second, the dot extraction precision is high, which is favorable for high-precision measurement; finally, a dot occupies little image space, is little affected by the shape, color and material of the object surface, and has strong geometric detail recovery capability. With this code element design, the method can overcome many problems of existing spatial coding patterns.
S12, designing the adjacent code element arrangement. Since the basic code element is only a dot, the pattern cannot be coded with a De Bruijn pseudo-random sequence or an M-array; therefore, coding uses the horizontal epipolar constraint together with the relative displacement relation of the code element neighborhood. The horizontal epipolar constraint is determined by the installation of the camera and the projector: using the calibration parameters of the projector and the camera calibrated in advance, the projector image coordinates of each coding point are converted into projector epipolar image coordinates, and the row coordinate value of the epipolar coordinates is recorded as the E value; the E values of the coding points are mutually different. The code element neighborhood relative displacement relation is determined by the misalignment pixels of adjacent code elements. Taking the current code element as the center, adjacent code elements are offset up or down relative to the current code element by a fixed distance (the exact offset appears only as an image in the original).
If the neighbor code element is located vertically above the current code element, it is marked as -1; if the neighbor code element is vertically level with the current code element, it is marked as 0; if the neighbor code element is located vertically below the current code element, it is marked as 1. According to the code element neighborhood relative displacement relation there are 9 cases in total, namely case 1 (0, 0), case 2 (0, 1), case 3 (-1, 1), case 4 (-1, -1), case 5 (1, -1), case 6 (1, 0), case 7 (0, -1), case 8 (1, 1) and case 9 (-1, 0); the up-down misalignment cases are denoted as D values. When the space coding pattern is generated, the vertical misalignment of each column is randomly generated; after generation, the D value corresponding to each column is determined and stored per column. The final local misalignment is shown in FIG. 5. The i-th projector decoding point is recorded as M_i(D, E, COL, ROW), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the projector decoding point, ROW is the row number of the projector decoding point, D is the D value of column COL of the projector decoding point, and E is the E value of the projector decoding point.
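One plausible reading of the per-column misalignment generation described above can be sketched as follows; the function names and the choice to encode a column's D value as the pair of neighbor-column states are illustrative assumptions, not the patent's exact implementation.

```python
import random

# Vertical misalignment states: -1 = above, 0 = level, 1 = below.
OFFSETS = (-1, 0, 1)

def generate_column_offsets(num_cols, seed=0):
    """Randomly assign a vertical misalignment state to each symbol column,
    as done when the space coding pattern is generated."""
    rng = random.Random(seed)
    return [rng.choice(OFFSETS) for _ in range(num_cols)]

def d_value(offsets, col):
    """D value of a column, read here as the pair of misalignment states of
    the left and right neighbor columns; this yields the 9 cases listed in
    the text, e.g. case 1 (0, 0) or case 3 (-1, 1)."""
    left = offsets[col - 1] if col > 0 else 0
    right = offsets[col + 1] if col + 1 < len(offsets) else 0
    return (left, right)
```

Storing the D value per column then lets the decoder later check an observed neighborhood against the stored one.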
S13, designing decoding points and encryption points. If all points were projector decoding points, the amount of calculation would be relatively large; considering the continuity along the column direction, relatively small dots are therefore designed as the projector encryption points, with the encryption point dot radius set smaller than the decoding point radius (the exact value appears only as an image in the original).
As shown in FIG. 5, the decoding points and the encryption points are staggered along the row direction. The projector decoding points are used for the decoding operation in post-processing, where the column number corresponding to each point is determined; the projector encryption points do not participate in the decoding operation, and their column numbers are determined from the decoding point information of the upper and lower neighborhoods. The design of decoding points and encryption points effectively reduces the amount of decoding calculation while reducing code element confusion.
S14, designing encryption lines. Considering that the spatial coding scheme and the multi-line structured light scanning scheme need to be fused, as shown in FIG. 5, the columns where the decoding points and encryption points are located and the lines of the multi-line structured light are staggered along the column direction, and the line width of the multi-line structured light is set to 2 pixels. The projector decoding points and encryption points solve the coding capacity problem of the multi-line structured light: the row and column numbers of the multi-line structured light scan are deduced from the projector row and column numbers of the decoding points and encryption points. The projector encryption lines solve the reconstruction effect and precision problems of space coding. The design of decoding points, encryption points and encryption lines effectively combines the advantages of space coding and multi-line structured light, and improves the efficiency and effect of three-dimensional reconstruction.
S2, projecting the pre-coded single-frame coding pattern onto the surface of the object to be measured through a projector. As shown in FIG. 6, the final coding pattern is produced by photolithography and then projected onto the object surface through a projection system, wherein the projection system can adopt common LED white light or a light source of a specific wave band so as to reduce the influence of background light on the acquisition of the projected pattern.
S3, shooting the modulated coding pattern by a camera, and extracting the decoding points, encryption points and encryption lines. The specific flow is shown in FIG. 7.
S31, shooting the coded pattern I modulated by the object from different angles by a camera;
S32, decoding point extraction. A central-dot convolution template (window size given only as an image in the original) is convolved with image I to obtain the convolution image C_1; local non-maximum suppression search is performed on C_1 with a Grid x 2Grid window, and the obtained local maximum points are the image decoding points. The integer pixel coordinates (x, y) of the image decoding points are refined by a Gaussian ellipse fitting method to obtain the sub-pixel coordinates (u, v) of the image decoding points. Assuming the number of detected image decoding points is s, the image decoding points are recorded as S_i(x, y, u, v), 0 ≤ i ≤ s.
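The convolution-plus-suppression search of S32 can be sketched as below. The convolution with the dot template that produces the response image is assumed to be done elsewhere (e.g. by an image-filtering routine); the function name `local_maxima` and the threshold parameter are illustrative.

```python
import numpy as np

def local_maxima(response, win_h, win_w, thresh):
    """Local non-maximum suppression: keep pixels that are the unique maximum
    of their win_h x win_w neighborhood and whose response exceeds thresh.
    Applied to the convolution image C_1 with a Grid x 2Grid window, the
    surviving peaks are the candidate image decoding points (x, y)."""
    h, w = response.shape
    peaks = []
    for y in range(h):
        for x in range(w):
            v = response[y, x]
            if v < thresh:
                continue
            y0, y1 = max(0, y - win_h // 2), min(h, y + win_h // 2 + 1)
            x0, x1 = max(0, x - win_w // 2), min(w, x + win_w // 2 + 1)
            patch = response[y0:y1, x0:x1]
            # unique maximum of the window -> accepted peak
            if v >= patch.max() and (patch == v).sum() == 1:
                peaks.append((x, y))
    return peaks
```

The same routine, with the window transposed, serves the vertical-line search of S34.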
S33, encryption point extraction. For the i-th image decoding point S_i(x, y, u, v), a rectangular image search range centered on the current decoding point is set (its size appears only as an image in the original); local non-maximum suppression search is performed on the convolution image C_1 within a window (size likewise given as an image), and the obtained local maximum points are the image encryption points. The integer pixel coordinates (x, y) of the image encryption points are refined by the Gaussian ellipse fitting method to obtain the sub-pixel coordinates (u, v) of the image encryption points. Assuming the number of detected image encryption points is t, the image encryption points are recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
S34, encryption line extraction. A vertical-line convolution template (window size given only as an image in the original) is convolved with image I to obtain the convolution image C_2; local non-maximum suppression search is performed on C_2 with a 2Grid x Grid window, and the obtained local maximum points are points on the encryption lines. Gaussian fitting is then performed point by point along the row direction to refine the integer pixel image coordinates (x, y) of the points on the encryption lines into sub-pixel coordinates (u, v). Finally, points on the same segment are connected according to the topological connection relation of the lines. Let the number of detected image encryption lines be l, and let the number of sub-pixel points on the i-th encryption line L_i, 0 ≤ i ≤ l, be p; a point on an encryption line is expressed as L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p.
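The per-point Gaussian refinement in S34 reduces, in its simplest one-dimensional form, to textbook three-point Gaussian peak interpolation; the sketch below is that standard form, not necessarily the patent's exact fitting routine.

```python
import math

def gaussian_subpixel_offset(i_left, i_center, i_right):
    """Three-point Gaussian interpolation: given positive intensities at x-1,
    x and x+1, with i_center the local maximum, return the sub-pixel offset
    of the true peak from x (in (-0.5, 0.5) for a genuine interior peak)."""
    la, lb, lc = math.log(i_left), math.log(i_center), math.log(i_right)
    denom = la - 2.0 * lb + lc
    if denom == 0.0:
        return 0.0  # flat neighborhood: keep the integer position
    return 0.5 * (la - lc) / denom
```

For an exactly Gaussian profile the interpolation is exact: a peak truly centered at x + 0.2 is recovered from just the three samples around x.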
S4, determining the projector row and column numbers of the decoding points by using the relation between the horizontal epipolar constraint and the neighborhood relative displacement. The specific flow is shown in FIG. 8.
S41, determining the candidate projector row and column numbers of the decoding points by using the horizontal epipolar constraint relation. According to the calibration parameters of the projector and the camera, the sub-pixel image coordinates (u, v) of the image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar image coordinates is compared one by one with the E_j value of each projector code point M_j(D, E, COL, ROW). When the condition |e_i - E_j| ≤ ε is satisfied (ε = 0.3 pixel in the invention), the projector row and column numbers (COL, ROW)_j corresponding to M_j are assigned to the decoding point S_i. The candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k), and the corresponding row numbers as CAND_i(r_1, …, r_k), where k is the number of candidate projector column numbers and (c_1, r_1), …, (c_k, r_k) is the sequence of candidate projector row-column values. This process is repeated until the candidate projector column numbers of all image decoding points have been determined. It is worth noting that if no candidate projector column number can be found under the horizontal epipolar constraint relation, the currently extracted decoding point has a large error and is directly deleted.
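The epipolar gating of S41 can be sketched as a simple filter; the dictionary keys and the function name are illustrative assumptions.

```python
def epipolar_candidates(e_i, proj_points, eps=0.3):
    """Collect the (COL, ROW) of every projector decoding point M_j whose
    epipolar row value E_j satisfies |e_i - E_j| <= eps; an empty result
    means the image decoding point is unreliable and is deleted."""
    return [(p['COL'], p['ROW']) for p in proj_points if abs(e_i - p['E']) <= eps]
```

Because every projector coding point carries a distinct E value, the tolerance eps bounds how many candidates survive per image point.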
S42, pruning the candidate projector row and column numbers of the decoding points by using the neighborhood relative displacement relation. For the i-th decoding point code element S_i(x, y, u, v), the left and right neighborhood decoding points are searched, and the corresponding D value, denoted d_i, is obtained according to the code element neighborhood relative displacement relation. The candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one; for each candidate projector column number c_k, the corresponding D value, denoted here D_k, is obtained. When d_i = D_k, the candidate projector column number and the corresponding row number are retained; otherwise they are deleted. The candidate projector column numbers of the i-th decoding point after pruning by the neighborhood relative displacement relation are recorded as CAND_i(c_1, …, c_n), and the corresponding row numbers as CAND_i(r_1, …, r_n), where n is the number of retained candidate projector column numbers and (c_1, r_1), …, (c_n, r_n) is the sequence of candidate projector row-column values. This process is repeated until the candidate projector row and column numbers of all image decoding points have been pruned. It should be noted that if no D value can be obtained for the i-th decoding point, this step is skipped and all possible candidate projector row and column numbers are directly retained.
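The D-value pruning of S42, including the fall-through when no D value could be observed, can be sketched as follows; the per-column D lookup table `d_of_col` is an assumed representation.

```python
def prune_by_d_value(d_i, candidates, d_of_col):
    """Keep only the candidate (col, row) pairs whose column's stored D value
    matches the D value d_i observed around the image decoding point; when no
    D value could be observed (d_i is None), keep all candidates unchanged."""
    if d_i is None:
        return list(candidates)
    return [(c, r) for (c, r) in candidates if d_of_col.get(c) == d_i]
```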
S43, constructing the reference projector column numbers of the decoding points, constructing and updating the statistical weight array of the candidate projector column numbers, and completing the pruning of the projector column numbers. A nearest-neighbor search is used to find the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, and the candidate projector column numbers of these neighbor decoding points are combined into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate projector column number elements of the neighbor decoding points; this sequence is called the reference projector column numbers of the current decoding point. For the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is established; the j-th candidate decoding column number c_j is compared in turn with each element u_w of the reference projector column numbers UCAND_i(u_1, …, u_w) and the weight is accumulated (the accumulation formula appears only as an image in the original), and all candidate coding column numbers are traversed to obtain the statistical weight array V. Then, for the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point, the j-th candidate decoding column number c_j is again compared in turn with each element u_w of UCAND_i(u_1, …, u_w) and the weight is updated (the update formula likewise appears only as an image), where V_w is the weight value corresponding to element u_w; all candidate coding column numbers are traversed and the statistical weight array V is repeatedly updated 5 times. Finally, the statistical weight array of the coding point is sorted, the column number with the largest weight is selected as the column number value of the current coding point, and the coding point is recorded as S_i(x, y, u, v, c, r), where (c, r) are the resulting column and row number values of the decoding point.
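Since the accumulation and update formulas of S43 survive only as images in the source, the sketch below substitutes a plain agreement count between candidate and reference column numbers; `vote_column` and the equality comparison are stand-ins, and only the overall structure (a weight array of size n, 5 repeated updates, selection of the maximum weight) follows the text.

```python
def vote_column(cand_cols, ref_cols, rounds=5):
    """Weight each candidate projector column by its agreement with the
    reference columns gathered from the 8 nearest decoding points, repeat
    the update `rounds` times, and return the heaviest candidate together
    with the final weights."""
    weights = [0] * len(cand_cols)
    for _ in range(rounds):
        for j, c in enumerate(cand_cols):
            weights[j] += sum(1 for u in ref_cols if u == c)
    best = max(range(len(cand_cols)), key=weights.__getitem__)
    return cand_cols[best], weights
```

The repeated update lets locally consistent columns dominate even when a single neighbor carries a wrong candidate.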
S5, deducing the projector row and column numbers of the encryption points from those of the decoding points. Traverse the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t; for each, search the upper and lower adjacent decoding points by nearest neighbor, and record the row-column values of the two adjacent decoding points as (c_up, r_up) and (c_down, r_down). If c_up = c_down and r_up + Grid = r_down - Grid are satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up and its row number is (r_up + r_down)/2, and the encryption point is recorded as T_i(x, y, u, v, c, r), where c is the resulting column number value of the encryption point and r is the resulting row number value; otherwise, the current encryption point is deleted. This step is continued until all encryption points have been judged.
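The consistency check and row inference of S5 can be sketched as follows; since the original row formula survives only as an image, the midpoint (r_up + r_down)/2 is used here as the reading consistent with the stated condition r_up + Grid = r_down - Grid.

```python
GRID = 8  # symbol pitch Grid, in projector pixels

def encryption_point_rowcol(upper, lower, grid=GRID):
    """Given the (col, row) of the decoding points directly above and below
    an image encryption point, return the encryption point's (col, row), or
    None when the checks fail and the point should be deleted."""
    c_up, r_up = upper
    c_dn, r_dn = lower
    if c_up == c_dn and r_up + grid == r_dn - grid:
        return c_up, (r_up + r_dn) // 2  # midway between the two decoding points
    return None
```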
S6, deducing the projector column numbers of the encryption lines from the projector column numbers of the decoding points and the encryption points. For each encryption line point L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p, search the left and right adjacent decoding points or encryption points, recorded as ST_left and ST_right, with corresponding projector column numbers c_left and c_right. If c_left + Grid = c_right - Grid is satisfied, the j-th point on the i-th encryption line is recorded as L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, where c is the resulting column number value of the point on the encryption line; otherwise, the point on the encryption line is deleted. The projector column number judgment is completed point by point for the points on the encryption lines. Finally, according to the topological relation of the points on a line, it is judged whether the column numbers of the points on the same encryption line are consistent; if so, the encryption line is retained, otherwise it is deleted.
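The per-point column inference of S6 can be sketched in one small function; assigning the mid column c_left + Grid to the line point is an assumption consistent with the staggered layout of S14.

```python
def encryption_line_col(c_left, c_right, grid=8):
    """Column number for a point on an encryption line, deduced from the left
    and right neighboring decoded points; the line runs midway between the
    two symbol columns, so c_left + grid is assigned when
    c_left + grid == c_right - grid, and the point is rejected otherwise."""
    if c_left + grid == c_right - grid:
        return c_left + grid
    return None
```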
S7, reconstructing the point cloud point by point according to the pixel coordinates of the decoding points, encryption points and encryption lines and the corresponding projector column numbers. For the extracted image decoding points S_i(x, y, u, v, c, r), image encryption points T_i(x, y, u, v, c, r) and points L_ij(x, y, u, v, c) on the encryption lines, 0 ≤ i ≤ l, 0 ≤ j ≤ p, according to the calibration parameters of the camera and the projector calibrated in advance, the corresponding object point coordinates are obtained from the image point coordinates (u, v) and the projector column number c by a line-plane intersection method, thereby completing the point cloud reconstruction of the single-frame coded image.
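The per-point reconstruction of S7 is a standard ray-plane intersection: the calibrated camera gives a viewing ray through (u, v), the projector column c defines a light plane, and the object point is their intersection. A minimal sketch, with the ray and plane assumed to be already expressed in a common world frame:

```python
import numpy as np

def line_plane_intersection(cam_center, ray_dir, plane_n, plane_d):
    """Intersect the viewing ray X = C + t * d with the light plane
    n . X + plane_d = 0 swept by one projector column; returns the 3-D
    object point, or None when the ray is parallel to the plane."""
    denom = float(np.dot(plane_n, ray_dir))
    if abs(denom) < 1e-12:
        return None
    t = -(float(np.dot(plane_n, cam_center)) + plane_d) / denom
    return cam_center + t * ray_dir
```

Deriving the plane coefficients for a given column c from the projector calibration is the remaining (calibration-specific) step not shown here.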
According to the above flow, FIG. 9 is a schematic diagram of the original object to be measured; a device in which the camera and the projector are fixedly connected is used to project onto and photograph the object to be measured, obtaining the modulated coded image shown in FIG. 10, and the decoding and reconstruction method of the invention yields the networking result of the reconstructed point cloud shown in FIG. 11.
As can be seen from the implementation steps, compared with the traditional method, the method has the following remarkable advantages:
1) By combining the advantages of space coding and line structured light scanning, the method overcomes the problems of space coding (strong noise interference, the large image space required for reconstruction, weak geometric detail recovery capability and poor extraction precision), and also overcomes the problems of line structured light scanning (only sparse structured light lines are used, the decoding capacity is small, the reconstructed structure information is sparse and unevenly distributed, marker points must be pasted for positioning, and at least two cameras are required).
2) Because dots are adopted as the basic code elements and a set of effective spatial decoding rules is designed, the coding pattern is less susceptible to noise interference than traditional space coding methods.
In addition, the invention is suitable for both single-camera and multi-camera structured light three-dimensional measurement systems. In specific implementation, the above flow can run automatically in the form of computer software, and a system device that runs the method should also fall within the protection scope.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (8)

1. A single frame measurement method combining space coding and line structured light, characterized by comprising the following steps:
S1, designing a single frame coding pattern, wherein the single frame coding pattern comprises decoding points for decoding, encryption points for encryption, and encryption lines; the decoding point code elements adopt simple dots, and the code element neighborhood relative displacement relation is determined by the misalignment pixels of adjacent code elements; the lines of the multi-line structured light and the columns where the decoding points and the encryption points are located are staggered along the column direction, and the line width of the multi-line structured light is set to 2 pixels;
S2, projecting the pre-coded single-frame coding pattern to the surface of the object to be measured through a projector;
S3, shooting the modulated coding pattern by a camera, and extracting the decoding points, encryption points and encryption lines;
S4, determining the projector row and column numbers of the decoding points by using the relation between the horizontal epipolar constraint and the neighborhood relative displacement;
S5, deducing the projector row and column numbers of the encryption points from the projector row and column numbers of the decoding points;
S6, deducing the projector column numbers of the encryption lines from the projector column numbers of the decoding points and the encryption points;
S7, reconstructing the point cloud point by point according to the pixel coordinates of the decoding points, encryption points and encryption lines and the corresponding projector column numbers.
2. The single frame measurement method combining space coding and line structured light according to claim 1, wherein S1 specifically comprises the following steps:
S11, designing basic code elements; the decoding point code elements of claim 1 adopt simple dots; the projector image space occupied by each code element of the single-frame space coding is assumed to be of size Grid, with Grid = 8 pixels, the spacing between code elements along the row and column directions is Grid, and the radius of the decoding point code element in projector image coordinates is recorded as a fixed fraction of Grid (the exact formula appears only as an image in the original);
S12, designing the adjacent code element arrangement; using the calibration parameters of the projector and the camera calibrated in advance, the projector image coordinates of each coding point are converted into projector epipolar image coordinates, and the row coordinate value of the epipolar coordinates is recorded as the E value, the E values of the coding points being mutually different; the code element neighborhood relative displacement relation is determined by the misalignment pixels of adjacent code elements, and taking the current code element as the center, adjacent code elements are offset up or down relative to the current code element by a fixed distance (given only as an image in the original); if the neighbor code element is located vertically above the current code element, it is marked as -1; if it is vertically level with the current code element, it is marked as 0; if it is located vertically below the current code element, it is marked as 1; according to the code element neighborhood relative displacement relation, the up-down misalignment of each column is randomly generated when the space coding pattern is generated, the D value corresponding to each column is determined and stored per column, and the i-th projector decoding point is recorded as M_i(D, E, COL, ROW), 0 < i < m, where m is the total number of projector decoding points, COL is the column number of the projector decoding point, ROW is the row number of the projector decoding point, D is the D value of column COL of the projector decoding point, and E is the E value of the projector decoding point;
S13, designing decoding points and encryption points; relatively small dots are designed as the projector encryption points, with the encryption point dot radius set smaller than the decoding point radius (the exact value appears only as an image in the original); the decoding points and the encryption points are staggered along the row direction;
S14, designing encryption lines; the lines of the multi-line structured light and the columns where the decoding points and the encryption points are located are staggered along the column direction, and the line width of the multi-line structured light is set to 2 pixels.
3. The single frame measurement method combining space coding and line structured light according to claim 1, wherein S2 is implemented as follows: the final coding pattern is produced by photolithography and then projected onto the object surface through a projection system; the projection system can adopt common LED white light or a light source of a specific wave band so as to reduce the influence of background light on projection pattern acquisition.
4. The single frame measurement method combining space coding and line structured light according to claim 1, wherein S3 is implemented as follows:
S31, shooting the coded pattern I modulated by the object from different angles by a camera;
S32, extracting decoding points; a central-dot convolution template (window size given only as an image in the original) is convolved with image I to obtain the convolution image C_1; local non-maximum suppression search is performed on C_1 with a Grid x 2Grid window, and the obtained local maximum points are the image decoding points; the integer pixel coordinates (x, y) of the image decoding points are refined by a Gaussian ellipse fitting method to obtain the sub-pixel coordinates (u, v) of the image decoding points; assuming the number of detected image decoding points is s, the image decoding points are recorded as S_i(x, y, u, v), 0 ≤ i ≤ s;
S33, extracting encryption points; for image decoding point S i (x, y, u, v) setting the rectangular search range of the image with the current decoding point
Figure FDA0004101356900000023
In accordance with->
Figure FDA0004101356900000024
Window pair convolution image C 1 Performing local non-maximum value inhibition search, wherein the obtained local maximum value point is an image encryption point; carrying out coordinate fine extraction on the whole pixel coordinates (x, y) of the image encryption points by adopting a Gaussian ellipse fitting method to obtain sub-pixel coordinates (u, v) of the image encryption points; assuming that the number of the detected image encryption points is T, the image encryption points are recorded as T i (x,y,u,v),0≤i≤t;
S34, extracting encryption lines; a vertical-line convolution template (window size given only as an image in the original) is convolved with image I to obtain the convolution image C_2; local non-maximum suppression search is performed on C_2 with a 2Grid x Grid window, and the obtained local maximum points are points on the encryption lines; Gaussian fitting is then performed point by point along the row direction to refine the integer pixel image coordinates (x, y) of the points on the encryption lines into sub-pixel coordinates (u, v); finally, points on the same segment are connected according to the topological connection relation of the lines; let the number of detected image encryption lines be l, and let the number of sub-pixel points on the i-th encryption line L_i, 0 ≤ i ≤ l, be p; a point on an encryption line is expressed as L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p.
5. The single frame measurement method combining space coding and line structured light according to claim 1, wherein S4 is implemented as follows:
S41, determining the candidate projector row and column numbers of the decoding points by using the horizontal epipolar constraint relation; according to the calibration parameters of the projector and the camera, the sub-pixel image coordinates (u, v) of the image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar image coordinates is compared one by one with the E_j value of each projector code point M_j(D, E, COL, ROW); when the condition |e_i - E_j| ≤ ε is satisfied (ε = 0.3 pixel), the projector row and column numbers (COL, ROW)_j corresponding to M_j are assigned to the decoding point S_i; the candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k) and the corresponding row numbers as CAND_i(r_1, …, r_k), where k is the number of candidate projector column numbers and (c_1, r_1), …, (c_k, r_k) is the sequence of candidate projector row-column values; this process is repeated until the candidate projector column numbers of all image decoding points have been determined; if no candidate projector column number can be found under the horizontal epipolar constraint relation, the currently extracted decoding point has a large error and is directly deleted;
S42, pruning the candidate projector row and column numbers of the decoding points by using the neighborhood relative displacement relation; for the i-th decoding point code element S_i(x, y, u, v), the left and right neighborhood decoding points are searched, and the corresponding D value, denoted d_i, is obtained according to the code element neighborhood relative displacement relation; the candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one, and for each candidate projector column number c_k, the corresponding D value, denoted here D_k, is obtained; when d_i = D_k, the candidate projector column number and the corresponding row number are retained, otherwise they are deleted; the candidate projector column numbers of the i-th decoding point after pruning by the neighborhood relative displacement relation are recorded as CAND_i(c_1, …, c_n) and the corresponding row numbers as CAND_i(r_1, …, r_n), where n is the number of retained candidate projector column numbers and (c_1, r_1), …, (c_n, r_n) is the sequence of candidate projector row-column values; this process is repeated until the candidate projector row and column numbers of all image decoding points have been pruned; if no D value can be obtained for the i-th decoding point, this step is skipped and all possible candidate projector row and column numbers are directly retained;
S43, constructing reference projector column numbers of the decoding points, performing statistical weight construction and updating on the candidate projector column numbers, and completing the final selection of projector row and column numbers; a neighbour searching method is adopted to find the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, and the candidate projector column numbers of these neighbouring decoding points are combined into one sequence, denoted UCAND_i(u_1, …, u_w), where w is the total number of candidate projector column number elements of the neighbouring decoding points; this sequence is called the reference projector column numbers of the current decoding point; for the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is established, and each candidate decoding column number c_j is compared in turn with the elements u_1, …, u_w of the reference projector column numbers UCAND_i(u_1, …, u_w), accumulating V_j according to the accumulation formula (image FDA0004101356900000041); all candidate coding column numbers are traversed to obtain the statistical weight array V; then, for each candidate decoding column number c_j of CAND_i(c_1, …, c_n), the column number comparison with the elements u_1, …, u_w is performed again and V_j is updated according to the update formula (image FDA0004101356900000042), wherein V_w is the weight value corresponding to element u_w; all candidate coding column numbers are traversed and the update of the statistical weight array V is repeated 5 times; finally, the statistical weight array of each decoding point is sorted, the column number with the largest weight is selected as the column number value of the current decoding point, and the decoding point is recorded as S_i(x, y, u, v, c, r), where (c, r) are the column and row number values of the resulting decoding point.
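The exact accumulation and update formulas are only available as images in the patent (FDA…041/042), so the following is a hedged sketch of the voting idea inferred from the surrounding text: each candidate column of the current decoding point is scored against the pooled candidate columns of its 8 nearest decoded neighbours, an equal column number contributing a vote, and the highest-scoring column wins. The vote rule is an assumption.

```python
# Step S43 sketch (assumed equal-column vote): score candidate columns
# against the neighbour candidate pool UCAND and pick the heaviest one.

from collections import Counter

def vote_column(cand_cols, neighbour_cols, rounds=5):
    """Return the candidate column with the largest accumulated weight."""
    pool = Counter(neighbour_cols)          # each u_w with its multiplicity
    weights = {c: 0 for c in cand_cols}
    for _ in range(rounds):                 # patent repeats the update 5 times
        for c in cand_cols:
            weights[c] += pool.get(c, 0)    # vote whenever c_j == u_w
    return max(cand_cols, key=lambda c: weights[c])

# Neighbours mostly agree on column 5, so candidate 5 beats candidate 9
print(vote_column([5, 9], [5, 5, 9, 5, 7, 5, 9, 5]))  # 5
```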
6. The method for single frame measurement with spatial coding combined with line structured light according to claim 1, wherein the implementation of S5 is: traversing the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t, searching for the upper and lower adjacent decoding points of each image encryption point by neighbour search, and recording the row-column number values of the two adjacent decoding points as (c_up, r_up) and (c_down, r_down); if c_up = c_down and r_up + Grid = r_down − Grid are satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up and the row number is given by the formula (image FDA0004101356900000043); the encryption point is recorded as T_i(x, y, u, v, c, r), c being the column number value of the obtained encryption point and r being the row number value of the obtained encryption point; otherwise the current encryption point is deleted; this step is continued until the judgment of all encryption points is completed.
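The row-number formula of S5 is an unrendered image, but the stated condition r_up + Grid = r_down − Grid suggests the encryption point sits midway between its vertical neighbours; the midpoint rule below is therefore an inference, and `GRID` is an illustrative spacing value:

```python
# Claim 6 sketch: an encryption point inherits the column of its vertical
# decoding-point neighbours and (assumed) the row midway between them,
# provided the neighbours straddle it by exactly Grid rows on each side.

GRID = 2  # illustrative grid spacing, not a value from the patent

def solve_encryption(upper, lower, grid=GRID):
    """upper/lower: (col, row) of the adjacent decoding points.
    Returns (col, row) for the encryption point, or None to delete it."""
    (cu, ru), (cl, rl) = upper, lower
    if cu == cl and ru + grid == rl - grid:
        return (cu, (ru + rl) // 2)  # row midway between the neighbours
    return None

print(solve_encryption((5, 10), (5, 14)))  # (5, 12)
print(solve_encryption((5, 10), (6, 14)))  # None -> delete the point
```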
7. The method for single frame measurement with spatial coding combined with line structured light according to claim 1, wherein the implementation of S6 is: for each point L_ij(x, y, u, v) on an encryption line, 0 ≤ i ≤ l, 0 ≤ j ≤ p, searching for the adjacent decoding point or encryption point on the left and on the right, denoted ST_left and ST_right, whose corresponding projector column numbers are c_left and c_right respectively; if c_left + Grid = c_right − Grid, the point on the encryption line is recorded as L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, c being the column number value of the resulting point on the encryption line; otherwise the point on the encryption line is deleted; the projector column number judgment of the encryption line is completed for the points on the line one by one; finally, whether the column numbers of the points on the same encryption line are consistent is judged according to the topological relation of the points on the line; if they are consistent, the encryption line is retained, otherwise it is deleted.
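A sketch of this per-point column assignment and the final consistency check on a whole encryption line; the function names and the `GRID` value are illustrative assumptions, and the assigned column c = c_left + Grid follows from the stated straddle condition:

```python
# Claim 7 sketch: assign each encryption-line point the column midway
# between its left/right neighbours, then keep the line only if all of
# its surviving points agree on one column.

GRID = 1  # illustrative column spacing, not a value from the patent

def line_point_col(c_left, c_right, grid=GRID):
    """Column for one line point, or None to delete the point."""
    if c_left + grid == c_right - grid:
        return c_left + grid
    return None

def keep_line(cols):
    """Return the line's column if all surviving points agree, else None."""
    cols = [c for c in cols if c is not None]
    return cols[0] if cols and all(c == cols[0] for c in cols) else None

pts = [line_point_col(4, 6), line_point_col(4, 6), line_point_col(3, 7)]
print(pts, keep_line(pts))  # [5, 5, None] 5
```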
8. The method for single frame measurement with spatial coding combined with line structured light according to claim 1, wherein the implementation of S7 is as follows: for the extracted image decoding points S_i(x, y, u, v, c, r), image encryption points T_i(x, y, u, v, c, r) and points L_ij(x, y, u, v, c) on the encryption lines, 0 ≤ i ≤ l, 0 ≤ j ≤ p, the corresponding object point coordinates are obtained by the line-plane intersection method from the image point coordinates (u, v) and the projector column number c, according to the calibration parameters of the camera and the projector calibrated in advance, thereby completing the point cloud reconstruction of the single-frame coded image.
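The line-plane intersection itself reduces to intersecting the camera ray through pixel (u, v) with the projector light plane of column c. The geometry below is a minimal sketch; the ray direction and plane coefficients are illustrative stand-ins for values that would come from the pre-calibrated camera/projector parameters:

```python
# Claim 8 sketch: intersect the camera viewing ray with the projector
# light plane of column c to recover the 3D object point.

def ray_plane_intersect(ray_dir, plane_n, plane_d):
    """Camera at origin; ray X = t * ray_dir; plane n·X + d = 0.
    Returns the 3D intersection point, or None if ray and plane are parallel."""
    denom = sum(n * r for n, r in zip(plane_n, ray_dir))
    if abs(denom) < 1e-12:
        return None
    t = -plane_d / denom
    return tuple(t * r for r in ray_dir)

# Illustrative pixel ray (from intrinsics) and light plane of column c
ray = (0.1, -0.05, 1.0)            # assumed back-projected pixel direction
plane = ((1.0, 0.0, -0.2), 0.5)    # assumed n·X + d = 0 for column c
print(ray_plane_intersect(ray, *plane))  # a 3D point on that light plane
```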
CN202010281986.XA 2020-04-11 2020-04-11 Single frame measurement method and system combining space coding and line structured light Active CN111336950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010281986.XA CN111336950B (en) 2020-04-11 2020-04-11 Single frame measurement method and system combining space coding and line structured light

Publications (2)

Publication Number Publication Date
CN111336950A CN111336950A (en) 2020-06-26
CN111336950B true CN111336950B (en) 2023-06-02

Family

ID=71182753

Country Status (1)

Country Link
CN (1) CN111336950B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7916932B2 (en) * 2005-02-16 2011-03-29 In-G Co., Ltd. Method and system of structural light-based 3D depth imaging using signal separation coding and error correction thereof
CN103796004B (en) * 2014-02-13 2015-09-30 西安交通大学 A kind of binocular depth cognitive method of initiating structure light
CN109242957A (en) * 2018-08-27 2019-01-18 深圳积木易搭科技技术有限公司 A kind of single frames coding structural light three-dimensional method for reconstructing based on multiple constraint
CN109993826B (en) * 2019-03-26 2023-06-13 中国科学院深圳先进技术研究院 Structured light three-dimensional image reconstruction method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant