CN111336950A - Single-frame measuring method and system combining spatial coding and line structure light - Google Patents

Single-frame measuring method and system combining spatial coding and line structure light

Info

Publication number
CN111336950A
Authority
CN
China
Prior art keywords
point
decoding
encryption
projector
points
Prior art date
Legal status
Granted
Application number
CN202010281986.XA
Other languages
Chinese (zh)
Other versions
CN111336950B (en)
Inventor
黄文超
龚静
刘改
Current Assignee
Wuhan Xuanjing Technology Co ltd
Original Assignee
Wuhan Xuanjing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Xuanjing Technology Co., Ltd.
Priority to CN202010281986.XA
Publication of CN111336950A
Application granted
Publication of CN111336950B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G01B11/2433 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting
    • G01B11/2509 Color coding
    • G01B11/2518 Projection by scanning of the object

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a single-frame measurement method and system combining spatial coding and line structured light. The method comprises: designing a single-frame spatial coding pattern containing decoding points used for decoding and encryption points used for encryption; projecting the pre-coded single-frame coding pattern onto the surface of the measured object with a projector; photographing the modulated coding pattern with a camera and extracting the decoding points, encryption points and encryption lines; determining the projector row and column numbers of the decoding points from the horizontal epipolar constraint together with the neighborhood relative-displacement relation; deriving the projector row numbers of the encryption points from the projector row numbers of the decoding points; deriving the projector column numbers of the encryption lines from the projector column numbers of the decoding points and encryption points; and reconstructing the point cloud point by point from the image coordinates of the decoding points, encryption points and encryption-line points and the corresponding projector column numbers. The invention combines the advantages of spatial coding and multi-line structured light, overcomes their respective shortcomings, and effectively improves the efficiency and quality of single-frame structured-light three-dimensional reconstruction.

Description

Single-frame measuring method and system combining spatial coding and line structure light
Technical Field
The invention relates to the field of three-dimensional measurement, and in particular to a single-frame measurement method and system combining spatial coding and line structured light.
Background
In ordinary image-based modeling, the light source is an uncoded source such as ambient light or white light, and image matching depends entirely on the feature points of the photographed object, so matching has always been a difficulty in image-based modeling. The structured light method differs in that the projected light source is coded: the camera photographs the image that the coded source projects onto the object after it has been modulated by the depth of the object surface. Because the structured light source carries many coding features, feature-point matching becomes straightforward; in other words, the structured light method actively supplies abundant feature points for matching instead of relying on the feature points of the photographed object, and therefore yields better matching results. In addition, because the objects photographed in ordinary image-based modeling vary, every matching task faces different images and the feature points must be extracted anew, whereas the structured light method always projects the same pattern, so the features are fixed and need not change with the scene, which reduces the matching difficulty and improves the matching efficiency.
The core of the structured light method is the design and recognition of the code. As shown in Fig. 1, structured light techniques can be roughly divided into six classes according to the coding mode: spatial coding, temporal coding, space-time coding, multi-wavelength composite coding, direct coding, and line-structured-light scanning. Spatial coding hides the coded information within the space of a single projected frame. Temporal coding projects several different coding patterns in time sequence and decodes by combining the corresponding image sequence. Space-time coding integrates the two schemes: the coding-capacity problem is solved by the temporal code and the reconstruction problem by the spatial code. Multi-wavelength composite coding acquires different coding information in different wavelength bands at the same moment for reconstruction. Direct coding assigns a code word directly to every pixel of the coding pattern by exploiting properties of the projected light (such as color or gray-level variation). Line-structured-light scanning projects multiple laser lines onto the surface of the measured object and extracts the line features in the image algorithmically to achieve matching and reconstruction.
Thanks to single-frame coding and decoding, spatial coding is better suited to dynamic and handheld measurement environments. Document US7768656B2 codes with large and small rectangular symbols; the rectangular symbols in the column direction are also directly connected, and special projector and camera mounting conditions are used to form a row-coordinate difference that approximates an epipolar constraint, so the large and small rectangles are expanded into 6 symbol types and a one-dimensional De Bruijn pseudo-random sequence over the 6 symbols yields 216 columns of unique code values; decoding requires the symbols of three adjacent columns. The drawbacks of this method are its rather strict hardware mounting requirements and a coding space that cannot be extended. Document CN101627280B uses diagonally placed square blocks and codes 2 x 3 row-column neighborhoods according to the diagonal direction and the black-and-white color of the corner squares; this adds a symbol check and thus some reliability, but the 2-symbol, 6th-order De Bruijn pseudo-random sequence offers only 64 columns of coding space, which cannot be extended. Document CN103400366B expresses the code value with lines of different thickness; the thickness is easily corrupted by noise, by the extraction method and by depth modulation of the object surface, leading to wrong code results. Document CN104457607A adopts a symmetric hourglass shape rotated by 0°/45°/90°/135° to form different symbols and codes 9 consecutive symbols in a specific 3 x 3 arrangement; this is essentially M-array coding, does not fully exploit the epipolar constraint, and its coding capacity is very limited. Document WO2018219227A1 codes with different figures placed in the white squares of a checkerboard; it uses no epipolar relationship, it is a two-dimensional coding method with very limited capacity, and such figure codes are particularly sensitive to noise and unfavorable for extracting high-precision matching points. Document CN109242957A uses a coding principle and basic symbols similar to US7768656B2 but adds an additional 2 x 3 row-column spatial constraint for M-array coding. Document CN109540023A adopts binary thin-line coding, coding by the presence or absence of thin lines on the four diagonals; the symbols are too thin and easily disturbed by noise, the diagonal thin lines are unfavorable for high-precision center extraction, and thin lines of adjacent symbols tend to interfere with each other. In summary, the advantages of spatial coding are: 1) an evenly distributed reconstruction result; 2) multi-frame registration can rely on geometric features, so no marker points need to be pasted on the object for auxiliary positioning; 3) reconstruction can be completed with a single camera and the overlapping field of view is large.
However, existing spatial coding methods still suffer from the following disadvantages: 1) the coding pattern is easily disrupted by the surface shape, color and material of the object; 2) reconstruction requires a comparatively large image space and the ability to recover geometric detail is weak: both the De Bruijn pseudo-random sequence and the M-array need a large image space for stable decoding, so the measurement of complex objects is relatively poor; 3) the coding is particularly susceptible to noise, and in some methods the symbols even interfere with one another; 4) the coding considers only the uniqueness of the decoding pattern and neglects the influence of the pattern design on the extraction of high-precision matching points; 5) the coding scheme is inflexible: the coding-capacity space is fixed and cannot be extended.
Multi-line structured-light scanning performs no coding at all. CN205960570A projects two groups of mutually crossing parallel structured light lines and distinguishes them with a dual camera to complete matching; the projection pattern is shown in Fig. 2. The advantages of the multi-line structured-light method are: 1) it is not easily disrupted by the surface shape, color and material of the object; 2) reconstruction requires an extremely small image space and the geometric detail recovery capability is strong; 3) the structured light lines are not easily disturbed by noise and barely interfere with each other; 4) no decoding is needed, dual-camera verification is used, and the structured light lines favor high-precision extraction; 5) the connectivity of points within a structured light line can be used to improve the reconstruction quality. Its disadvantages are: 1) only sparse structured light can be used, so only sparse and very unevenly distributed structural information is obtained; 2) marker points must be pasted on the object for auxiliary positioning, which is inconvenient in use; 3) at least two cameras are needed to complete the reconstruction, and the overlapping field of view is small.
Disclosure of Invention
Aiming at the problems of conventional spatially coded structured light and of multi-line structured-light scanning, the invention provides a single-frame measurement method combining spatial coding and line structured light. The method integrates the advantages of the spatial coding scheme and the multi-line structured-light scanning scheme: the coding-capacity problem is solved by the spatial coding technique and the reconstruction problem by the multi-line structured-light scanning technique. The method comprises the following steps:
s1, designing a single-frame coding pattern, including decoding points for decoding, encryption points for encryption and encryption lines;
s2, projecting the pre-coded single-frame coding pattern to the surface of the measured object through a projector;
s3, shooting the modulated coding pattern by a camera, and extracting decoding points, encryption points and encryption lines;
s4, determining the projector row and column number of the decoding point by using the relation between the horizontal epipolar line constraint and the neighborhood relative displacement;
s5, deducing the projector row number of the encryption point according to the projector row number of the decoding point;
s6, deducing the projector column number of the encryption line according to the projector column numbers of the decoding point and the encryption point;
and S7, reconstructing point cloud point by point according to the image point coordinates of the decoding point, the encryption point and the encryption line and the corresponding projector column number.
On the basis of the above technical solution, preferably, S1 specifically comprises the following steps:
S11, designing the basic symbol. The decoding-point symbols of the invention are simple dots. The projector image space occupied by each symbol of the single-frame spatial code is assumed to be of size Grid (Grid is set to 8 pixels in the invention); the spacing between symbols along the row and column directions is Grid, and the radius of a decoding-point symbol in projector image coordinates is recorded as [formula image].
S12, designing the arrangement of adjacent symbols. The horizontal epipolar constraint is determined by the mounting of the camera and the projector. Using the pre-calibrated parameters of the projector and camera, the projector image coordinates of every coding point are converted into projector epipolar image coordinates; the row coordinate of the epipolar coordinates is recorded as the E value, and the E value of every coding point is different. The neighborhood relative-displacement relation is determined by the vertical misalignment of adjacent symbols: relative to the current symbol, an adjacent symbol is shifted up or down by the distance [formula image]. A neighboring symbol above the current symbol in the vertical direction is marked -1; one level with the current symbol is marked 0; one below the current symbol is marked 1. According to the neighborhood relative-displacement relation there are 9 cases in total. When the spatial coding pattern is generated, the vertical misalignment of each column is generated randomly; the D value corresponding to each column is then determined and stored for that column. The i-th projector decoding point is recorded as M_i(D, E, COL, ROW), 0 < i < m, where m is the total number of projector decoding points, COL is the projector column number of the decoding point, ROW is its projector row number, D is the D value of column COL, and E is the E value of the decoding point.
S13, designing the decoding points and encryption points. A smaller dot is designed as the projector encryption point, and the radius of the encryption point is set to [formula image]. The decoding points and encryption points are interleaved along the row direction.
S14, designing the encryption lines. The columns occupied by the decoding and encryption points and the lines of the multi-line structured light are interleaved along the column direction, and the line width of the multi-line structured light is set to 2 pixels.
On the basis of the above technical solution, a preferred implementation of S2 is: the final coding pattern is produced by lithography and projected onto the surface of the object through a projection system; the projection system may use ordinary white LED light, or a light source of a specific wavelength band to reduce the influence of background light on the acquisition of the projected pattern.
On the basis of the above technical solution, preferably, S3 specifically comprises the following steps:
S31, the camera photographs the coding pattern I modulated by the object from different angles.
S32, decoding point extraction. Image I is convolved with the template [formula image] to obtain the convolution image C_1; a local non-maximum-suppression search over C_1 with a Grid × 2Grid window yields the local maxima, which are the image decoding points. The integer pixel coordinates (x, y) of each image decoding point are refined by Gaussian ellipse fitting to obtain its sub-pixel coordinates (u, v). Assuming the number of detected image decoding points is s, the image decoding points are recorded as S_i(x, y, u, v), 0 ≤ i ≤ s.
S33, encryption point extraction. For the i-th decoding point S_i(x, y, u, v) on the image, a rectangular image search range [formula image] centered on the current decoding point is set, and a local non-maximum-suppression search is performed on the convolution image C_1 within this range with a window of size [formula image]; the local maxima are the image encryption points. The integer pixel coordinates (x, y) of each image encryption point are refined by Gaussian ellipse fitting to obtain its sub-pixel coordinates (u, v). Assuming the number of detected image encryption points is t, the image encryption points are recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
S34, encryption line extraction. Image I is convolved with a vertical-line convolution template whose window size is [formula image] to obtain the convolution image C_2; a local non-maximum-suppression search over C_2 with a 2Grid × Grid window yields the local maxima, which are the points on the encryption lines. Gaussian fitting is then performed point by point along the row direction to refine the integer pixel coordinates (x, y) of the points on the encryption lines into sub-pixel coordinates (u, v). Finally, the points belonging to the same line segment are connected according to the topology of the line. Assuming the number of detected image encryption lines is l and the number of sub-pixel points on the i-th encryption line L_i (0 ≤ i ≤ l) is p, a point on an encryption line is recorded as L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p.
On the basis of the above technical solution, preferably, S4 specifically comprises the following steps:
S41, determining the candidate projector row and column numbers of the decoding points from the horizontal epipolar constraint. Using the calibration parameters of the projector and camera, the sub-pixel image coordinates (u, v) of an image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate e_i of the epipolar image coordinates is compared one by one with the E_j value of every projector code point M_j(D, E, COL, ROW). Whenever |e_i - E_j| ≤ ε is satisfied (ε is set to 0.3 pixel in the invention), the projector row and column numbers (COL, ROW)_j of M_j are assigned to the decoding point S_i as a candidate. The candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k) and the corresponding row numbers as (r_1, …, r_k), where k is the number of candidate projector column numbers and (c_1, r_1), …, (c_k, r_k) is the sequence of candidate projector row/column number pairs. The process continues until the candidate projector numbers of all image decoding points have been determined. If no candidate projector column number can be found from the horizontal epipolar constraint, the currently extracted decoding point contains a large error and is deleted directly.
S42, pruning the candidate projector row and column numbers of the decoding points with the neighborhood relative-displacement relation. For the i-th decoding point S_i(x, y, u, v), its left and right neighboring decoding points are searched, and the D value obtained from the neighborhood relative-displacement relation is recorded as d_i. The candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one; the D value stored for candidate column c_k is denoted D_k. If d_i = D_k, the candidate projector column number and its corresponding row number are retained, otherwise they are deleted. The candidate projector column numbers of the i-th decoding point surviving the neighborhood relative-displacement pruning are recorded as CAND_i(c_1, …, c_n) and the corresponding row numbers as (r_1, …, r_n), where n is the number of candidate projector column numbers after pruning and (c_1, r_1), …, (c_n, r_n) is the sequence of candidate projector row/column number pairs. The process continues until the candidate projector row and column numbers of all image decoding points have been pruned. If no D value can be obtained for the i-th decoding point, this step is skipped and all possible candidate projector row and column numbers are retained.
S43, constructing the reference projector column numbers of each decoding point, then building and updating a statistical weight array over its candidate projector column numbers to complete the pruning of the projector column number. A nearest-neighbor search finds the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, and the candidate projector column numbers of these neighboring decoding points are merged into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate column number elements of the neighboring decoding points; this sequence is called the reference projector column numbers of the current decoding point. For the candidate column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is created; the j-th candidate column number c_j is compared one by one with the elements of UCAND_i(u_1, …, u_w) and the weights are accumulated according to [formula image]; traversing all candidate column numbers yields the statistical weight array V. Next, the j-th candidate column number c_j of CAND_i(c_1, …, c_n) is again compared one by one with the elements of UCAND_i(u_1, …, u_w) and V is updated according to [formula image], where V_w is the weight corresponding to element u_w; all candidate column numbers are traversed and the update of the statistical weight array V is repeated 5 times. Finally, the statistical weight array of the coding point is sorted and the column number with the largest weight is selected as the column number of the current coding point; the decoding point is then recorded as S_i(x, y, u, v, c, r), where (c, r) are the resulting column and row numbers of the decoding point.
On the basis of the above technical solution, a preferred implementation of S5 is: for the i-th image encryption point T_i(x, y, u, v), 0 ≤ i ≤ t, the decoding points adjacent to it above and below are found by nearest-neighbor search, and the row/column numbers of the two adjacent decoding points are recorded as (c_up, r_up) and (c_down, r_down) respectively. If c_up = c_down and r_up + Grid = r_down - Grid are satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up and its row number is r_up + Grid (the row midway between the two decoding points); the encryption point is then recorded as T_i(x, y, u, v, c, r), where c is the resulting column number and r the resulting row number of the encryption point; otherwise the current encryption point is deleted. This step continues until all encryption points have been judged.
On the basis of the above technical solution, a preferred implementation of S6 is: for each point L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p, on an encryption line, the decoding or encryption points adjacent to it on the left and right are searched and recorded as ST_left and ST_right, with corresponding projector column numbers c_left and c_right. If c_left + Grid = c_right - Grid is satisfied, the j-th point on the i-th encryption line is recorded as L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, where c is the resulting column number of the point on the encryption line; otherwise the point on the encryption line is deleted. The projector column number judgment of the encryption lines is completed point by point in this way. Finally, according to the topology of the points on a line, it is checked whether the column numbers of the points on the same encryption line are consistent; if so, the encryption line is retained, otherwise it is deleted.
On the basis of the above technical solution, a preferred implementation of S7 is: for each image decoding point S_i(x, y, u, v, c, r), image encryption point T_i(x, y, u, v, c, r) and encryption-line point L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, the coordinates of the corresponding object-space point are obtained by line-plane intersection from the image point coordinates (u, v) and the projector column number c, using the pre-calibrated parameters of the camera and projector; this completes the point cloud reconstruction of the single-frame coded image.
The invention further provides a single-frame measurement system combining spatial coding and line structured light, which is used to implement the above single-frame measurement method combining spatial coding and line structured light.
Compared with the prior art, the single-frame measurement method and system combining spatial coding and line structured light of the invention have the following beneficial effects:
1) By combining the advantages of spatial coding and line-structured-light scanning, the method overcomes the problems of spatial coding (strong sensitivity to noise, the need for a large image space for reconstruction, weak geometric detail recovery and poor extraction precision) as well as the problems of line-structured-light scanning (only sparse structured light can be used, the decoding capacity is small, the reconstructed structural information is sparse and unevenly distributed, marker points are needed for positioning, and at least two cameras are required).
2) Because simple dots are used as the basic symbols and an effective set of spatial decoding rules is designed, the coding pattern is less easily disturbed by noise than in conventional spatial coding methods.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of structured light method classification;
FIG. 2 is a schematic view of a multi-line structured light scanning pattern;
FIG. 3 is a flow chart of a single frame measurement method combining spatial coding and line structure light according to the present invention;
FIG. 4 is a flow chart of a single frame coding pattern design;
FIG. 5 is a schematic diagram of a decoding point, an encryption point, and an encryption line;
FIG. 6 is a schematic diagram of a designed single frame coding pattern;
FIG. 7 is a flowchart of decoding point, encryption point, and encryption line extraction;
FIG. 8 is a decoding point decoding flow diagram;
FIG. 9 is a schematic diagram of a three-dimensional object to be measured;
FIG. 10 is a diagram showing a camera capturing a coded pattern modulated by a three-dimensional object to be measured;
fig. 11 shows the result of single frame network construction after decoding and reconstructing the acquired coding pattern.
Detailed Description
The technical solutions of the present invention are described clearly and completely below with reference to the drawings and embodiments. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without inventive effort fall within the scope of the present invention.
Aiming at the problems of conventional spatially coded structured light and of multi-line structured-light scanning, the invention provides a single-frame measurement method combining spatial coding and line structured light. The method integrates the advantages of the spatial coding scheme and the multi-line structured-light scanning scheme: the coding-capacity problem is solved by the spatial coding technique and the reconstruction problem by the multi-line structured-light scanning technique. As shown in Fig. 3, the method comprises the following steps:
S1, designing a single-frame coding pattern comprising decoding points for decoding, encryption points for encryption, and encryption lines. The specific flow is shown in Fig. 4.
S11, designing the basic symbol. To address the shortcomings of conventional spatial coding patterns, such as low robustness, poor precision and the large image space required for decoding, the decoding-point symbols of the invention are simple dots. The projector image space occupied by each symbol of the single-frame spatial code is assumed to be of size Grid (Grid is set to 8 pixels in the invention); the spacing between symbols along the row and column directions is Grid, and the radius of a decoding-point symbol in projector image coordinates is recorded as [formula image]. The advantages of dots are that, first, the dot feature is distinct and easy to extract; second, the dot extraction precision is high, which benefits high-precision measurement; and finally, a dot occupies little image space, is little affected by the surface shape, color and material of the object, and gives strong geometric detail recovery. With this symbol design, many problems of existing spatial coding patterns are solved.
S12, designing the arrangement of adjacent symbols. Because the basic symbol is a single dot, neither a De Bruijn pseudo-random sequence nor an M-array can be used for coding, so the coding uses the horizontal epipolar constraint together with the neighborhood relative-displacement relation of the symbols. The horizontal epipolar constraint is determined by the mounting of the camera and the projector. Using the pre-calibrated parameters of the projector and camera, the projector image coordinates of every coding point are converted into projector epipolar image coordinates; the row coordinate of the epipolar coordinates is recorded as the E value, and the E value of every coding point is different. The neighborhood relative-displacement relation is determined by the vertical misalignment of adjacent symbols. Relative to the current symbol, an adjacent symbol is shifted up or down by the distance [formula image]. A neighboring symbol above the current symbol in the vertical direction is marked -1; one level with the current symbol is marked 0; one below the current symbol is marked 1. According to the symbol neighborhood relative-displacement relation there are 9 cases in total: case 1 (0, 0), case 2 (0, 1), case 3 (-1, 1), case 4 (-1, -1), case 5 (1, -1), case 6 (1, 0), case 7 (0, -1), case 8 (1, 1) and case 9 (-1, 0); this vertical-misalignment code is called the D value. When the spatial coding pattern is generated, the vertical misalignment of each column is generated randomly (a sketch of this generation is given below); after generation the D value corresponding to each column is determined and stored for that column, and the resulting local misalignment is shown in Fig. 5. The i-th projector decoding point is recorded as M_i(D, E, COL, ROW), 0 < i < m, where m is the total number of projector decoding points, COL is the projector column number of the decoding point, ROW is its projector row number, D is the D value of column COL, and E is the E value of the decoding point.
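The following Python sketch illustrates how the random per-column vertical misalignment and the corresponding D values could be generated. It is only an illustration, not the reference implementation of the invention: the function names, the assumption that a column takes one of two vertical levels, and the half-Grid stagger are choices made here for clarity (the exact stagger distance is given only as a formula image in the original).

    # Illustrative sketch only: generate the random per-column vertical
    # misalignment of step S12 and the D value that each column stores.
    # The two-level offset and the half-Grid stagger are assumptions.
    import random

    GRID = 8                 # symbol pitch in projector pixels (from the description)
    STAGGER = GRID // 2      # assumed vertical stagger between misaligned columns

    def make_offsets(num_cols, seed=0):
        """One of two assumed vertical levels (0 or 1) per symbol column."""
        rng = random.Random(seed)
        return [rng.choice((0, 1)) for _ in range(num_cols)]

    def d_value(offsets, col):
        """D value of a column: position of its left and right neighboring
        columns relative to the current column, coded -1 (above), 0 (level),
        +1 (below); nine (left, right) combinations are possible."""
        def rel(other):
            return (offsets[other] > offsets[col]) - (offsets[other] < offsets[col])
        left = rel(col - 1) if col > 0 else 0
        right = rel(col + 1) if col + 1 < len(offsets) else 0
        return (left, right)

    offsets = make_offsets(16)
    for col in range(16):
        print(col, offsets[col] * STAGGER, d_value(offsets, col))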
S13, designing the decoding points and encryption points. If every point were a projector decoding point the computational load would be large; considering the continuity along the column direction, a smaller dot is designed as the projector encryption point, and the radius of the encryption point is set to [formula image]. As shown in Fig. 5, the decoding points and encryption points are interleaved along the row direction. The projector decoding points are used in the decoding operation of the post-processing to determine the column number of each point; the projector encryption points do not take part in the decoding operation, and their column numbers are determined from the decoding-point information of the upper and lower neighborhoods. This design of decoding and encryption points effectively reduces the computational load of decoding while also reducing the confusability of the symbols.
S14, designing the encryption lines. Since the spatial coding scheme and the multi-line structured-light scanning scheme have to be fused, as shown in Fig. 5 the columns occupied by the decoding and encryption points and the lines of the multi-line structured light are interleaved along the column direction, and the line width of the multi-line structured light is set to 2 pixels. The projector decoding points and encryption points solve the coding-capacity problem of the multi-line structured light: the row and column numbers of the scanned lines are derived from the projector row and column numbers of the decoding and encryption points. The projector encryption lines in turn solve the reconstruction quality and accuracy problem of spatial coding. The combined design of decoding points, encryption points and encryption lines effectively unites the respective advantages of spatial coding and multi-line structured light and improves the efficiency and quality of the three-dimensional reconstruction; a rendering sketch of the resulting pattern is given below.
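A minimal rendering sketch of the single-frame pattern described in S11 to S14 follows. The dot radii, the exact stagger and the image size are not given numerically in the text (they appear only as formula images), so the values used here are assumptions chosen purely for illustration.

    # Assumed-parameter sketch: draw decoding dots, smaller encryption dots and
    # 2-pixel-wide encryption lines on an alternating-column layout.
    import numpy as np

    GRID = 8
    R_DECODE, R_ENCRYPT = 3, 1      # assumed radii in projector pixels
    H, W = 512, 512                 # assumed projector image size

    def draw_dot(img, cy, cx, r):
        yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r * r] = 255

    def render_pattern(offsets):
        img = np.zeros((H, W), dtype=np.uint8)
        dot_cols = list(range(GRID // 2, W - GRID, 2 * GRID))    # columns carrying dots
        for k, x in enumerate(dot_cols):
            shift = offsets[k % len(offsets)] * GRID // 2        # random column misalignment
            for i, y in enumerate(range(GRID // 2 + shift, H - GRID, GRID)):
                # decoding dots and encryption dots alternate along the column
                draw_dot(img, y, x, R_DECODE if i % 2 == 0 else R_ENCRYPT)
        for x in range(GRID // 2 + GRID, W - GRID, 2 * GRID):    # encryption lines between dot columns
            img[:, x:x + 2] = 255                                # line width of 2 pixels
        return img

    pattern = render_pattern([0, 1, 1, 0, 1, 0, 0, 1])
    print(pattern.shape, int(pattern.max()))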
S2, projecting the pre-coded single-frame coding pattern onto the surface of the measured object through the projector. As shown in Fig. 6, the final coding pattern is produced by lithography and projected onto the surface of the object through a projection system; the projection system may use ordinary white LED light, or a light source of a specific wavelength band to reduce the influence of background light on the acquisition of the projected pattern.
S3, photographing the modulated coding pattern with the camera and extracting the decoding points, encryption points and encryption lines. The specific flow is shown in Fig. 7.
S31, the camera photographs the coding pattern I modulated by the object from different angles.
S32, decoding point extraction. Image I is convolved with the template [formula image] to obtain the convolution image C_1; a local non-maximum-suppression search over C_1 with a Grid × 2Grid window yields the local maxima, which are the image decoding points. The integer pixel coordinates (x, y) of each image decoding point are refined by Gaussian ellipse fitting to obtain its sub-pixel coordinates (u, v). Assuming the number of detected image decoding points is s, the image decoding points are recorded as S_i(x, y, u, v), 0 ≤ i ≤ s. A sketch of this extraction step is given below.
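The sketch below illustrates the dot extraction of S32; the encryption points (S33) and the encryption-line points (S34) are extracted analogously with other templates and window sizes. Because the convolution template and the Gaussian-ellipse fit are given only as formula images in the original, the zero-mean disc template, the fixed threshold and the centroid-based sub-pixel step below are stand-ins introduced here, not the patent's own formulas.

    # Illustrative stand-in for S32: template convolution, local non-maximum
    # suppression in a Grid x 2*Grid window, and a simple sub-pixel refinement.
    import numpy as np
    from scipy import ndimage

    GRID = 8

    def dot_template(radius=3):
        size = 2 * radius + 1
        yy, xx = np.mgrid[:size, :size] - radius
        disc = ((yy ** 2 + xx ** 2) <= radius ** 2).astype(float)
        return disc - disc.mean()          # zero mean: flat background scores ~0

    def extract_dots(image, radius=3, thresh=50.0):
        c1 = ndimage.convolve(image.astype(float), dot_template(radius))
        local_max = ndimage.maximum_filter(c1, size=(GRID, 2 * GRID))
        ys, xs = np.nonzero((c1 == local_max) & (c1 > thresh))
        points = []
        for y, x in zip(ys, xs):
            y0, x0 = max(y - radius, 0), max(x - radius, 0)
            patch = image[y0:y + radius + 1, x0:x + radius + 1].astype(float)
            w = patch.sum()
            if w <= 0:
                continue
            py, px = np.mgrid[:patch.shape[0], :patch.shape[1]]
            v = y0 + (patch * py).sum() / w    # sub-pixel row (centroid stands in
            u = x0 + (patch * px).sum() / w    # for the Gaussian ellipse fit)
            points.append((int(x), int(y), float(u), float(v)))   # S_i(x, y, u, v)
        return points

    img = np.zeros((64, 64), dtype=np.uint8)
    img[30:34, 40:44] = 200                    # one synthetic dot
    print(extract_dots(img))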
S33, encryption point extraction. For the i-th image decoding point S_i(x, y, u, v), a rectangular image search range [formula image] centered on the current decoding point is set, and a local non-maximum-suppression search is performed on the convolution image C_1 within this range with a window of size [formula image]; the local maxima are the image encryption points. The integer pixel coordinates (x, y) of each image encryption point are refined by Gaussian ellipse fitting to obtain its sub-pixel coordinates (u, v). Assuming the number of detected image encryption points is t, the image encryption points are recorded as T_i(x, y, u, v), 0 ≤ i ≤ t.
S34, encryption line extraction. Image I is convolved with a vertical-line convolution template whose window size is [formula image] to obtain the convolution image C_2; a local non-maximum-suppression search over C_2 with a 2Grid × Grid window yields the local maxima, which are the points on the encryption lines. Gaussian fitting is then performed point by point along the row direction to refine the integer pixel coordinates (x, y) of the points on the encryption lines into sub-pixel coordinates (u, v). Finally, the points belonging to the same line segment are connected according to the topology of the line. Assuming the number of detected image encryption lines is l and the number of sub-pixel points on the i-th encryption line L_i (0 ≤ i ≤ l) is p, a point on an encryption line is recorded as L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p.
S4, determining the projector row and column numbers of the decoding points using the horizontal epipolar constraint together with the neighborhood relative-displacement relation. The specific flow is shown in Fig. 8.
S41, determining the candidate projector row and column numbers of the decoding points from the horizontal epipolar constraint. Using the calibration parameters of the projector and camera, the sub-pixel image coordinates (u, v) of an image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate e_i of the epipolar image coordinates is compared one by one with the E_j value of every projector code point M_j(D, E, COL, ROW). Whenever |e_i - E_j| ≤ ε is satisfied (ε is set to 0.3 pixel in the invention), the projector row and column numbers (COL, ROW)_j of M_j are assigned to the decoding point S_i as a candidate. The candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k) and the corresponding row numbers as (r_1, …, r_k), where k is the number of candidate projector column numbers and (c_1, r_1), …, (c_k, r_k) is the sequence of candidate projector row/column number pairs. The above process continues until the candidate projector numbers of all image decoding points have been determined. It should be noted that if no candidate projector column number can be found from the horizontal epipolar constraint, the currently extracted decoding point contains a large error and is deleted directly.
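The candidate search of S41 can be summarised by the short sketch below; the conversion of (u, v) to epipolar coordinates is assumed to be done elsewhere with the calibration parameters, and the data layout is only an illustration.

    # Sketch of S41: collect candidate projector (COL, ROW) pairs whose E value
    # lies within eps of the epipolar row coordinate e_i of the image point.
    def candidate_rowcols(e_i, projector_points, eps=0.3):
        """projector_points: iterable of dicts {"E": float, "COL": int, "ROW": int}."""
        return [(m["COL"], m["ROW"]) for m in projector_points
                if abs(e_i - m["E"]) <= eps]

    projector_points = [{"E": 12.0, "COL": 5, "ROW": 40},
                        {"E": 12.2, "COL": 17, "ROW": 40},
                        {"E": 30.0, "COL": 9, "ROW": 41}]
    print(candidate_rowcols(12.1, projector_points))   # both E = 12.0 and E = 12.2 match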
S42, pruning the candidate projector row and column numbers of the decoding points with the neighborhood relative-displacement relation. For the i-th decoding point S_i(x, y, u, v), its left and right neighboring decoding points are searched, and the D value obtained from the neighborhood relative-displacement relation is recorded as d_i. The candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one; the D value stored for candidate column c_k is denoted D_k. If d_i = D_k, the candidate projector column number and its corresponding row number are retained, otherwise they are deleted. The candidate projector column numbers of the i-th decoding point surviving the neighborhood relative-displacement pruning are recorded as CAND_i(c_1, …, c_n) and the corresponding row numbers as (r_1, …, r_n), where n is the number of candidate projector column numbers after pruning and (c_1, r_1), …, (c_n, r_n) is the sequence of candidate projector row/column number pairs. The above process continues until the candidate projector row and column numbers of all image decoding points have been pruned. It should be noted that if no D value can be obtained for the i-th decoding point, this step is skipped and all possible candidate projector row and column numbers are retained.
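The pruning of S42 can be sketched as follows; column_d_value stands for a lookup of the D value stored for a projector column when the pattern was generated, and is an assumed helper rather than part of the patent text.

    # Sketch of S42: keep only the candidates whose stored D value matches the
    # D value measured from the left/right neighbors in the image.
    def prune_by_d(candidates, d_measured, column_d_value):
        """candidates: list of (COL, ROW); d_measured: observed D value or None."""
        if d_measured is None:
            return candidates              # no D value: keep all candidates
        return [(col, row) for col, row in candidates
                if column_d_value(col) == d_measured]

    d_table = {5: (-1, 0), 17: (0, 1)}     # hypothetical per-column D values
    print(prune_by_d([(5, 40), (17, 40)], (-1, 0), d_table.get))   # only column 5 survives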
S43, constructing the reference projector column numbers of each decoding point, then building and updating a statistical weight array over its candidate projector column numbers to complete the pruning of the projector column number. A nearest-neighbor search finds the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, and the candidate projector column numbers of these neighboring decoding points are merged into one sequence, recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate column number elements of the neighboring decoding points; this sequence is called the reference projector column numbers of the current decoding point. For the candidate column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is created; the j-th candidate column number c_j is compared one by one with the elements of UCAND_i(u_1, …, u_w) and the weights are accumulated according to [formula image]; traversing all candidate column numbers yields the statistical weight array V. Next, the j-th candidate column number c_j of CAND_i(c_1, …, c_n) is again compared one by one with the elements of UCAND_i(u_1, …, u_w) and V is updated according to [formula image], where V_w is the weight corresponding to element u_w; all candidate column numbers are traversed and the update of the statistical weight array V is repeated 5 times. Finally, the statistical weight array of the coding point is sorted and the column number with the largest weight is selected as the column number of the current coding point; the decoding point is then recorded as S_i(x, y, u, v, c, r), where (c, r) are the resulting column and row numbers of the decoding point. An illustrative sketch of this voting procedure is given below.
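The exact accumulation and update formulas of S43 appear only as images in the original, so the sketch below uses an assumed consistency rule (a reference column supports a candidate if it lies within a small span of columns) merely to illustrate the idea of voting against the reference projector column numbers of the 8 nearest decoding points, with the weighted update repeated 5 times as stated above.

    # Sketch of S43 with an assumed support rule (|c - u| <= span).
    def vote_column(candidates, reference_cols, span=2, rounds=5):
        """candidates: candidate column numbers of the current decoding point.
        reference_cols: column numbers gathered from the 8 nearest decoding points."""
        if not candidates:
            return None
        # initial statistical weight array V
        v = [sum(1.0 for u in reference_cols if abs(c - u) <= span) for c in candidates]
        for _ in range(rounds):            # repeated weighted update (5 rounds)
            ref_w = {u: max((v[i] for i, c in enumerate(candidates)
                             if abs(c - u) <= span), default=0.0)
                     for u in reference_cols}
            v = [sum(ref_w[u] for u in reference_cols if abs(c - u) <= span)
                 for c in candidates]
        return candidates[max(range(len(candidates)), key=lambda i: v[i])]

    print(vote_column([5, 17], [4, 6, 7, 16]))   # 5 is better supported by its neighbors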
S5, deriving the projector row numbers of the encryption points from the projector row numbers of the decoding points. Each image encryption point T_i(x, y, u, v), 0 ≤ i ≤ t, is traversed; the decoding points adjacent to it above and below are found by nearest-neighbor search, and the row/column numbers of the two adjacent decoding points are recorded as (c_up, r_up) and (c_down, r_down) respectively. If c_up = c_down and r_up + Grid = r_down - Grid are satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up and its row number is r_up + Grid (the row midway between the two decoding points); the encryption point is then recorded as T_i(x, y, u, v, c, r), where c is the resulting column number and r the resulting row number of the encryption point; otherwise the current encryption point is deleted. This step continues until all encryption points have been judged.
S6, deriving the projector column numbers of the encryption lines from the projector column numbers of the decoding points and encryption points. For each point L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p, on an encryption line, the decoding or encryption points adjacent to it on the left and right are searched and recorded as ST_left and ST_right, with corresponding projector column numbers c_left and c_right. If c_left + Grid = c_right - Grid is satisfied, the j-th point on the i-th encryption line is recorded as L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, where c is the resulting column number of the point on the encryption line; otherwise the point on the encryption line is deleted. The projector column number judgment of the encryption lines is completed point by point in this way. Finally, according to the topology of the points on a line, it is checked whether the column numbers of the points on the same encryption line are consistent; if so, the encryption line is retained, otherwise it is deleted. A sketch of how the numbers of the encryption points and the encryption-line points are derived is given below.
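The derivation in S5 and S6 reduces to the two neighbor checks sketched below; the nearest-neighbor search itself is assumed to be done elsewhere, and the midway row/column assignment mirrors the conditions stated above.

    # Sketch of S5 and S6: propagate row/column numbers from decoded neighbors.
    GRID = 8

    def encryption_point_rowcol(up, down):
        """up, down: (col, row) of the decoding points above and below."""
        (c_up, r_up), (c_down, r_down) = up, down
        if c_up == c_down and r_up + GRID == r_down - GRID:
            return c_up, r_up + GRID       # the encryption point sits midway
        return None                        # inconsistent neighbors: discard

    def line_point_col(c_left, c_right):
        """c_left, c_right: projector column numbers of the nearest decoded
        points to the left and right of a point on an encryption line."""
        if c_left + GRID == c_right - GRID:
            return c_left + GRID           # the line lies between the two columns
        return None

    print(encryption_point_rowcol((5, 40), (5, 56)))   # -> (5, 48)
    print(line_point_col(5, 21))                       # -> 13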
S7, reconstructing the point cloud point by point from the image coordinates of the decoding points, encryption points and encryption-line points and the corresponding projector column numbers. For each extracted image decoding point S_i(x, y, u, v, c, r), image encryption point T_i(x, y, u, v, c, r) and encryption-line point L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, the coordinates of the corresponding object-space point are obtained by line-plane intersection from the image point coordinates (u, v) and the projector column number c, using the pre-calibrated parameters of the camera and projector; this completes the point cloud reconstruction of the single-frame coded image.
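The line-plane intersection of S7 can be sketched as below. The camera intrinsic matrix K and the plane parameters (normal n and offset d of the light plane for column c, with n·X + d = 0 in camera coordinates) stand for the pre-calibrated camera and projector parameters; the toy values are assumptions used only for illustration.

    # Sketch of S7: intersect the viewing ray through (u, v) with the projector
    # light plane that corresponds to column number c.
    import numpy as np

    def reconstruct_point(u, v, plane_n, plane_d, K):
        """Returns the 3-D point in camera coordinates, or None if the ray is
        (numerically) parallel to the light plane."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
        denom = plane_n @ ray
        if abs(denom) < 1e-12:
            return None
        t = -plane_d / denom                             # camera center at the origin
        return t * ray

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    plane_n, plane_d = np.array([1.0, 0.0, -0.5]), 100.0   # toy light plane for one column
    print(reconstruct_point(350.0, 250.0, plane_n, plane_d, K))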
Following the above process, Fig. 9 shows a schematic diagram of an object to be measured. A device in which a camera and a projector are rigidly connected projects the pattern onto the object and photographs it, giving the modulated coded image shown in Fig. 10; the mesh-construction result of the reconstructed point cloud shown in Fig. 11 is then obtained with the decoding and reconstruction method set forth by the invention.
As can be seen from the implementation steps, compared with traditional methods the invention has the following notable advantages:
1) By combining the advantages of spatial coding and line-structured-light scanning, the method overcomes the problems of spatial coding (strong sensitivity to noise, the need for a large image space for reconstruction, weak geometric detail recovery and poor extraction precision) as well as the problems of line-structured-light scanning (only sparse structured light can be used, the decoding capacity is small, the reconstructed structural information is sparse and unevenly distributed, marker points must be pasted for positioning, and at least two cameras are required).
2) Because simple dots are used as the basic symbols and an effective set of spatial decoding rules is designed, the coding pattern is less easily disturbed by noise than in conventional spatial coding methods.
In addition, the invention is applicable both to single-camera and to multi-camera structured-light three-dimensional measurement systems. In a specific implementation, the above processes can run automatically as computer software, and a system device that runs the method also falls within the scope of protection.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (9)

1. A single-frame measurement method combining spatial coding and line structured light, characterized by comprising the following steps:
s1, designing a single-frame coding pattern, including decoding points for decoding, encryption points for encryption and encryption lines;
s2, projecting the pre-coded single-frame coding pattern to the surface of the measured object through a projector;
s3, shooting the modulated coding pattern by a camera, and extracting decoding points, encryption points and encryption lines;
s4, determining the projector row and column number of the decoding point by using the relation between the horizontal epipolar line constraint and the neighborhood relative displacement;
s5, deducing the projector row number of the encryption point according to the projector row number of the decoding point;
s6, deducing the projector column number of the encryption line according to the projector column numbers of the decoding point and the encryption point;
and S7, reconstructing point cloud point by point according to the image point coordinates of the decoding point, the encryption point and the encryption line and the corresponding projector column number.
2. A single frame measurement method combining spatial coding and line structured light, wherein S1 specifically includes the following steps:
S11, designing basic code elements: the decoding-point code elements of the invention are simple round dots; the projector image area occupied by each code element of the single-frame spatial code is assumed to be of size Grid (Grid is set to 8 pixels in the invention), the spacing between code elements along the row direction and the column direction is Grid, and the radius of the decoding-point code element in projector image coordinates is recorded as [formula FDA0002446939650000011];
S12, designing the arrangement of adjacent code elements: the calibration parameters of the projector and the camera are calibrated in advance, the projector image coordinates of each coding point are converted into projector epipolar image coordinates, the row coordinate value of the epipolar coordinates is recorded as the E value, and the E values of the coding points are all different; the relative-displacement relation of adjacent code elements is determined by the number of pixels by which adjacent code elements are staggered; taking the current code element as the center, the distance by which an adjacent code element is staggered up or down relative to the current code element is [formula FDA0002446939650000012]; a neighborhood code element located above the current code element in the vertical direction is marked as -1; an adjacent code element level with the current code element in the vertical direction is marked as 0; a neighborhood code element below the current code element in the vertical direction is marked as 1; when the spatial coding pattern is generated, the up-down stagger of each column is generated randomly according to the neighborhood relative-displacement relation of the code elements (9 cases in total), ensuring that the D value corresponding to each column is unique; each column then stores its D value, and the i-th projector decoding point is recorded as M_i(D, E, COL, ROW), 0 < i < m, where m is the total number of projector decoding points, COL is the projector column number of the decoding point, ROW is the projector row number of the decoding point, D is the D value of column COL of the decoding point, and E is the E value of the projector decoding point;
S13, designing decoding points and encryption points: a small round dot is designed as the projector encryption point, the radius of the encryption point being set as [formula FDA0002446939650000021]; the decoding points and the encryption points are staggered along the row direction;
S14, designing encryption lines: the columns containing the decoding points and the encryption points and the lines of the multi-line structured light are staggered along the column direction, and the line width of the multi-line structured light is set to 2 pixels.
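To make the layout of S11-S14 concrete, the sketch below generates a toy version of the coding pattern with NumPy. It is an illustration under stated assumptions, not the patented pattern: the dot radii, the stagger distance of Grid/2 pixels and the overall pattern size are assumed values, since the exact radii and stagger distance are given by formulas not reproduced in this text.

```python
# Minimal pattern-generation sketch in the spirit of S11-S14 (assumed parameters).
import numpy as np

def make_pattern(cols=32, rows=24, grid=8, r_dec=3.0, r_enc=1.5, seed=0):
    h, w = rows * 2 * grid, cols * 2 * grid
    img = np.zeros((h, w), np.uint8)
    yy, xx = np.mgrid[0:h, 0:w]
    rng = np.random.default_rng(seed)
    # Random vertical stagger (-1, 0, +1) per dot column (S12); the stagger
    # distance of grid // 2 pixels is an assumption.
    stagger = rng.integers(-1, 2, size=cols)
    for c in range(cols):
        x = c * 2 * grid + grid          # dot columns alternate with line columns (S14)
        dy = stagger[c] * grid // 2
        for r in range(rows):
            y_dec = r * 2 * grid + grid + dy
            y_enc = y_dec + grid         # encryption dots interleave with decoding dots (S13)
            img[(yy - y_dec) ** 2 + (xx - x) ** 2 <= r_dec ** 2] = 255
            img[(yy - y_enc) ** 2 + (xx - x) ** 2 <= r_enc ** 2] = 255
    # 2-pixel-wide vertical encryption lines between the dot columns (S14).
    for c in range(cols):
        x = c * 2 * grid
        img[:, x:x + 2] = 255
    return img, stagger

pattern, stagger = make_pattern()
```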
3. A single-frame measurement method combining spatial coding and line structured light, wherein the implementation manner of S2 is: the final coding pattern is produced by lithography and projected onto the object surface through a projection system; the projection system may use ordinary white LED light, or a light source of a specific waveband to reduce the influence of background light on the acquisition of the projected pattern.
4. A single-frame measurement method combining spatial coding and line structured light, wherein the implementation manner of S3 is as follows:
S31, the camera captures, from different angles, the coded pattern I modulated by the object;
S32, extracting decoding points: a convolution template [formula FDA0002446939650000022] is used to convolve image I, giving convolution image C_1; a local non-maximum suppression search is carried out on the convolution image C_1 with a Grid × 2Grid window, and the local maximum points are the image decoding points; the integer pixel coordinates (x, y) of the image decoding points are refined by a Gaussian ellipse fitting method to obtain the sub-pixel coordinates (u, v) of the image decoding points; assuming that s image decoding points are detected, an image decoding point is recorded as S_i(x, y, u, v), 0 ≤ i ≤ s;
S33, extracting encryption points: for an image decoding point S_i(x, y, u, v), a rectangular image search range [formula FDA0002446939650000023] is set around the current decoding point; a local non-maximum suppression search is carried out on the convolution image C_1 with a window of size [formula FDA0002446939650000024], and the local maximum points are the image encryption points; the integer pixel coordinates (x, y) of the image encryption points are refined by a Gaussian ellipse fitting method to obtain the sub-pixel coordinates (u, v) of the image encryption points; assuming that t image encryption points are detected, an image encryption point is recorded as T_i(x, y, u, v), 0 ≤ i ≤ t;
S34, extracting encryption lines: a vertical-line convolution template of window size [formula FDA0002446939650000025] is used to convolve image I, giving convolution image C_2; a local non-maximum suppression search is carried out on the convolution image C_2 with a 2Grid × Grid window, and the local maximum points are points on the encryption lines; Gaussian fitting is then performed point by point along the row direction to refine the integer pixel image coordinates (x, y) of the points on the encryption lines into sub-pixel coordinates (u, v); finally, points on the same line segment are connected according to the topological connectivity of the lines; assuming that l image encryption lines are detected and the i-th encryption line L_i, 0 ≤ i ≤ l, contains p sub-pixel points, a point on an encryption line is recorded as L_ij(x, y, u, v), 0 ≤ i ≤ l, 0 ≤ j ≤ p.
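The extraction steps S32-S34 follow a common template-convolution plus non-maximum-suppression scheme. The sketch below illustrates the decoding-point case (S32) with NumPy/SciPy; the circular template and the intensity-weighted centroid refinement are simplified stand-ins, since the patent's convolution templates are given by unreproduced formulas and its sub-pixel step uses Gaussian ellipse fitting.

```python
# Sketch of decoding-point extraction (S32): convolve with a dot template,
# keep local maxima, then refine each maximum to sub-pixel accuracy.
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def dot_template(radius):
    size = int(2 * radius + 1)
    y, x = np.mgrid[0:size, 0:size] - radius
    return (x * x + y * y <= radius * radius).astype(float)

def extract_dots(image, grid=8, radius=3.0, thresh=0.5):
    tmpl = dot_template(radius)
    tmpl -= tmpl.mean()                              # zero-mean so flat areas score ~0
    c1 = convolve(image.astype(float), tmpl, mode="nearest")
    # Local non-maximum suppression in a Grid x 2*Grid window.
    local_max = (c1 == maximum_filter(c1, size=(2 * grid, grid)))
    ys, xs = np.nonzero(local_max & (c1 > thresh * c1.max()))
    points = []
    for y, x in zip(ys, xs):
        # Sub-pixel refinement: intensity-weighted centroid in a small patch
        # (the patent uses Gaussian ellipse fitting instead).
        y0, y1 = max(y - 2, 0), min(y + 3, image.shape[0])
        x0, x1 = max(x - 2, 0), min(x + 3, image.shape[1])
        patch = image[y0:y1, x0:x1].astype(float)
        if patch.sum() == 0:
            continue
        gy, gx = np.mgrid[y0:y1, x0:x1]
        points.append((x, y,
                       (patch * gx).sum() / patch.sum(),
                       (patch * gy).sum() / patch.sum()))
    return points  # (x, y, u, v): integer and sub-pixel coordinates
```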
5. A single-frame measurement method combining spatial coding and line structured light, wherein the implementation manner of S4 is as follows:
S41, determining candidate projector row and column numbers of the decoding points using the horizontal epipolar constraint: according to the calibration parameters of the projector and the camera, the sub-pixel image coordinates (u, v) of an image decoding point S_i(x, y, u, v) are converted into camera epipolar image coordinates, and the row coordinate value e_i of the epipolar image coordinates is compared one by one with the E_j value of each projector code point M_j(D, E, COL, ROW); when |e_i - E_j| ≤ ε is satisfied (ε is set to 0.3 pixel in the invention), the projector row and column numbers (COL, ROW)_j corresponding to M_j are assigned to the decoding point S_i; the candidate projector column numbers of the i-th decoding point are recorded as CAND_i(c_1, …, c_k) and the corresponding row numbers as CAND_i(r_1, …, r_k), where k is the number of candidate projector column numbers and (c_1, r_1), …, (c_k, r_k) is the sequence of candidate projector row and column number values; this process is repeated until the candidate projector numbers of all image decoding points have been determined; if no candidate projector column number can be found under the horizontal epipolar constraint, the currently extracted decoding point has a large error and is deleted directly;
S42, pruning the candidate projector row and column numbers of the decoding points using the neighborhood relative-displacement relation: for the i-th decoding point S_i(x, y, u, v), the left and right neighborhood decoding points are searched and the corresponding D value, denoted d_i, is obtained from the neighborhood relative-displacement relation of the code elements; the candidate projector column numbers CAND_i(c_1, …, c_k) of the i-th decoding point are taken out one by one, and the D value corresponding to candidate column number c_k, denoted D_k, is obtained; when d_i = D_k, the candidate projector column number and its corresponding row number are retained, otherwise they are deleted; the candidate projector column numbers of the i-th decoding point remaining after pruning by the neighborhood relative-displacement relation are recorded as CAND_i(c_1, …, c_n) and the corresponding row numbers as CAND_i(r_1, …, r_n), where n is the number of candidate projector column numbers after pruning and (c_1, r_1), …, (c_n, r_n) is the sequence of candidate projector row and column number values; this process is repeated until the candidate projector row and column numbers of all image decoding points have been pruned; if no D value can be obtained for the i-th decoding point, this step is skipped and all possible candidate projector row and column numbers are retained;
S43, constructing the reference projector column numbers of the decoding points, then constructing and updating the statistical weight array of the candidate projector column numbers to complete the selection of the projector column number: a nearest-neighbor search is used to find the 8 decoding points closest to the current decoding point S_i(x, y, u, v) in image space, and the candidate projector column numbers of these neighboring decoding points are combined into a sequence recorded as UCAND_i(u_1, …, u_w), where w is the total number of candidate projector column number elements of the neighboring decoding points; this sequence is called the reference projector column numbers of the current decoding point; for the candidate decoding column numbers CAND_i(c_1, …, c_n) of the i-th decoding point S_i(x, y, u, v), a statistical weight array V of size n with initial value 0 is established, and the j-th candidate decoding column number c_j is compared in turn with the elements u_w of the reference projector column numbers UCAND_i(u_1, …, u_w), accumulating according to [formula FDA0002446939650000041]; all candidate coding column numbers are traversed to obtain the statistical weight array V; then, for the j-th candidate decoding column number c_j of CAND_i(c_1, …, c_n), the column numbers are again compared in turn with the elements u_w of UCAND_i(u_1, …, u_w) and updated according to [formula FDA0002446939650000042], where V_w is the weight value corresponding to element u_w; all candidate coding column numbers are traversed, and the update is repeated 5 times to obtain the statistical weight array V; finally, the statistical weight array of the coding point is sorted, the column number with the largest weight is selected as the column number value of the current coding point, and the decoding point is then recorded as S_i(x, y, u, v, c, r), where (c, r) are the resulting column and row number values of the decoding point.
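The decoding logic of S41-S43 can be summarized as: generate candidates by the epipolar E value, prune them by the neighborhood D value, then let neighboring decoding points vote on the column number. The sketch below captures that flow in simplified form; the weight accumulation and update formulas of the patent are not reproduced here, so a plain single-pass count over the 8 nearest neighbors stands in for the 5-round weighted update, and the data structures and tolerances are hypothetical.

```python
# Simplified sketch of S41-S43. Each projector code point is a dict with keys
# "E", "col", "row", "D"; the voting rule below is an assumption, not the
# patent's (unreproduced) weight formulas.
import numpy as np

def epipolar_candidates(e, code_points, eps=0.3):
    """S41: keep projector code points whose E value matches e within eps."""
    return [(cp["col"], cp["row"], cp["D"]) for cp in code_points
            if abs(e - cp["E"]) <= eps]

def prune_by_d(cands, d_observed):
    """S42: keep candidates whose column D value matches the observed one.
    If no D value could be observed, all candidates are retained."""
    if d_observed is None:
        return cands
    return [c for c in cands if c[2] == d_observed]

def vote_column(cands, neighbor_cols):
    """S43 (simplified): pick the candidate column best supported by the
    8 nearest neighbors. The patent iterates a weight-update rule 5 times;
    this sketch uses a single counting pass with an assumed tolerance of
    2 columns instead."""
    if not cands:
        return None
    weights = [sum(1 for u in neighbor_cols if abs(u - col) <= 2)
               for col, _, _ in cands]
    best = int(np.argmax(weights))
    return cands[best][:2]   # (col, row) of the winning candidate
```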
6. A single-frame measurement method combining spatial coding and line structured light, wherein the implementation manner of S5 is: the image encryption points T_i(x, y, u, v), 0 ≤ i ≤ t, are traversed; for each, the upper and lower neighboring decoding points of the image encryption point are found by nearest-neighbor search, and the row and column number values of the two neighboring decoding points are recorded as (c_up, r_up) and (c_down, r_down) respectively; if c_up = c_down and r_up + Grid = r_down - Grid are satisfied, the column number of the current image encryption point T_i(x, y, u, v) is c_up and its row number is [formula FDA0002446939650000043]; the encryption point is then recorded as T_i(x, y, u, v, c, r), where c is the resulting column number value of the encryption point and r is the resulting row number value of the encryption point; otherwise, the current encryption point is deleted; this step is repeated until all encryption points have been judged.
7. A single-frame measurement method combining spatial coding and line structured light, wherein the implementation manner of S6 is: for a point L_ij(x, y, u, v) on an encryption line, 0 ≤ i ≤ l, 0 ≤ j ≤ p, the decoding points or encryption points adjacent to it on the left and on the right are searched and recorded as ST_left and ST_right, with corresponding projector column numbers c_left and c_right respectively; if c_left + Grid = c_right - Grid is satisfied, the point on the encryption line is recorded as L_ij(x, y, u, v, c), 0 ≤ i ≤ l, 0 ≤ j ≤ p, where c is the resulting column number value of the point on the encryption line; otherwise, the point on the encryption line is deleted; the projector column numbers of the encryption lines are judged in this way point by point; finally, whether the column numbers of the points on the same encryption line are consistent is judged according to the topological relation of the points on the line; if they are consistent, the encryption line is retained, otherwise it is deleted.
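Claims 6 and 7 both propagate projector numbers by simple consistency checks on neighboring features, as sketched below. Taking the midpoint row for the encryption point and the in-between column for the encryption-line point are assumptions made for illustration; the patent states the consistency conditions but leaves the exact row expression in an unreproduced formula.

```python
# Sketch of S5/S6: propagate projector numbers from decoding points to
# encryption points and encryption-line points via their neighbors.

def label_encryption_point(upper, lower, grid=8):
    """upper/lower are (col, row) of the decoding points above and below."""
    (c_up, r_up), (c_dn, r_dn) = upper, lower
    if c_up == c_dn and r_up + grid == r_dn - grid:
        return c_up, (r_up + r_dn) // 2      # column inherited, row midway (assumed)
    return None                              # inconsistent -> discard point

def label_line_point(left_col, right_col, grid=8):
    """left_col/right_col are projector columns of the nearest decoding or
    encryption points to the left and right of a point on an encryption line."""
    if left_col + grid == right_col - grid:
        return left_col + grid               # line column between them (assumed)
    return None                              # inconsistent -> discard point

def label_line(point_cols):
    """Keep an encryption line only if all its labeled points agree (claim 7)."""
    cols = [c for c in point_cols if c is not None]
    return cols[0] if cols and all(c == cols[0] for c in cols) else None
```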
8. A single-frame measurement method combining spatial coding and line structured light, wherein the implementation manner of S7 is: for the extracted image decoding points S_i(x, y, u, v, c, r), image encryption points T_i(x, y, u, v, c, r) and points L_ij(x, y, u, v, c) on the encryption lines, 0 ≤ i ≤ l, 0 ≤ j ≤ p, the coordinates of the corresponding object-space points are obtained by a line-surface (line-plane) intersection method from the image point coordinates (u, v) and the projector column number c, using the pre-calibrated parameters of the camera and the projector, thereby completing the point cloud reconstruction of the single-frame coded image.
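The line-surface intersection of S7 reconstructs each labeled feature as the intersection of the camera ray through (u, v) with the plane swept by projector column c. The sketch below shows that geometry under standard pinhole assumptions; the calibration matrices K_cam, K_proj, R_cp, t_cp and the plane construction are illustrative and not taken from the patent's calibration procedure.

```python
# Sketch of S7: intersect the camera ray through (u, v) with the plane of
# projector column c. R_cp, t_cp map camera coordinates to projector
# coordinates; all quantities are illustrative.
import numpy as np

def camera_ray(u, v, K_cam):
    """Unit ray through pixel (u, v), expressed in camera coordinates."""
    d = np.linalg.inv(K_cam) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def projector_column_plane(col, K_proj, R_cp, t_cp):
    """Plane n·X + d = 0 of projector column col, in camera coordinates."""
    fx, cx = K_proj[0, 0], K_proj[0, 2]
    # In projector coordinates the column satisfies fx*x + (cx - col)*z = 0.
    n_proj = np.array([fx, 0.0, cx - col])
    n_cam = R_cp.T @ n_proj                # plane normal in camera coordinates
    d = float(n_proj @ t_cp)               # offset so the plane passes through the projector center
    return n_cam, d

def intersect(u, v, col, K_cam, K_proj, R_cp, t_cp):
    ray = camera_ray(u, v, K_cam)
    n, d = projector_column_plane(col, K_proj, R_cp, t_cp)
    s = -d / (n @ ray)                     # ray parameter at the plane
    return s * ray                         # 3D point in camera coordinates
```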
9. A single-frame measurement system combining spatial coding and line structured light, characterized in that it is configured to perform the single-frame measurement method combining spatial coding and line structured light as claimed in claims 1 to 8.
CN202010281986.XA 2020-04-11 2020-04-11 Single frame measurement method and system combining space coding and line structured light Active CN111336950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010281986.XA CN111336950B (en) 2020-04-11 2020-04-11 Single frame measurement method and system combining space coding and line structured light

Publications (2)

Publication Number Publication Date
CN111336950A true CN111336950A (en) 2020-06-26
CN111336950B CN111336950B (en) 2023-06-02

Family

ID=71182753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010281986.XA Active CN111336950B (en) 2020-04-11 2020-04-11 Single frame measurement method and system combining space coding and line structured light

Country Status (1)

Country Link
CN (1) CN111336950B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060210145A1 (en) * 2005-02-16 2006-09-21 Sungkyunkwan University Foundation For Corporate Collaboration Method and system of structural light-based 3d depth imaging using signal separation coding and error correction thereof
US20150229911A1 (en) * 2014-02-13 2015-08-13 Chenyang Ge One method of binocular depth perception based on active structured light
CN109242957A (en) * 2018-08-27 2019-01-18 深圳积木易搭科技技术有限公司 A kind of single frames coding structural light three-dimensional method for reconstructing based on multiple constraint
CN109993826A (en) * 2019-03-26 2019-07-09 中国科学院深圳先进技术研究院 A kind of structural light three-dimensional image rebuilding method, equipment and system

Also Published As

Publication number Publication date
CN111336950B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
US7103212B2 (en) Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
WO2018219156A1 (en) Structured light coding method and apparatus, and terminal device
CN111981982B (en) Multi-directional cooperative target optical measurement method based on weighted SFM algorithm
CN111336949B (en) Space coding structured light three-dimensional scanning method and system
Furukawa et al. One-shot entire shape acquisition method using multiple projectors and cameras
CN109373912A (en) A kind of non-contact six-freedom displacement measurement method based on binocular vision
JPWO2014077187A1 (en) 2D code
JP2011237296A (en) Three dimensional shape measuring method, three dimensional shape measuring device, and program
CN113505626A (en) Rapid three-dimensional fingerprint acquisition method and system
CN110044927B (en) Method for detecting surface defects of curved glass by space coding light field
CN108895979B (en) Line segment coded structured light depth acquisition method
US20190213753A1 (en) Three-dimensional measurement apparatus and three-dimensional measurement method
CN111336950B (en) Single frame measurement method and system combining space coding and line structured light
Cheng et al. 3D object scanning system by coded structured light
CN111783877B (en) Depth information measurement method based on single-frame grid composite coding template structured light
CN111307069A (en) Light three-dimensional scanning method and system for dense parallel line structure
CN111340957B (en) Measurement method and system
CN108038898B (en) Single-frame binary structure optical coding and decoding method
KR20190103833A (en) Method for measuring 3-dimensional data in real-time
Liang et al. A structured light encoding method for M-array technique
JP2006023133A (en) Instrument and method for measuring three-dimensional shape
Adán et al. Disordered patterns projection for 3D motion recovering
CN110926370B (en) Measurement method and system
JP2006058092A (en) Three-dimensional shape measuring device and method
Su et al. DOE-based Structured Light For Robust 3D Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant