CN117781925A - 3D contour reconstruction system adopting four-codeword coding diagram to assist phase unwrapping


Info

Publication number: CN117781925A
Application number: CN202311534513.6A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 武迎春, 杨娜, 刘丽, 李晋红, 王安红
Assignee (current and original): Taiyuan University of Science and Technology
Classification: Image Processing (AREA)
Abstract

The invention belongs to the technical field of high-speed measurement and discloses a 3D contour reconstruction system that uses a four-codeword coding map to assist phase unwrapping. The specific technical scheme is as follows: an FPGA module generates N sinusoidal fringe patterns and 1 gray-level code pattern; an image projection module projects the generated images onto the measured object; an image acquisition module captures the N deformed sinusoidal fringe patterns and 1 deformed gray-level code pattern; a timing control unit keeps the acquisition and projection modules working synchronously, so that each projected frame is captured at the same time; and the FPGA module processes the captured deformed patterns to obtain the 3D information of the object.

Description

3D contour reconstruction system adopting four-codeword coding diagram to assist phase unwrapping
Technical Field
The invention belongs to the technical field of high-speed measurement, and particularly relates to a 3D contour reconstruction system adopting a four-codeword coding diagram to assist phase unwrapping.
Background
Optical three-dimensional shape measurement is widely applied in fields such as industrial quality inspection, virtual reality, reverse engineering, and cultural relic preservation. Fringe projection profilometry (FPP) has attracted wide attention as a three-dimensional measurement technique because of its simple structure, low cost, and easy fringe generation. In FPP, a projector projects a fringe pattern onto an object, and a camera captures a deformed pattern containing the depth information of the object. By analyzing the acquired deformed fringe pattern, the phase distribution of the object can be obtained; because this process involves an arctangent operation, the wrapped phase obtained from the sinusoidal fringe patterns must be unwrapped.
In recent decades, many phase unwrapping methods have been developed. They can traditionally be divided into two main categories: spatial phase unwrapping (SPU) and temporal phase unwrapping (TPU). SPU algorithms are relative phase unwrapping algorithms based on the assumption that the phase is spatially continuous; they unwrap the wrapped phase without requiring additional patterns. However, if the wrapped phase is unreliable, a phase error may be generated and propagated to subsequent points, causing phase unwrapping errors. In particular, for objects where the phase difference between adjacent pixels exceeds 2π, such as discontinuous objects or multiple isolated objects, phase unwrapping fails; these cases can be handled by TPU methods.
The main idea of TPU is that the phase value of each pixel is a function of time and that pixels are processed independently of one another during phase unwrapping, so no error-propagation phenomenon occurs. TPU methods are therefore better suited to measuring complex shapes or separated objects. Typical TPU approaches include: 1) multi-wavelength/multi-frequency methods, 2) Gray-code methods, and 3) phase-encoding methods. In a multi-wavelength/multi-frequency approach, at least two wrapped phases, calculated from different sets of phase-shift patterns with distinct frequencies, are typically used to generate a fringe-order map. Miao et al. proposed a phase unwrapping method based on dual-frequency fringes, which achieves higher accuracy by synthesizing the projected high-frequency and low-frequency fringes. Gui et al. proposed an improved dual-frequency phase-encoded fringe projection method aimed at achieving absolute phase recovery. Yi et al. proposed a phase unwrapping method with improved fringe order that exploits multi-frequency characteristics to improve the accuracy of the recovered phase details.
Gray-code methods eliminate phase discontinuity by projecting a series of preset binary patterns; the number of projected Gray-code patterns is log₂M, where M is the number of fringe periods. Therefore, when M is large, more patterns must be projected onto the measured object. Wang et al. proposed a redesigned Gray-code phase unwrapping method that can effectively achieve an accurate three-dimensional shape while reducing the pattern count by at least two Gray-code patterns. Qi et al. proposed a new absolute phase measurement method using a small number of patterns, which obtains a large number of fringe-order codewords without reducing the intensity level of each staircase. Ran et al. proposed a gray-level encoding strategy based on half-period encoding, which can generate n² codewords with n gray levels in a single pattern; furthermore, their method is insensitive to moderate image blur. Phase-encoding methods embed codewords in phase form into periodic sinusoidal fringes and thus have low sensitivity to image noise, ambient light, and surface contrast variation. Chen et al. proposed a two-digit phase-coding (TDPC) strategy that generates more codewords without adding extra patterns. Zheng et al. proposed embedding codewords simultaneously in the phase and intensity domains of a composite fringe pattern, significantly expanding the encoding range of valid codewords. Wu et al. proposed combining a unit-frequency ramp pattern with three phase-shifted sinusoidal fringe patterns into a composite pattern, achieving TPU with a minimum number of fringe patterns.
Because using high-frequency fringe patterns in the measurement process effectively improves measurement accuracy, a large number of codewords must be generated when a gray-level coding or phase-coding method is adopted. Generating many codewords requires either a higher quantization level or multiple patterns. However, higher quantization levels easily lead to quantization errors, while using too many patterns reduces measurement speed.
Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides a 3D contour reconstruction system that uses a four-codeword code map to assist phase unwrapping. Each period of the sinusoidal fringe pattern is divided into regions, and a different codeword is embedded in each region to generate the code map. The proposed method can effectively reduce the quantization level and avoid quantization errors while providing a large coding range.
To achieve the above purpose, the technical scheme adopted by the invention is as follows: the 3D contour reconstruction system with a four-codeword code map for assisting phase unwrapping builds a typical N-step phase-shifting profilometry measurement system composed of four hardware parts: an FPGA module, an image projection module, an image acquisition module, and a timing control unit. The FPGA module generates N sinusoidal fringe patterns and 1 gray-level code pattern; the image projection module projects the generated images onto the measured object; the image acquisition module captures the N deformed sinusoidal fringe patterns and 1 deformed gray-level code pattern; the timing control unit keeps the acquisition and projection modules working synchronously, so that each projected frame is captured at the same time; and the FPGA module processes the captured deformed patterns to obtain the 3D information of the object;
the FPGA module handles both image encoding and image decoding. In the image encoding process, sinusoidal fringe patterns with a phase shift of 2π/N are projected onto the surface of a three-dimensional diffuse-reflecting object, and the corresponding deformed pattern captured by the camera is expressed as:

I_n(x, y) = A(x, y) + B(x, y)·cos(φ(x, y) + 2πn/N)    (1)

where (x, y) are the pixel coordinates, A(x, y) is the average intensity, B(x, y) is the intensity modulation, and φ(x, y) is the phase information of the fringes on the object surface;
combining formula (1) to obtain the wrapping phase:
since equation (2) relates to the arctangent operation,is limited to [ -pi, pi]Denoted as wrap phase, a continuous phase is obtained using a fringe order k (x, y), denoted as:
in a two-bit digital phase encoding strategy, each stripe is divided into two parts according to the phase values in the sinogram, and when each half-cycle is encoded using m quantization levels, m is generated 2 Code words g 1 Codeword representing first half, g 2 Representing the codeword of the second half, codeword g 1 And g 2 Embedded in the code map, expressed as:
wherein,as an upward rounding function, T is the value of each cycleThe number of pixels, m, is the quantization level and mod () is the remainder operation.
Combining the above equations, the wrapped phase is calculated by equation (2) and the code phase by equation (4). After the codewords g₁ and g₂ are obtained from the range of the code phase, the fringe order k is calculated as:
k = g₁ + (g₂ - 1) × 4    (5)
after calculating the fringe order k, the absolute phase is obtained through formula (3);
a single large-range gray-level coding strategy: the computer-generated sinusoidal fringe patterns are projected onto a reference plane through the projector, and the camera captures the deformed fringe patterns; the captured patterns are uploaded to the computer, and the three-dimensional information of the object surface is recovered. Among the N computer-generated sinusoidal fringe patterns, the phase shift between two adjacent patterns is 2π/N; the optical axis OC of the camera is perpendicular to the reference plane, and the optical axis OP of the projector intersects OC at an angle θ, where θ lies in the YOZ plane;
the gray-level coding method obtains the fringe order by projecting an additional code pattern, thereby realizing phase unwrapping. Compared with the Gray-code method and with directly projecting a fringe-order map, this method balances the quantization level against the number of additionally projected patterns. When measurement speed is the priority, the number of additional code patterns must be reduced, but this increases the required quantization level, and the accuracy of the calculated fringe order is affected by the quantization level. Conversely, when higher measurement accuracy is required, a lower quantization level should be used, but this increases the number of projected patterns. The invention seeks to accurately calculate the fringe order using fewer code patterns with lower quantization levels while ensuring that the range of fringe orders is sufficiently large. On this basis, the wrapped phases φ₁ and φ₂ with different principal value intervals are calculated; according to the signs of φ₁ and φ₂, one period of the sinusoidal fringe pattern is divided into four parts, and a codeword is embedded in each part. The range of the fringe order can reach 16 within one coding cycle. In existing partial methods, when the fringe order reaches 16, the code pattern must involve 4 quantization levels; with only two quantization levels, the maximum fringe order can reach only 4.
The image decoding process is as follows: considering factors such as ambient light and reflectivity, the acquired code pattern is binarized before the fringe order is calculated. The average intensity A(x, y) is calculated from the acquired sinusoidal fringe patterns and used as the binarization threshold:

A(x, y) = (1/N) Σ_{n=0}^{N-1} I_n(x, y)    (6)

The code map I_c is binarized using the following formula:

I_c^b(x, y) = 1 if I_c(x, y) ≥ A(x, y), and 0 otherwise    (7)
by using N sinograms, the wrap phase is calculatedAnd-> And->The values of (2) are truncated in the (-pi, pi) and (-pi/2, pi/2) ranges, respectively,/->And->For determining the position of four regions within each period of the code map, the four regions being represented by four mask maps, as shown in the following formula:
using 4 mask patterns, a codeword corresponding to each region is calculated by the following formula:
I_c^g denotes the acquired code map. The bwlabel function in MATLAB is used to segment and label the 4 mask maps; every pixel within the same connected domain has the same fringe order. After the codeword of each region is obtained, the fringe order k is calculated; the relation between the codewords and the fringe order is expressed by equations (10) and (11).
Here m = 1, 2, …, M; since each period is divided into four regions, there are four codewords in total, so M = 4. mod() returns the remainder of the division of two numbers, and abs() denotes the absolute-value operation. The absolute phase is obtained using the fringe order and the wrapped phase calculated by equation (2):

Φ(x, y) = φ(x, y) + 2π·k(x, y)    (12)
the FPGA module further includes error codeword correction: correcting the obtained code image, wherein the average intensity A is used as a threshold value for binarizing the code image, correcting the binarized result after neighborhood prediction operation, filling the edges of the image by using adjacent points, carrying out convolution operation on the filled image and a designed convolution kernel to realize neighborhood prediction, obtaining a neighborhood prediction result, and correcting the code image, wherein the corrected code image is represented as follows:
wherein P (x, y) represents gray values corresponding to the point (x, y) and 8 neighborhood points thereof, G represents a convolution kernel of 3 x 3, the edge of which is 1, the middle of which is 0,representing a rounding down operation.
Each region of the mask map is corrected using a pixel-by-pixel processing method, and the codeword corresponding to each region is obtained from the corrected mask map. Since the codeword of each region is a unique value, correction uses the majority-decision principle: the bwlabel function of MATLAB divides each mask into connected regions, every pixel within the same connected region has the same fringe order, and the values of all pixels in each region are corrected using the following formula:
where round[·] is the rounding operation, g_l(x, y) is the codeword of pixel (x, y) in the l-th region, ḡ_l is the average value of all codewords in the l-th region, and U is the number of codewords present in the l-th region.
Finally, the corrected mask₁ and mask₄ are used to correct the wrapped phase calculated by equation (2).
after the corrected codeword is obtained, the fringe order is determined using equations (10) and (11), and then the absolute phase can be calculated using equation (12).
Compared with the prior art, the invention has the following beneficial effects: the invention provides an absolute phase recovery method that effectively expands the measurement range of FPP by designing a single large-range gray-level code map; the quantization level of the proposed code map is low, involving only two gray levels. In addition, an error-correction algorithm based on neighborhood prediction is proposed to improve decoding accuracy, and the excellent performance of the proposed method is verified through simulations and experiments on different measurement objects. Compared with existing coding methods, the proposed method can calculate the fringe order with higher precision and over a larger range using only one extra code pattern, and it has broad application prospects in high-speed measurement.
Drawings
FIG. 1 is a diagram of a three-dimensional contour reconstruction system according to the present invention.
Fig. 2 is a flow chart of an encoding and decoding process.
FIG. 3 is the mathematical model of sub-region division. FIG. 3(a) is the distribution of the tangent function; FIG. 3(b) is the arctangent curve for the principal value interval (-π, π); FIG. 3(c) is the arctangent curve for the principal value interval (-π/2, π/2); FIG. 3(d) is the wrapped-phase diagram in the (-π, π) range; FIG. 3(e) is the wrapped-phase diagram in the (-π/2, π/2) range; FIG. 3(f) is a sinusoidal fringe pattern; and FIG. 3(g) is the codeword diagram embedded in the code map.
Fig. 4 is a correction chart of the code pattern.
FIG. 5 is the simulation of water-ripple reconstruction. FIG. 5(a) is the distribution of the measured ripples; FIG. 5(b) is one of the projected sinusoidal patterns; FIG. 5(c) is the projected code map; FIG. 5(d) is the deformed sinusoidal pattern; FIG. 5(e) is the deformed code map; FIG. 5(f) is the wrapped phase φ₁; FIG. 5(g) is the wrapped phase φ₂; FIG. 5(h) is the fringe-order map; and FIG. 5(i) is the reconstruction result.
Fig. 6 is a simulation diagram of step reconstruction, fig. 6 (a) is a distribution diagram of the measured steps, fig. 6 (b) is a result diagram after reconstruction, and fig. 6 (c) is an error distribution diagram.
FIG. 7 shows the acquired and calculated results during the dinosaur-model reconstruction. FIG. 7(a) is a deformed sinusoidal fringe pattern; FIG. 7(b) is the deformed code map; FIG. 7(c) is the first wrapped-phase map calculated from the acquired sinusoidal fringes; FIG. 7(d) is the second wrapped-phase map calculated from the acquired sinusoidal fringes; FIG. 7(e) is the corrected code map; FIG. 7(f) is the mask map; FIG. 7(g) is the corrected mask map; FIG. 7(h) is the calculated g_i; FIG. 7(i) is the corrected g_i; FIG. 7(j) is the fringe-order map obtained without error-code correction; FIG. 7(k) is the fringe-order map obtained with error-code correction; FIG. 7(l) is the absolute-phase map obtained without error-code correction; and FIG. 7(m) is the absolute-phase map obtained with error-code correction.
FIG. 8 is the cross section of row 200 of the code map. FIG. 8(a) is the wrapped phase φ₁; FIG. 8(b) is the wrapped phase φ₂; FIG. 8(c) shows the codewords of the 200th row of the code map; FIG. 8(d) is the fringe order without error correction; and FIG. 8(e) is the fringe order with error correction.
Fig. 9 is a reconstruction result diagram of the dinosaur model, fig. 9 (a) is a reconstruction result diagram of error code correction not performed, and fig. 9 (b) is a reconstruction result diagram of error code correction performed.
Fig. 10 is a graph comparing the reconstruction results of different algorithms of dinosaur model.
Fig. 11 compares the reconstruction results of an insect model under different algorithms. Fig. 11(a) is an acquired sinusoidal pattern; fig. 11(b) is the acquired code map; fig. 11(c) is the wrapped-phase map calculated with the principal value interval (-π/2, π/2); fig. 11(d) is the wrapped-phase map calculated with the principal value interval (-π, π); and fig. 11(e) is the corrected code map.
Fig. 12 is a divided view of different areas on an insect model image.
FIG. 13 shows the results for a girl-sculpture model during reconstruction. FIG. 13(a) is a deformed sinusoidal fringe pattern; FIG. 13(b) is the deformed code map; FIG. 13(c) is the first calculated wrapped-phase map; FIG. 13(d) is the second calculated wrapped-phase map; FIG. 13(e) is the corrected code map; FIG. 13(f) is the calculated mask map; FIG. 13(g) is the calculated g_i; and FIG. 13(h) is the calculated fringe-order map.
Fig. 14 is a graph of reconstruction results of an avatar under different algorithms.
FIG. 15 shows the acquired and calculated results during the reconstruction of isolated objects. FIG. 15(a) is a deformed sinusoidal fringe pattern; FIG. 15(b) is the deformed code map; FIG. 15(c) is the first calculated wrapped-phase map; FIG. 15(d) is the second calculated wrapped-phase map; FIG. 15(e) is the corrected code map; FIG. 15(f) is the calculated mask map; FIG. 15(g) is the calculated g_i; and FIG. 15(h) is the calculated fringe-order map.
Fig. 16 is a graph of error comparisons of the results of reconstruction of isolated objects and different algorithms.
Fig. 17 is a graph comparing the cube reconstruction errors for different algorithms.
Fig. 18 is a cross-sectional view of row 116 of the cube blocks.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the beneficial effects clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
A typical N-step phase-shifting profilometry measurement system is constructed using the 3D contour reconstruction system with a four-codeword code map for assisting phase unwrapping; the measurement system is composed of four hardware parts, namely an FPGA module, an image projection module, an image acquisition module, and a timing control unit, as shown in figure 1.
First, the FPGA module generates N sinusoidal fringe patterns and 1 gray-level code pattern, and the image projection module projects the generated images onto the measured object. The image acquisition module then captures the N deformed sinusoidal fringe patterns and 1 deformed gray-level code pattern. The timing control unit keeps the image acquisition module and the image projection module working synchronously, so that each projected frame is captured at the same time. Finally, the FPGA module processes the captured deformed patterns to obtain the 3D information of the object.
As shown in fig. 2, the FPGA module is divided into image encoding and image decoding.
Let I_n(x, y), n = 0, 1, 2, …, N-1, denote the N acquired sinusoidal fringe patterns; in general, N ≥ 3. I_n(x, y) is generated using the following formula:

I_n(x, y) = A(x, y) + B(x, y)·cos(φ(x, y) + 2πn/N)    (1)

where (x, y) are the pixel coordinates, A(x, y) is the average intensity, B(x, y) is the intensity modulation, and φ(x, y) is the phase information of the fringes on the object surface;
in the image encoding process, in order to obtain a suitable encoding strategy, each period of the projected fringe pattern is first divided. FIG. 3 (a) shows the distribution of the tangent function, the period of which is pi, and the corresponding principal value interval is (-pi/2, pi/2). In order to align with the period of the sine and cosine functions, the period of the tangent function is extended to 2pi. When (when)When the main value interval of (a) is (-pi, pi), the curve distribution of the calculated arctangent function can be obtained by rotating the red frame in fig. 3 (a) as shown in fig. 3 (b). When->When the main value interval of (a) is (-pi/2, pi/2), the curve distribution of the calculated arctangent function can be obtained by rotating the blue frame in fig. 3 (a) as shown in fig. 3 (c). When the argument of the horizontal axis in FIGS. 3 (b) and (c) is made of +.>Changes to->The corresponding curves are shown in fig. 3 (d) and 3 (e). FIG. 3 (d) and FIG. 3 (e) correspond to the calculated wrap phases in FPP with different principal value intervals, respectively denoted +.>And->A sinusoidal fringe pattern with a phase shift of 2 pi/N is projected onto the surface of a three-dimensional diffuse reflection object, and a corresponding deformation pattern is captured by a camera. Fig. 3 (f) is one of the sinusoidal fringe patterns projected in the FPP. Each period of the stripe is according to +.>And->The symbol distribution of (c) is divided into four regions. The division rule of each subarea is shown in a table I. Fig. 3 (g) shows codewords corresponding to each region in the coding mode. Each region corresponds to a different codeword, and different combinations of codewords may represent different stripe levels k. The distribution of codewords is shown in table II, and the distribution of codewords follows the gray code pattern when viewed from the column direction.
Combining formula (1), the wrapped phase is obtained:

φ(x, y) = -arctan[ Σ_{n=0}^{N-1} I_n(x, y)·sin(2πn/N) / Σ_{n=0}^{N-1} I_n(x, y)·cos(2πn/N) ]    (2)

Since equation (2) involves an arctangent operation, φ(x, y) is limited to [-π, π] and is denoted the wrapped phase. A continuous phase is obtained using the fringe order k(x, y):

Φ(x, y) = φ(x, y) + 2π·k(x, y)    (3)
in a two-bit digital phase encoding strategy, each stripe is divided into two parts according to the phase values in the sinogram, and when each half-cycle is encoded using m quantization levels, m is generated 2 Code words g 1 Codeword representing first half, g 2 Representing the codeword of the second half, codeword g 1 And g 2 Embedded in the code map, expressed as:
wherein,as an upward rounding function, T is the number of pixels per cycle, m is the quantization level, mod () is the remainder operation.
Combining the above equations, the wrapped phase is calculated by equation (2) and the code phase by equation (4). After the codewords g₁ and g₂ are obtained from the range of the code phase, the fringe order k is calculated as:
k = g₁ + (g₂ - 1) × 4    (5)
after calculating the fringe order k, the absolute phase is obtained through formula (3);
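A minimal sketch of the fringe-order and absolute-phase steps of equations (5) and (3); the function names are illustrative, and the 1-based codeword range g₁, g₂ ∈ {1, …, 4} is an assumption consistent with k = g₁ + (g₂ - 1) × 4 spanning 1 to 16:

```python
import math

def fringe_order_tdpc(g1, g2):
    """Fringe order from the two half-period codewords, eq. (5):
    k = g1 + (g2 - 1) * 4, with g1, g2 assumed to run over 1..4."""
    return g1 + (g2 - 1) * 4

def absolute_phase(phi_wrapped, k):
    """Absolute phase, eq. (3): Phi = phi + 2*pi*k."""
    return phi_wrapped + 2 * math.pi * k
```

With g₁ = g₂ = 1 the fringe order is 1; with g₁ = g₂ = 4 it reaches 16.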
a single large-range gray-level coding strategy: the computer-generated sinusoidal fringe patterns are projected onto a reference plane through the projector, and the camera captures the deformed fringe patterns; the captured patterns are uploaded to the computer, and the three-dimensional information of the object surface is recovered. Among the N computer-generated sinusoidal fringe patterns, the phase shift between two adjacent patterns is 2π/N; the optical axis OC of the camera is perpendicular to the reference plane, and the optical axis OP of the projector intersects OC at an angle θ, where θ lies in the YOZ plane.
The gray-level coding method obtains the fringe order by projecting an additional code pattern, thereby realizing phase unwrapping. Compared with the Gray-code method and with directly projecting a fringe-order map, the invention balances the quantization level against the number of additionally projected patterns. When measurement speed is the priority, the number of additional code patterns must be reduced, but this increases the required quantization level, and the accuracy of the calculated fringe order is affected by the quantization level. Conversely, when higher measurement accuracy is required, a lower quantization level should be used, but this increases the number of projected patterns. The invention seeks to accurately calculate the fringe order using fewer code patterns with lower quantization levels while ensuring that the range of fringe orders is sufficiently large. On this basis, the wrapped phases φ₁ and φ₂ with different principal value intervals are calculated; according to the signs of φ₁ and φ₂, one period of the sinusoidal fringe pattern is divided into four parts, and a codeword is embedded in each part. The range of the fringe order can reach 16 within one coding cycle. In existing partial methods, when the fringe order reaches 16, the code map must involve 4 quantization levels; with only two quantization levels, the maximum fringe order can reach only 4.
As can be seen from Table II, the codewords g₁, g₂, g₃, and g₄ take only two values (0 and 1). Thus, the code map includes only two quantization levels, which helps suppress quantization errors during decoding. Since four codewords are used to represent the fringe order k, the maximum value of k is 2⁴ = 16. When a larger measurement range and more fringe orders are required, this can be achieved by cycling the encoded codewords. Table II shows two cycles: the value of k' is the same in both periods, while the value of k in the second period is 16 greater than in the first period.
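The 2⁴ = 16 range and the cycling scheme can be illustrated with a toy mapping from the four binary codewords to a fringe order. The bit weighting below is hypothetical: the patent's actual assignment (Table II, which follows a Gray-code pattern) is not reproduced in this text.

```python
def fringe_order_from_codewords(g, cycle=0):
    """Illustrative only: combine four binary codewords (g1..g4) into a
    fringe order k' in 1..16, adding 16 per codeword cycle. The plain
    binary weighting here is an assumption, not the patent's Table II."""
    g1, g2, g3, g4 = g
    k_prime = 1 + g1 + 2 * g2 + 4 * g3 + 8 * g4  # assumed bit weights
    return k_prime + 16 * cycle
```

Whatever the concrete assignment, four binary codewords distinguish 16 orders per cycle, and the second cycle is offset by 16, as stated above.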
Table I subregion partitioning rules
Table II Codewords of the proposed method
The image decoding process is as follows: considering factors such as ambient light and reflectivity, the acquired code pattern is binarized before the fringe order is calculated. The average intensity A(x, y) is calculated from the acquired sinusoidal fringe patterns and used as the binarization threshold:

A(x, y) = (1/N) Σ_{n=0}^{N-1} I_n(x, y)    (6)

The code map I_c is binarized using the following formula:

I_c^b(x, y) = 1 if I_c(x, y) ≥ A(x, y), and 0 otherwise    (7)
by using N sinograms, the wrap phase is calculatedAnd-> And->The values of (2) are truncated in the range (-pi, pi) and (-pi/2, pi/2), respectively, according to Table I>And->For determining the position of four regions within each period of the code map, the four regions being represented by four mask maps, as shown in the following formula:
using 4 mask patterns, a codeword corresponding to each region is calculated by the following formula:
the 4 mask patterns are respectively segmented and marked by using a bwlabel function in MATLAB, each pixel in the same connection domain has the same fringe order, as shown in a table II, after the code word of each region is obtained, the fringe order k is calculated, and the relation between the code word and the fringe order is expressed by the following formula:
Here m = 1, 2, …, M; since each period is divided into four regions, there are four codewords in total, so M = 4. mod() returns the remainder of the division of two numbers. The absolute phase is obtained using the fringe order and the wrapped phase calculated by equation (2):

Φ(x, y) = φ(x, y) + 2π·k(x, y)    (12)
error codeword correction:
in order to reduce the influence of factors such as noise, the invention corrects the acquired code image and some images generated in the absolute phase recovery process.
Firstly, correcting an obtained code image, wherein average intensity A is used as a threshold value for binarizing the code image, correcting a binarized result after neighborhood prediction operation is carried out, filling the edge of an image by using adjacent points, carrying out convolution operation on the filled image and a designed convolution kernel to realize neighborhood prediction, obtaining a neighborhood prediction result, and using the result for correcting the code image, wherein the corrected code image is shown as follows:
where P(x, y) denotes the gray values of the point (x, y) and its 8 neighborhood points, G denotes a 3 × 3 convolution kernel whose border elements are 1 and whose center is 0, and ⌊·⌋ denotes the round-down (floor) operation.
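The corrected-code-image formula itself is not reproduced above. A plausible reading, labeled here as an assumption, is an 8-neighbor majority vote: replicate-pad the edges, convolve with the border-1/center-0 kernel G to sum the 8 neighbors, then apply the floor operation as floor(s/8 + 1/2):

```python
import numpy as np

def neighborhood_correct(binary_img):
    """Correct a binarized code map by 8-neighbour majority vote
    (assumed realization of the patent's neighborhood prediction)."""
    padded = np.pad(binary_img, 1, mode='edge')  # fill edges with adjacent points
    G = np.ones((3, 3), dtype=int)
    G[1, 1] = 0                                  # border 1, center 0
    h, w = binary_img.shape
    out = np.empty_like(binary_img)
    for y in range(h):
        for x in range(w):
            s = int(np.sum(padded[y:y + 3, x:x + 3] * G))  # sum of 8 neighbours
            out[y, x] = (s + 4) // 8                       # floor(s/8 + 1/2)
    return out

noisy = np.ones((5, 5), dtype=int)
noisy[2, 2] = 0                                  # isolated binarization error
cleaned = neighborhood_correct(noisy)
```

An isolated wrong pixel is outvoted by its 8 neighbors and flipped back, while large uniform regions are left unchanged.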
Then each region of the mask maps is corrected using a pixel-by-pixel processing method, and the codeword corresponding to each region is obtained from the corrected mask maps. Because the codeword of each region is a unique value, the correction follows a majority-decision principle: the bwlabel function of MATLAB divides the image into connected regions, every pixel in the same connected region has the same fringe order, and the values of all pixels in each region are corrected by the following formula:
where round[ ] denotes the rounding operation, g_l(x, y) is the codeword of pixel (x, y) in the l-th connected region, ḡ_l is the average of all codewords in the l-th region, and U is the number of codewords in the l-th region.
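The majority-decision rule above can be sketched as follows. The connected-region label map is assumed to be precomputed (it plays the role of MATLAB's bwlabel output); every pixel in region l is replaced by the rounded mean of the codewords in that region:

```python
import numpy as np

def majority_correct(codewords, labels):
    """Force a single codeword per connected region: every pixel in
    region l is replaced by round(mean of all codewords in region l).
    `labels` is an integer label map (0 = background by convention)."""
    out = codewords.astype(float).copy()
    for l in np.unique(labels):
        if l == 0:
            continue
        region = labels == l
        out[region] = np.round(out[region].mean())
    return out.astype(int)

codes = np.array([[3, 3, 3, 2],    # one noisy codeword (2) inside region 1
                  [3, 3, 3, 3],
                  [1, 1, 1, 1]])
labels = np.array([[1, 1, 1, 1],
                   [1, 1, 1, 1],
                   [2, 2, 2, 2]])
fixed = majority_correct(codes, labels)
```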
Finally, the corrected mask_1 and mask_4 are used to correct the wrapped phase calculated by equation (2); the corrected expression is as follows:
after the corrected codeword is obtained, the fringe order is determined using equations (10) and (11), and then the absolute phase can be calculated using equation (12).
To verify the effectiveness of the invention, two groups of simulation experiments were carried out in MATLAB R2016b; the simulations confirm that the proposed coding method calculates the fringe order correctly.
First, a water ripple with a slowly varying surface is measured. The resolution of the simulated camera is 512 × 512 pixels and the sinusoidal fringe pattern contains 16 periods in total. Fig. 5(a) is the measured ripple distribution; Figs. 5(b) and 5(c) show one of the projected sinusoidal patterns and the code pattern; Figs. 5(d) and 5(e) show the deformed sinusoidal pattern and the deformed code pattern modulated by the object. From the acquired deformed fringe patterns, two wrapped phases with different principal value intervals are obtained, as shown in Figs. 5(f) and 5(g). From the wrapped phases φ1 and φ2 the fringe order k is calculated, as shown in Fig. 5(h), and Fig. 5(i) shows the reconstructed ripple distribution. The ripple is reconstructed correctly, with a root-mean-square error of 0.0010 mm between the reconstructed and actual ripple, demonstrating the feasibility and effectiveness of the algorithm.
In addition, an object with a sharply varying surface, composed of two cylinders and one cuboid, is taken as the measurement object; the resolution of the simulated camera is 512 × 512 pixels and the sinusoidal fringe pattern contains 32 periods in total. Fig. 6(a) shows the actual step distribution, Fig. 6(b) the reconstruction result, and Fig. 6(c) the corresponding error distribution; the RMSE is 0.0015 mm. The two simulation experiments show that the invention reconstructs both smooth-surfaced and steep-surfaced objects; the heights of the two objects are close and the RMSE values calculated after reconstruction are also close, fully demonstrating the reliability and effectiveness of the invention.
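The RMSE figures quoted throughout the experiments follow the usual root-mean-square-error definition; for reference, a minimal sketch of how such a reconstruction error would be computed (names illustrative):

```python
import numpy as np

def rmse(reconstructed, ground_truth):
    """Root-mean-square error between a reconstructed surface and the
    ground truth (units follow the inputs, e.g. mm)."""
    return float(np.sqrt(np.mean((reconstructed - ground_truth) ** 2)))

truth = np.zeros((4, 4))
recon = truth + 0.001        # a uniform 0.001 mm deviation
err = rmse(recon, truth)
```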
The invention is verified on a common FPP system consisting of a digital CCD camera (model: MVC1000M), a digital light processing (DLP) projector (model: CB-X18) and a computer. The resolutions of the projector and camera are 1140 × 912 and 1280 × 992 pixels, respectively. The pre-designed fringe patterns are projected onto the object surface by the projector while the deformed fringe patterns are captured by the camera.
To verify the effectiveness of the invention, a dinosaur sculpture model was measured. Fig. 7 shows the images captured and calculated during reconstruction of the dinosaur model. Fig. 7(a) shows one of the sinusoidal fringe patterns and Fig. 7(b) the code pattern acquired by the camera; Figs. 7(c) and 7(d) show the two wrapped phases φ1 and φ2 calculated from the acquired fringe patterns. Fig. 7(e) is the corrected version of Fig. 7(b); Fig. 7(f) shows one of the mask maps and Fig. 7(g) its corrected result; Fig. 7(h) shows one of the codeword maps g_i and Fig. 7(i) its corrected result. Fig. 7(j) shows the fringe order k obtained without error-codeword correction and Fig. 7(k) the corrected result; Fig. 7(l) shows the absolute phase obtained without error-codeword correction and Fig. 7(m) the corrected result. Figs. 8(a)-8(e) show cross sections of the 200th row of Figs. 7(c)-7(e) and 7(j)-7(k), respectively. Fig. 9(a) shows the reconstruction result without error-codeword correction and Fig. 9(b) the result with correction. Comparing Fig. 8(d) with Fig. 8(e), and Fig. 9(a) with Fig. 9(b), demonstrates the effectiveness of the proposed correction algorithm.
Fig. 10 compares the reconstruction results of the dinosaur model obtained by different algorithms; the second row of Fig. 10 shows the first-row views from other angles. The reconstruction result of the 18-step phase shift is taken as the standard value, and points deviating from the true value by more than pi are regarded as error points. As the figure shows, reference [27] (Y. Wu, G. Wu, L. Li, Y. Zhang, H. Luo, S. Yang, J. Yan, and F. Liu, "Inner rotation-phase method for high-speed high-resolution 3-D measurement," IEEE Trans. Instrum. Meas., vol. 69, pp. 7233-7239, 2020) has the worst reconstruction, with 782 error points. The reconstructions of references [26] (C. Yuan, H. Xu, P. Zhang, Y. Fu, K. Zhong, and W. Zhang, "3D measurement method based on S-shaped segmental phase encoding," Opt. Laser Technol., vol. 121, p. 105781, 2020) and [16] (Z. Miao and Q. Zhang, "Dual-frequency fringe for improving measurement accuracy of three-dimensional shape measurement," Chin. Opt. Lett., 2021) are better, but both contain obvious error points, 30 and 37 respectively. Reference [29] (Y. Zheng, Y. Jin, M. Duan, C. Zhu, and E. Chen, "Joint coding strategy of the phase domain and intensity domain for absolute phase retrieval," IEEE Trans. Instrum. Meas., vol. 70, pp. 1-8, 2021) and the proposed method give the best reconstructions, with the fewest error points. The two methods have the same number of error points, but the proposed method uses fewer projection patterns than reference [29]. The proposed method therefore reconstructs better than the other methods.
To further verify the universality and effectiveness of the proposed algorithm, an insect model was reconstructed with the different methods; the results are shown in Fig. 11. The reconstruction of reference [27] is again the worst, while the other methods give similar results. From the calculated RMSE values, the error of reference [27] is highest, followed by references [26] and [16], while reference [29] and the method of the invention reconstruct best. To compare the reconstruction performance in more detail, the reconstruction errors of different regions of the insect model were calculated for each method; the results are given in Table III and Fig. 12. Table III shows that for all selected regions the RMSE of reference [27] is highest and the RMSE of reference [29] and the proposed method is lowest, with reference [16] smaller than reference [26] in most regions. This experiment demonstrates the reliability and accuracy of the proposed method.
Table III insect model reconstruction error (RMSE/mm) comparison of different algorithms
In the experiment on the girl sculpture model, Fig. 13(a) is one of the acquired sinograms and Fig. 13(b) the acquired code pattern. The wrapped phases calculated from the sinograms are shown in Figs. 13(c) and 13(d); their principal value intervals are (-π/2, π/2) and (-π, π), respectively. The corrected code pattern is shown in Fig. 13(e); Fig. 13(f) shows one of the calculated mask maps, Fig. 13(g) one of the calculated codeword maps g_i, and Fig. 13(h) the calculated fringe order k.
The reconstruction results obtained by the different methods are shown in Fig. 14; the second row of Fig. 14 shows the first row from other angles. Reference [27] has the largest number of error points, 2006 in total; references [26] and [16] also have many, 386 and 293 respectively; reference [29] and the proposed method have the fewest error points and the best reconstruction. The figure also shows that the errors of these two methods concentrate in the edge region, mainly because of the separation of background and object. The experiment demonstrates the feasibility and effectiveness of the invention for objects with complex surfaces.
In the experiment on two isolated objects, Fig. 15(a) is one of the acquired sinograms and Fig. 15(b) the acquired code pattern; Figs. 15(c) and 15(d) show the two wrapped phases calculated from the sinograms; Fig. 15(e) shows the corrected code pattern, Fig. 15(f) one of the calculated mask maps, Fig. 15(g) one of the calculated codeword maps g_i, and Fig. 15(h) the calculated fringe order k.
The first row of Fig. 16 shows the reconstruction results of the proposed method and the other four methods; the second row shows the corresponding error maps, taking the 18-step phase-shift reconstruction as the standard. Reference [27] has the largest number of error points, 2006 in total, and the surface of its reconstructed object fluctuates noticeably; references [26] and [16] have clearly fewer error points than reference [27], but still more than reference [29] and the proposed method. The experiment demonstrates the feasibility and effectiveness of the proposed method for measuring isolated objects.
Table IV "block gauge" reconstruction results and error comparison of different algorithms
To further evaluate the accuracy of the invention, a standard cube with a height of 15 mm was measured.
Table IV shows the reconstruction results and error comparison of the different methods. Reference [27] has the largest number of error points, the largest deviation of the mean reconstructed height from the standard value, and the largest corresponding RMSE. The reconstructions of reference [29] and the proposed method contain no error points, and their average heights are closest to the standard value. From the calculated RMSE results, the error of reference [27] is largest, references [26] and [16] are relatively smaller, and reference [29] and the proposed method show the smallest errors. Fig. 17 plots the error distributions of the different methods for a more intuitive comparison; it confirms that reference [29] and the proposed method have the smallest reconstruction errors, further verifying the accuracy of the proposed method.
Fig. 18 shows a cross section along the 116th row of the standard block; the boxed insets are enlargements of the corresponding regions at left. At row 116 the reconstructions of the proposed method and reference [29] are error-free, while the reconstructed surface of reference [27] fluctuates significantly, demonstrating the effectiveness of the proposed method.
The invention is compared with other encoding methods and analyzed in detail in the following aspects.
Table V number of codewords and number of required patterns for different methods
Number of codewords: because the invention uses four codewords to determine the fringe order, a large number of codewords can be generated with few projections. Table V lists the number of codewords and the number of projection patterns used by each method. The proposed method generates more codewords with fewer patterns than references [26] and [29]. Reference [27] encodes the sinusoidal pattern itself, which lowers the accuracy and reconstruction quality of the result. The proposed method requires the same number of projections as reference [16], but its coding range is wider. The advantage of the proposed method is therefore that more codewords are generated with fewer patterns.
Comparison and evaluation: all of the encoding methods, including the invention, are based on phase-shift algorithms. In practice the measurement accuracy depends on the algorithm used and on the quality of the acquired fringe patterns. Reference [27] encodes the sinusoidal fringe pattern and therefore loses accuracy, whereas the measurement accuracy of the proposed algorithm equals that of the other phase-encoding methods.
The foregoing description of the preferred embodiment of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (3)

1. A 3D contour reconstruction system using a four-codeword code pattern to assist phase unwrapping, characterized by comprising four hardware parts: an FPGA module, an image projection module, an image acquisition module and a timing control unit; the FPGA module is used to generate N sinusoidal fringe patterns and 1 gray-scale code pattern; the image projection module projects the generated patterns onto the measured object; the image acquisition module acquires the N deformed sinusoidal fringe patterns and 1 deformed gray-scale code pattern; the timing control unit controls the image acquisition module and the image projection module to work synchronously, so that an image is acquired synchronously while each frame is projected; and the acquired deformed patterns are processed by the FPGA module to obtain the 3D information of the object.
2. The 3D contour reconstruction system using a four-codeword code pattern to assist phase unwrapping according to claim 1, characterized in that the FPGA module is divided into image encoding and image decoding; in the image encoding process, sinusoidal fringe patterns with a phase shift of 2π/N are projected onto the surface of a three-dimensional diffusely reflecting object, and the corresponding deformed patterns captured by the camera are expressed as:

I_n(x, y) = A(x, y) + B(x, y)·cos[φ(x, y) + 2πn/N], n = 1, 2, …, N  (1)
where (x, y) are the pixel coordinates, A(x, y) is the average intensity, B(x, y) is the intensity modulation, and φ(x, y) is the phase of the fringes on the object surface;
combining formula (1), the wrapped phase is obtained:

φ(x, y) = arctan[-Σ I_n(x, y)·sin(2πn/N) / Σ I_n(x, y)·cos(2πn/N)], summed over n = 1, …, N  (2)
the wrapped phase φ(x, y) is limited to [-π, π]; a continuous phase is obtained using the fringe order k(x, y), denoted as:

Φ(x, y) = φ(x, y) + 2π·k(x, y)  (3)
in the two-bit digital phase encoding strategy, each fringe is divided into two parts according to the phase values of the sinogram; when each half period is encoded with m quantization levels, m² codewords are generated, where g1 denotes the codeword of the first half and g2 the codeword of the second half; the codewords g1 and g2 are embedded in the code pattern, expressed as:
where T is the number of pixels in each period, m is the quantization level, and mod() is the remainder operation;
according to the above formula, the wrapped phase is calculated by formula (2) and the code phase is recovered from the code pattern; after the codewords g1 and g2 are obtained from the range of the code phase, the fringe order k is calculated as:
k = g1 + (g2 - 1) × 4  (5)
after the fringe order k is calculated, the absolute phase is obtained by formula (3);
the sinusoidal fringe patterns generated by the computer are projected onto the reference plane by the projector, the deformed fringe patterns are captured by the camera, and the captured patterns are uploaded to the computer to recover the three-dimensional information of the object surface; among the N sinusoidal fringe patterns generated by the computer, the phase shift between two adjacent patterns is 2π/N; the optical axis OC of the camera is perpendicular to the reference plane, the optical axis OP of the projector intersects the optical axis OC of the camera, and the included angle θ lies in the YOZ plane;
the image decoding process is as follows: before the fringe order is calculated, the acquired code pattern is binarized; the average intensity A(x, y) is calculated from the acquired sinusoidal fringe patterns and used as the threshold, A(x, y) being given by:

A(x, y) = (1/N)·Σ I_n(x, y), summed over n = 1, …, N
the code pattern I_c is binarized using the following formula:

I_c^b(x, y) = 1 if I_c(x, y) ≥ A(x, y), and I_c^b(x, y) = 0 otherwise;
using the N sinograms, two wrapped phases φ1 and φ2 are calculated; the values of φ1 are truncated to the (-π, π) range and the values of φ2 to the (-π/2, π/2) range; φ1 and φ2 are used to determine the positions of the four regions in each period of the code pattern, the four regions being represented by four mask maps, with the specific formula as follows:
using 4 mask patterns, a codeword corresponding to each region is calculated by the following formula:
the 4 mask maps are segmented and labeled using the bwlabel function in MATLAB, and every pixel in the same connected domain has the same fringe order; after the codeword of each region is obtained, the fringe order k is calculated, the relation between the codewords and the fringe order being expressed by the following formula:
here m = 1, 2, …, M; since each period is divided into four regions, there are four codewords in total, so M = 4, and mod() returns the remainder of the division of two numbers; the absolute phase is obtained from the fringe order and the wrapped phase calculated by formula (2), and is expressed as:

Φ(x, y) = φ(x, y) + 2π·k(x, y)
3. The 3D contour reconstruction system using a four-codeword code pattern to assist phase unwrapping according to claim 2, characterized in that the FPGA module further comprises error codeword correction: the acquired code image is corrected, with the average intensity A used as the threshold for binarizing the code image; the binarized result is corrected after a neighborhood-prediction operation: the image edges are filled using adjacent points, the padded image is convolved with a designed convolution kernel to realize the neighborhood prediction, and the prediction result is used to correct the code image, the corrected code image being expressed as:
where P(x, y) denotes the gray values of the point (x, y) and its 8 neighborhood points, G denotes a 3 × 3 convolution kernel whose border elements are 1 and whose center is 0, and ⌊·⌋ denotes the round-down (floor) operation;
each region of the mask maps is corrected using a pixel-by-pixel processing method, and the codeword corresponding to each region is obtained from the corrected mask maps; because the codeword of each region is a unique value, the bwlabel function of MATLAB divides the image into connected regions, every pixel in the same connected region has the same fringe order, and the values of all pixels in each region are corrected by the following formula:
where round[ ] denotes the rounding operation, g_l(x, y) is the codeword of pixel (x, y) in the l-th connected region, ḡ_l is the average of all codewords in the l-th region, and U is the number of codewords in the l-th region;
finally, the corrected mask_1 and mask_4 are used to correct the wrapped phase calculated by formula (2), the corrected expression being as follows:
after the corrected codeword is obtained, the fringe order is determined using equations (10) and (11), and then the absolute phase is calculated using equation (12).
CN202311534513.6A 2023-11-16 2023-11-16 3D contour reconstruction system adopting four-codeword coding diagram to assist phase unwrapping Pending CN117781925A (en)

Publications (1)

Publication Number Publication Date
CN117781925A true CN117781925A (en) 2024-03-29



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination