CN111402240A - Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning - Google Patents


Info

Publication number
CN111402240A
Authority
CN
China
Prior art keywords
layer
data
input
cnn
phase
Prior art date
Legal status
Pending
Application number
CN202010194707.6A
Other languages
Chinese (zh)
Inventor
左超
钱佳铭
陈钱
冯世杰
李艺璇
陶天阳
胡岩
尚昱昊
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010194707.6A priority Critical patent/CN111402240A/en
Publication of CN111402240A publication Critical patent/CN111402240A/en
Priority to PCT/CN2020/115539 priority patent/WO2021184707A1/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2509: Color coding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]


Abstract

The invention discloses a three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning, built on a convolutional neural network model CNN whose input comprises three channels, namely the grayscale fringe images in the red, green and blue channels of a color fringe image. A projector projects 12-step phase-shift fringes of three different frequencies, and the phase-shift (PS) method and the projection distance minimization (PDM) method are used to generate the training data required by the CNN. In use, the three channel grayscale fringe images of a color fringe image are input to the CNN, which outputs a numerator term, a denominator term and a low-precision absolute phase containing the fringe-order information. The numerator and denominator terms are substituted into an arctangent function, and high-precision absolute phase information is obtained by combining the result with the low-precision absolute phase. The present invention provides more accurate phase information and more reliable phase unwrapping without any complex pre-/post-processing.

Description

Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning
Technical Field
The invention belongs to the technical field of optical measurement, and particularly relates to a three-dimensional surface type measuring method of single-frame color fringe projection based on deep learning.
Background
Fringe Projection Profilometry (FPP) is one of the most widely used three-dimensional (3D) measurement techniques owing to its simple hardware configuration, flexible implementation and high measurement accuracy. In recent years, with the growing demand for 3D information acquisition in high-speed scenes in applications such as online quality inspection and rapid reverse engineering, high-speed 3D shape measurement based on FPP has become increasingly important (Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry, S. Feng et al.).
In order to realize 3D imaging of high-speed scenes, the measurement efficiency must be improved by reducing the number of fringe patterns required for a single three-dimensional reconstruction. The ideal approach is to recover a high-quality 3D absolute surface of the object from a single image. Color-coded projection (Z. Zhang) has great advantages in dynamic scene measurement, because it can encode three independent fringe images in the red, green and blue channels, tripling the imaging efficiency relative to the traditional monochromatic projection mode. To fully exploit the color image channels, scholars have proposed many single-frame color-coded projection techniques (Composite phase-shifting algorithm for three-dimensional mapping, N. Karpinsky et al.). However, these techniques are rarely suitable for high-precision measurement of complex objects. On the one hand, to obtain high-precision phase information, the phase-shift (PS) method, with its high measurement resolution, is the preferred choice (Phase shifting algorithms for fringe projection profilometry: A review, C. Zuo et al.). However, the PS method requires at least three fringe images, which occupy all channels of the RGB image, so the phase ambiguity can only be removed by spatial phase unwrapping, which fails on isolated phase regions (Color-encoded digital fringe projection technique for high-speed 3-D surface contouring, P. S. Huang et al.). On the other hand, to achieve stable phase unwrapping, a fringe pattern is usually combined with Gray-code patterns or with multi-frequency fringe images. The former still cannot unwrap the phase reliably, because the edges of the Gray-code patterns are difficult to identify (projected fringe profilometry for spatially isolated and dynamic objects, W. H. Su). The latter can recover the absolute phase by number-theoretical 3-fringe selection (Optimum frequency selection in multifrequency interferometry, D. P. Towers et al.), but the phase accuracy is poor because it relies on the Fourier transform (FT) method, a single-frame approach whose phase quality degrades at discontinuities and in isolated regions of the phase map. In addition, color-coded projection has inherent drawbacks such as chromatic aberration between channels and color crosstalk, which degrade the quality of the phase calculation. Although researchers have proposed pre-processing methods to compensate for these defects, their impact on the measurement can only be reduced to a limited extent.
From the above analysis, although color-coded projection has great potential for single-frame three-dimensional measurement, three color channels alone are not enough to encode fringe images that simultaneously permit high-quality phase retrieval and stable phase unwrapping; moreover, the chromatic aberration and color crosstalk inherent to the technique are difficult to eliminate by conventional means.
Disclosure of Invention
The invention aims to provide a three-dimensional surface type measuring method of single-frame color fringe projection based on deep learning.
The technical solution for realizing the purpose of the invention is as follows: a single-frame color fringe projection phase unwrapping method based on deep learning specifically comprises the following steps:
step 1: constructing a model CNN based on a convolutional neural network;
step 2: generating CNN model training data, and training the model CNN;
step 3: inputting the grayscale images in the three channels of the composite color fringe image of the measured object into the trained model CNN to obtain a numerator term, a denominator term and a low-precision absolute phase; substituting the numerator term and the denominator term into an arctangent function and combining the result with the low-precision absolute phase to obtain the final absolute phase information.
Preferably, the model CNN includes five data processing paths, a connection layer 1 and a convolutional layer 11, where:
the data processing path 1 is arranged so that the input data passes sequentially through the convolutional layer 1 and the residual module 1; the data output by the residual module 1 and the data output by the convolutional layer 1 are input into the convolutional layer 2, and the output data of the convolutional layer 2 is input into the connection layer 1;
the data processing path 2 is arranged so that the input data passes sequentially through the convolutional layer 3, the pooling layer 1, the residual module 2 and the up-sampling layer 1; the data output by the up-sampling layer 1 and the data output by the pooling layer 1 are input into the convolutional layer 4, and the data output by the convolutional layer 4 is input into the connection layer 1;
the data processing path 3 is arranged so that the input data passes sequentially through the convolutional layer 5, the pooling layer 2, the residual module 3, the up-sampling layer 2 and the up-sampling layer 3; the data output by the up-sampling layer 3 and the data output by the pooling layer 2 are input into the convolutional layer 6, and the data output by the convolutional layer 6 is input into the connection layer 1;
the data processing path 4 is arranged so that the input data passes sequentially through the convolutional layer 7, the pooling layer 3, the residual module 4, the up-sampling layer 5 and the up-sampling layer 6; the data output by the up-sampling layer 6 and the data output by the pooling layer 3 are input into the convolutional layer 8, and the data output by the convolutional layer 8 is input into the connection layer 1;
the data processing path 5 is arranged so that the input data passes sequentially through the convolutional layer 9, the pooling layer 4, the residual module 5 and the up-sampling layers 7, 8, 9 and 10; the data output by the up-sampling layer 10 and the data output by the pooling layer 4 are input into the convolutional layer 10, and the data output by the convolutional layer 10 is input into the connection layer 1;
the connection layer 1 concatenates the data of the five paths and inputs it to the convolutional layer 11, which outputs a 3D tensor with 3 channels.
Preferably, the pooling layers 1, 2, 3 and 4 down-sample the data to 1/2, 1/4, 1/8 and 1/16 of its original resolution, respectively.
Preferably, the specific method for generating the CNN model training data is as follows:
step 2.1: projecting 37 fringe images to an object using a projector, the 37 fringe images comprising 12 frequencies fRGreen phase shifted fringe image of
Figure BDA0002417177210000031
Frequency f of 12 amplitudesGGreen phase shifted fringe image of
Figure BDA0002417177210000032
And 12 frequencies fBGreen phase shifted fringe image of
Figure BDA0002417177210000033
And 1 composite color stripe image IRGBThe red channel is at frequency fRGray scale stripe image IRGreen channel is frequency fGGray scale stripe image IGBlue channel is frequency fBGray scale stripe image IB
Step 2.2: using a color camera to acquire 37 fringe images modulated by an object and generate a set of input and output data required for training CNN, specifically:
step 2.2.1: for the first 36 collected green stripe images
Figure BDA0002417177210000034
Using phase shift separately(PS) method for obtaining frequency fR、fG、fBWrapped phase of
Figure BDA0002417177210000035
Obtaining frequency f by PDM methodGAbsolute phase of (phi)GWill frequency fGMolecular item M ofGDenominator term DGAnd absolute phase phiGAs a set of standard data for model CNN.
Step 2.2.2, the collected 37 th composite color stripe image IRGBGray scale image I in three channelsR、IG、IBAs a set of input data of the network CNN;
step 2.3: and (5) repeating the steps 2.1 and 2.2 to generate training data with set number.
Preferably, the specific method for training the model CNN is as follows:
the grayscale images I_R, I_G, I_B in the three channels of the 37th composite color fringe image are taken as the model CNN input data, and the numerator term M_G, denominator term D_G and absolute phase Φ_G of frequency f_G are taken as the model CNN ground-truth data; the difference between the ground-truth data and the model CNN output is calculated, and the internal parameters of the CNN are iteratively optimized with back-propagation until the loss function converges.
Preferably, substituting the numerator term and the denominator term into the arctangent function and combining the result with the low-precision absolute phase to obtain the final absolute phase information specifically comprises:
substituting the numerator term and the denominator term into the arctangent function to obtain the wrapped phase;
combining the wrapped phase and the low-precision absolute phase to obtain the final absolute phase by

Φ_G = φ_G + 2π · Round[(Φ_G^CNN − φ_G) / (2π)]

where Round denotes rounding to the nearest integer, Φ_G is the final absolute phase, φ_G is the wrapped phase, and Φ_G^CNN is the low-precision absolute phase output by the model CNN.
Compared with the prior art, the invention has the following remarkable advantages: (1) the invention realizes high-precision phase information acquisition and stable phase unwrapping simultaneously from a single color image; (2) the invention automatically compensates the chromatic aberration and color crosstalk between color channels without any complex pre-/post-processing of the system.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a structural schematic diagram of the CNN.
FIG. 3 is a graph comparing the results of the present invention and the conventional method.
Detailed Description
A three-dimensional surface type measuring method of single-frame color fringe projection based on deep learning obtains high-precision absolute phase information through a single-frame color fringe image, and comprises the following steps:
step 1: constructing a model CNN based on a convolutional neural network.
Specifically, the model CNN is constructed as shown in fig. 2, where H denotes the height (pixel) of the image, W denotes the width of the image, and C denotes the number of channels, which is equal to the number of filters used. The input to the model CNN is a 3D tensor having three channels, and the output is also a 3D tensor having three channels. The model CNN includes five data processing paths, a connection layer 1, and a convolutional layer 11.
In a further embodiment, the data processing path 1 is arranged so that the input data passes sequentially through the convolutional layer 1 and the residual module 1; the data output by the residual module 1 and the data output by the convolutional layer 1 are input into the convolutional layer 2, and the output data of the convolutional layer 2 is input into the connection layer 1.
The data processing path 2 is arranged so that the input data passes sequentially through the convolutional layer 3, the pooling layer 1, the residual module 2 and the up-sampling layer 1; the data output by the up-sampling layer 1 and the data output by the pooling layer 1 are input into the convolutional layer 4, and the data output by the convolutional layer 4 is input into the connection layer 1.
The data processing path 3 is arranged so that the input data passes sequentially through the convolutional layer 5, the pooling layer 2, the residual module 3, the up-sampling layer 2 and the up-sampling layer 3; the data output by the up-sampling layer 3 and the data output by the pooling layer 2 are input into the convolutional layer 6, and the data output by the convolutional layer 6 is input into the connection layer 1.
The data processing path 4 is arranged so that the input data passes sequentially through the convolutional layer 7, the pooling layer 3, the residual module 4, the up-sampling layer 5 and the up-sampling layer 6; the data output by the up-sampling layer 6 and the data output by the pooling layer 3 are input into the convolutional layer 8, and the data output by the convolutional layer 8 is input into the connection layer 1.
The data processing path 5 is arranged so that the input data passes sequentially through the convolutional layer 9, the pooling layer 4, the residual module 5 and the up-sampling layers 7, 8, 9 and 10; the data output by the up-sampling layer 10 and the data output by the pooling layer 4 are input into the convolutional layer 10, and the data output by the convolutional layer 10 is input into the connection layer 1.
The specific construction of each residual module follows Deep residual learning for image recognition, K. He et al.
specifically, the pooling layers 1,2, 3, 4 and 5 respectively down-sample the data by 1/2, 1/4, 1/8 and 1/16 to improve the recognition capability of the model to the features while keeping the number of channels unchanged.
Specifically, the up-sampling layers 1 to 10 up-sample the resolution of the data, doubling its height and width at each layer, so as to restore the original resolution of the image.
Subsequently, the connection layer 1 concatenates the data from the five paths. Finally, a 3D tensor with 3 channels is output through the convolutional layer 11.
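For concreteness, the following PyTorch sketch reproduces this five-path layout. Only the three input/output channels, the channel width C = 64 given in the embodiment, and the 1/2 to 1/16 pooling factors come from the text; the kernel sizes, the two-convolution residual block, the single-step up-sampling, and the up-sampling of the pooled skip branch (so that the two inputs of each path's final convolution share one resolution) are assumptions of this illustration, not details fixed by the patent.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual module in the style of He et al. (two-conv design is assumed)."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class Path(nn.Module):
    """One processing path: conv -> pool(1/2**k) -> residual -> upsample -> conv."""
    def __init__(self, c=64, down=0):
        super().__init__()
        scale = 2 ** down
        self.head = nn.Conv2d(3, c, 3, padding=1)
        self.pool = nn.MaxPool2d(scale) if down else nn.Identity()
        self.res = ResBlock(c)
        self.up = nn.Upsample(scale_factor=scale) if down else nn.Identity()
        self.tail = nn.Conv2d(2 * c, c, 3, padding=1)
    def forward(self, x):
        h = self.pool(self.head(x))   # pooling-layer output
        u = self.up(self.res(h))      # residual module + up-sampling output
        skip = self.up(h)             # pooled skip, up-sampled back (assumption)
        return self.tail(torch.cat([u, skip], dim=1))

class FringeCNN(nn.Module):
    """Five paths -> concatenation ("connection layer 1") -> conv layer 11."""
    def __init__(self, c=64):
        super().__init__()
        self.paths = nn.ModuleList(Path(c, down=k) for k in range(5))
        self.fuse = nn.Conv2d(5 * c, 3, 3, padding=1)  # outputs M, D, coarse phase
    def forward(self, x):
        # x: (B, 3, H, W); H and W must be divisible by 16 (480 x 640 is)
        return self.fuse(torch.cat([p(x) for p in self.paths], dim=1))
```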
Step 2: generating training data and training a model CNN, and specifically comprising the following steps:
step 2.1: the projector projects 37 fringe images (including 36 monochromatic fringe images and one compound fringe image) onto the object.
Projecting 37 fringe images to an object using a projector, the 37 fringe images comprising 12 frequencies fRGreen phase shifted fringe image of
Figure BDA0002417177210000061
Frequency f of 12 amplitudesGGreen phase shifted fringe image of
Figure BDA0002417177210000062
And 12 frequencies fBGreen phase shifted fringe image of
Figure BDA0002417177210000063
And 1 composite color stripe image IRGBThe red channel is at frequency fRGray scale stripe image IRGreen channel is frequency fGGray scale stripe image IGBlue channel is frequency fBGray scale stripe image IB
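For illustration, a Python sketch that generates these 37 patterns follows. The projector resolution and the fringe frequencies are taken from the embodiment below; the 8-bit intensity range, the cosine profile, the zero phase offset of the composite pattern and all names are assumptions of this illustration.

```python
import numpy as np

H_PROJ, W_PROJ = 1140, 912          # LightCrafter 4500 resolution (embodiment)
FREQS = {"R": 9, "G": 11, "B": 13}  # fringe frequencies f_R, f_G, f_B
N_STEPS = 12                        # 12-step phase shifting

x = np.arange(W_PROJ) / W_PROJ      # normalized projector column coordinate

def fringe(freq, shift):
    """One sinusoidal fringe pattern, intensities in [0, 255], fringes along x."""
    row = 127.5 + 127.5 * np.cos(2 * np.pi * freq * x - shift)
    return np.tile(row, (H_PROJ, 1))

# 36 monochrome patterns projected in green: 12 phase shifts per frequency.
green_patterns = {
    ch: [fringe(f, 2 * np.pi * n / N_STEPS) for n in range(1, N_STEPS + 1)]
    for ch, f in FREQS.items()
}

# 1 composite color pattern: one fringe frequency per RGB channel.
composite = np.stack([fringe(FREQS[ch], 0.0) for ch in ("R", "G", "B")], axis=-1)
```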
Step 2.2: using a color camera to acquire 37 fringe images modulated by an object and generate a set of input and output data required for training CNN, specifically:
step 2.2.1: for the first 36 collected green fringe images I_n^R, I_n^G, I_n^B, the wrapped phases φ_R, φ_G, φ_B of frequencies f_R, f_G, f_B are obtained with the PS method:

φ_h = arctan(M_h / D_h)

M_h = Σ_{n=1}^{12} I_n^h · sin(2πn / 12)

D_h = Σ_{n=1}^{12} I_n^h · cos(2πn / 12)

where I_n^h (h ∈ {R, G, B}; n = 1, 2, ..., 12) denotes the n-th green fringe image of frequency f_h, and M and D denote the numerator term and denominator term of the arctangent function, respectively.
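In code, this PS computation reduces to two weighted sums and an arctangent. The NumPy sketch below assumes the 12 captured images of one frequency are stacked into an (N, H, W) array; the function name and the use of arctan2 (which returns the wrapped phase directly in (−π, π]) are choices of this illustration.

```python
import numpy as np

def ps_numerator_denominator(images):
    """N-step phase shift: return numerator M, denominator D and wrapped phase.

    images: (N, H, W) array holding the N captured phase-shift fringe images.
    """
    n_steps = images.shape[0]
    n = np.arange(1, n_steps + 1).reshape(-1, 1, 1)
    M = np.sum(images * np.sin(2 * np.pi * n / n_steps), axis=0)
    D = np.sum(images * np.cos(2 * np.pi * n / n_steps), axis=0)
    wrapped = np.arctan2(M, D)   # wrapped phase in (-pi, pi]
    return M, D, wrapped
```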
After the wrapped phases φ_R, φ_G, φ_B of the three frequencies are acquired, the absolute phase Φ_G of frequency f_G is obtained by the PDM method (Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second, C. Zuo et al.). The absolute phase Φ_G obtained here is free from inter-channel chromatic aberration and color crosstalk, because only a single-color fringe image is used. The computed numerator term M_G, denominator term D_G and absolute phase Φ_G of frequency f_G are taken as a set of ground-truth data for the CNN.
Step 2.2.2, for the 37 th acquired composite color stripe image IRGBThe gray level images I in the three channels thereofR、IG、IBAs a set of input data of the network CNN;
step 2.3: steps 2.1 and 2.2 are repeated to generate 1000 groups of training data.
Step 2.4: training a CNN: the gray level image I in the three channels of the 37 th composite color stripe imageR、IG、IBAs input data, MG、DG、ΦGThe model CNN is entered as standard data. The difference between the standard value and the CNN output value is calculated using the mean square error as a loss function. And combining a back propagation method, and iteratively optimizing the internal parameters of the CNN repeatedly until the loss function is converged, wherein the internal parameters are optimizedThe model CNN training is finished. In the training process of the model, except for the convolutional layer 11, the activation functions used in the other convolutional layers are all linear rectification functions (Relu). And when the loss function is optimized in an iterative manner, searching the minimum value of the loss function by adopting an Adam algorithm.
step 3: the trained model CNN is used to realize three-dimensional measurement of the measured object, specifically:
step 3.1: information for calculating high-precision wrapped phases and for unwrapping is acquired simultaneously.
The grayscale images I_R, I_G, I_B in the three channels of the composite color fringe image of the measured object are input into the trained CNN to obtain the numerator term M_G and the denominator term D_G used to calculate high-precision wrapped phase information, together with the low-precision absolute phase Φ_G^CNN containing the fringe-order information (its error lies between −π and π);
step 3.2: obtaining high precision absolute phase
Step 3.2.1M obtained according to step 3.1GAnd DGObtaining a high-precision wrapped phase by the formula (2)
Figure BDA0002417177210000071
The reason that this strategy can provide high precision phase information is that: the structure of the numerator term and the denominator term corresponding to the prediction arctan function overcomes the difficulty of reproducing 2 pi phase winding in the wrapping phase.
Step 3.2.2 obtaining the high precision absolute phase phi by the following formulaG
Figure BDA0002417177210000072
In the formula, Round represents a rounding operation.
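The whole inference step of formulas (2) and (3) then fits in a few lines; a sketch under the same assumed array conventions as above:

```python
import numpy as np

def absolute_phase(M, D, phi_coarse):
    """Fuse the CNN outputs into the final high-precision absolute phase.

    M, D:       numerator / denominator maps predicted by the CNN
    phi_coarse: low-precision absolute phase predicted by the CNN
    """
    wrapped = np.arctan2(M, D)                              # formula (2)
    order = np.round((phi_coarse - wrapped) / (2 * np.pi))  # fringe order k
    return wrapped + 2 * np.pi * order                      # formula (3)
```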
After the absolute phase is obtained, three-dimensional reconstruction can be performed using the calibration parameters between the color camera and the projector (calibration of fringe projection profilometry with a bundle adjustment strategy).
The invention can acquire a high-precision absolute phase by projecting only a single color fringe image, thereby realizing three-dimensional surface measurement of the measured object. The invention first constructs a model based on a convolutional neural network, referred to herein as CNN. The input of the CNN comprises three channels, namely the grayscale fringe images in the red, green and blue channels of the color fringe image; the output data are the numerator term and denominator term used for calculating high-precision phase information and a low-precision absolute phase containing the fringe-order information. During training, a projector projects 12-step phase-shift fringes of three different frequencies, and the PS method and the projection distance minimization (PDM) method are used to generate the training data required by the CNN. After training, the three channel grayscale fringe images of a color fringe image are input to the CNN to obtain the numerator term and denominator term for calculating high-precision phase information and the low-precision absolute phase containing the fringe-order information. The numerator and denominator terms are substituted into an arctangent function, the high-precision absolute phase information is obtained by combining the result with the low-precision absolute phase, and finally three-dimensional reconstruction is performed.
Example:
to demonstrate the effectiveness of the present invention, a digital raster projection apparatus was constructed to capture color fringe images based on a color camera (model acA640-750uc, Basler, resolution 640 × 480), a projector (model L ightcraft 4500, TI, resolution 912 × 1140) and a computer, CNN was constructed with H, W, C of 480, 640, 64, using three fringe frequencies f, andR、fG、fBrespectively 9, 11, 13. During training data, 1000 groups of data are collected, 800 groups of data are used for training in the training process, and the remaining 200 groups of data are used for verification. After the training is finished, in order to verify the effectiveness of the invention, 2 scenes which are not seen in the training are selected as tests. To embody the advantages of the present invention, the present invention is compared with a conventional color stripe encoding method (Snapshot color project for absolute three-dimensional accuracy of video sequences, author Zhang Zonghua, etc.), and the results of the monochrome 12-step PS method and the PDM are selected as reference results. FIG. 3 shows the measurement results, where 3(a) and 3(e) are the corresponding of two scenariosComposite color images, 3(b) and 3(f) are the results measured by the conventional color stripe encoding method, 3(c) and 3(g) are the results of the method, and 3(d) and 3(h) are the reference results. As can be seen from the results, the invention can obtain more accurate absolute phase reconstruction, and the final three-dimensional reconstruction quality can be even comparable with the results obtained by the PS method and the PDM method. It should be noted that the invention only uses 1 color composite stripe image, and the method for reference results uses 36 stripe images.

Claims (6)

1. A single-frame color fringe projection phase unwrapping method based on deep learning is characterized by comprising the following specific steps:
step 1: constructing a model CNN based on a convolutional neural network;
step 2: generating CNN model training data, and training the model CNN;
step 3: inputting the grayscale images in the three channels of the composite color fringe image of the measured object into the trained model CNN to obtain a numerator term, a denominator term and a low-precision absolute phase; substituting the numerator term and the denominator term into an arctangent function and combining the result with the low-precision absolute phase to obtain the final absolute phase information.
2. The method for measuring three-dimensional surface type based on deep learning single-frame color fringe projection of claim 1, wherein the model CNN comprises five data processing paths, a connection layer 1 and a convolution layer 11, wherein:
the data processing path 1 is arranged so that the input data passes sequentially through the convolutional layer 1 and the residual module 1; the data output by the residual module 1 and the data output by the convolutional layer 1 are input into the convolutional layer 2, and the output data of the convolutional layer 2 is input into the connection layer 1;
the data processing path 2 is arranged so that the input data passes sequentially through the convolutional layer 3, the pooling layer 1, the residual module 2 and the up-sampling layer 1; the data output by the up-sampling layer 1 and the data output by the pooling layer 1 are input into the convolutional layer 4, and the data output by the convolutional layer 4 is input into the connection layer 1;
the data processing path 3 is arranged so that the input data passes sequentially through the convolutional layer 5, the pooling layer 2, the residual module 3, the up-sampling layer 2 and the up-sampling layer 3; the data output by the up-sampling layer 3 and the data output by the pooling layer 2 are input into the convolutional layer 6, and the data output by the convolutional layer 6 is input into the connection layer 1;
the data processing path 4 is arranged so that the input data passes sequentially through the convolutional layer 7, the pooling layer 3, the residual module 4, the up-sampling layer 5 and the up-sampling layer 6; the data output by the up-sampling layer 6 and the data output by the pooling layer 3 are input into the convolutional layer 8, and the data output by the convolutional layer 8 is input into the connection layer 1;
the data processing path 5 is arranged so that the input data passes sequentially through the convolutional layer 9, the pooling layer 4, the residual module 5 and the up-sampling layers 7, 8, 9 and 10; the data output by the up-sampling layer 10 and the data output by the pooling layer 4 are input into the convolutional layer 10, and the data output by the convolutional layer 10 is input into the connection layer 1;
the connection layer 1 concatenates the data of the five paths and inputs it to the convolutional layer 11, which outputs a 3D tensor with 3 channels.
3. The method for measuring the three-dimensional surface shape of single-frame color fringe projection based on deep learning of claim 2, wherein the pooling layers 1, 2, 3 and 4 down-sample the data to 1/2, 1/4, 1/8 and 1/16 of its original resolution, respectively.
4. The method for measuring the three-dimensional surface shape of the single-frame color fringe projection based on the deep learning of claim 1, wherein the specific method for generating the CNN model training data is as follows:
step 2.1: projecting 37 fringe images to an object using a projector, the 37 fringe images comprising 12 frequencies fRGreen phase shifted fringe image of
Figure FDA0002417177200000021
Frequency f of 12 amplitudesGGreen phase shifted fringe image of
Figure FDA0002417177200000022
And 12 frequencies fBGreen phase shifted fringe image of
Figure FDA0002417177200000023
And 1 composite color stripe image IRGBThe red channel is at frequency fRGray scale stripe image IRGreen channel is frequency fGGray scale stripe image IGBlue channel is frequency fBGray scale stripe image IB
Step 2.2: using a color camera to acquire 37 fringe images modulated by an object and generate a set of input and output data required for training CNN, specifically:
step 2.2.1, for the first 36 collected green stripe images
Figure FDA0002417177200000024
Obtaining the frequency f by using the PS method respectivelyR、fG、fBWrapped phase of
Figure FDA0002417177200000025
Obtaining frequency f by PDM methodGAbsolute phase of (phi)GWill frequency fGMolecular item M ofGDenominator term DGAnd absolute phase phiGAs a set of standard data for model CNN.
Step 2.2.2, the collected 37 th composite color stripe image IRGBGray scale image I in three channelsR、IG、IBAs a set of input data of the network CNN;
step 2.3: and (5) repeating the steps 2.1 and 2.2 to generate training data with set number.
5. The method for measuring the three-dimensional surface shape of single-frame color fringe projection based on deep learning of claim 4, wherein the specific method for training the model CNN is as follows:
the grayscale images I_R, I_G, I_B in the three channels of the 37th composite color fringe image are taken as the model CNN input data, and the numerator term M_G, denominator term D_G and absolute phase Φ_G of frequency f_G are taken as the model CNN ground-truth data; the difference between the ground-truth data and the model CNN output is calculated, and the internal parameters of the CNN are iteratively optimized with back-propagation until the loss function converges.
6. The method for measuring the three-dimensional surface shape of single-frame color fringe projection based on deep learning of claim 4, wherein substituting the numerator term and the denominator term into the arctangent function and combining the result with the low-precision absolute phase to obtain the final absolute phase information specifically comprises:
substituting the numerator term and the denominator term into the arctangent function to obtain the wrapped phase;
combining the wrapped phase and the low-precision absolute phase to obtain the final absolute phase by

Φ_G = φ_G + 2π · Round[(Φ_G^CNN − φ_G) / (2π)]

where Round denotes rounding to the nearest integer, Φ_G is the final absolute phase, φ_G is the wrapped phase, and Φ_G^CNN is the low-precision absolute phase output by the model CNN.
CN202010194707.6A 2020-03-19 2020-03-19 Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning Pending CN111402240A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010194707.6A CN111402240A (en) 2020-03-19 2020-03-19 Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning
PCT/CN2020/115539 WO2021184707A1 (en) 2020-03-19 2020-09-16 Three-dimensional surface profile measurement method for single-frame color fringe projection based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010194707.6A CN111402240A (en) 2020-03-19 2020-03-19 Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning

Publications (1)

Publication Number Publication Date
CN111402240A (en) 2020-07-10

Family

ID=71432625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010194707.6A Pending CN111402240A (en) 2020-03-19 2020-03-19 Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning

Country Status (2)

Country Link
CN (1) CN111402240A (en)
WO (1) WO2021184707A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066959B (en) * 2021-11-25 2024-05-10 天津工业大学 Single fringe image depth estimation method based on transducer
CN114754703B (en) * 2022-04-19 2024-04-19 安徽大学 Three-dimensional measurement method and system based on color grating
TWI816511B (en) * 2022-08-15 2023-09-21 National University of Kaohsiung Method for image recognition using balance grey code
CN115775302B (en) * 2023-02-13 2023-04-14 南京航空航天大学 Transformer-based three-dimensional reconstruction method for high-reflectivity object
CN116105632B (en) * 2023-04-12 2023-06-23 四川大学 Self-supervision phase unwrapping method and device for structured light three-dimensional imaging
CN117011478B (en) * 2023-10-07 2023-12-22 青岛科技大学 Single image reconstruction method based on deep learning and stripe projection profilometry

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109253708B (en) * 2018-09-29 2020-09-11 南京理工大学 Stripe projection time phase unwrapping method based on deep learning
CN111402240A (en) * 2020-03-19 2020-07-10 南京理工大学 Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021184707A1 (en) * 2020-03-19 2021-09-23 南京理工大学 Three-dimensional surface profile measurement method for single-frame color fringe projection based on deep learning
CN111829458A (en) * 2020-07-20 2020-10-27 南京理工大学智能计算成像研究院有限公司 Gamma nonlinear error correction method based on deep learning
CN111928794A (en) * 2020-08-04 2020-11-13 北京理工大学 Closed fringe compatible single interference diagram phase method and device based on deep learning
CN111928794B (en) * 2020-08-04 2022-03-11 北京理工大学 Closed fringe compatible single interference diagram phase method and device based on deep learning
CN112116616A (en) * 2020-08-05 2020-12-22 西安交通大学 Phase information extraction method based on convolutional neural network, storage medium and equipment
CN112833818A (en) * 2021-01-07 2021-05-25 南京理工大学智能计算成像研究院有限公司 Single-frame fringe projection three-dimensional surface type measuring method
CN112802084A (en) * 2021-01-13 2021-05-14 广州大学 Three-dimensional topography measuring method, system and storage medium based on deep learning
CN112802084B (en) * 2021-01-13 2023-07-07 广州大学 Three-dimensional morphology measurement method, system and storage medium based on deep learning
CN113256800B (en) * 2021-06-10 2021-11-26 南京理工大学 Accurate and rapid large-field-depth three-dimensional reconstruction method based on deep learning
CN113256800A (en) * 2021-06-10 2021-08-13 南京理工大学 Accurate and rapid large-field-depth three-dimensional reconstruction method based on deep learning
CN113674370A (en) * 2021-08-02 2021-11-19 南京理工大学 Single-frame interference diagram tuning method based on convolutional neural network
CN114777677A (en) * 2022-03-09 2022-07-22 南京理工大学 Single-frame dual-frequency multiplexing fringe projection three-dimensional surface type measuring method based on deep learning
CN114777677B (en) * 2022-03-09 2024-04-26 南京理工大学 Single-frame double-frequency multiplexing stripe projection three-dimensional surface type measurement method based on deep learning
CN114543707A (en) * 2022-04-25 2022-05-27 南京南暄禾雅科技有限公司 Phase expansion method in scene with large depth of field
CN115187649A (en) * 2022-09-15 2022-10-14 中国科学技术大学 Three-dimensional measurement method, system, equipment and storage medium for resisting strong ambient light interference
CN117496499A (en) * 2023-12-27 2024-02-02 山东科技大学 Method and system for identifying and compensating false depth edges in 3D structured light imaging
CN117496499B (en) * 2023-12-27 2024-03-15 山东科技大学 Method and system for identifying and compensating false depth edges in 3D structured light imaging
CN117739861A (en) * 2024-02-20 2024-03-22 青岛科技大学 Improved single-mode self-phase-resolving stripe projection three-dimensional measurement method based on deep learning
CN117739861B (en) * 2024-02-20 2024-05-14 青岛科技大学 Improved single-mode self-phase-resolving stripe projection three-dimensional measurement method based on deep learning

Also Published As

Publication number Publication date
WO2021184707A1 (en) 2021-09-23

Similar Documents

Publication Publication Date Title
CN111402240A (en) Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning
CN111351450B (en) Single-frame stripe image three-dimensional measurement method based on deep learning
CN109253708B (en) Stripe projection time phase unwrapping method based on deep learning
TWI414748B Method for simultaneous hue phase-shifting and system for 3-D surface profilometry using the same
CN110163817B (en) Phase principal value extraction method based on full convolution neural network
CN114549746B (en) High-precision true color three-dimensional reconstruction method
CN103994732B (en) A kind of method for three-dimensional measurement based on fringe projection
CN114777677B (en) Single-frame double-frequency multiplexing stripe projection three-dimensional surface type measurement method based on deep learning
CN111879258A (en) Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet
CN110109105A (en) A method of the InSAR technical monitoring Ground Deformation based on timing
CN113379818A (en) Phase analysis method based on multi-scale attention mechanism network
CN112833818B (en) Single-frame fringe projection three-dimensional surface type measuring method
CN112880589A (en) Optical three-dimensional measurement method based on double-frequency phase coding
CN114549307B (en) High-precision point cloud color reconstruction method based on low-resolution image
CN115205360A (en) Three-dimensional outer contour online measurement and defect detection method of composite stripe projection steel pipe and application
CN115546255A (en) SIFT stream-based single-frame fringe projection high dynamic range error compensation method
CN115272065A (en) Dynamic fringe projection three-dimensional measurement method based on fringe image super-resolution reconstruction
Song et al. Super-resolution phase retrieval network for single-pattern structured light 3D imaging
Liu et al. A novel phase unwrapping method for binocular structured light 3D reconstruction based on deep learning
CN116934999A (en) NeRF three-dimensional reconstruction system and method based on limited view angle image
CN112348947B (en) Three-dimensional reconstruction method for deep learning based on reference information assistance
CN116645466A (en) Three-dimensional reconstruction method, electronic equipment and storage medium
CN111023999B (en) Dense point cloud generation method based on spatial coding structured light
Ding et al. Recovering the absolute phase maps of three selected spatial-frequency fringes with multi-color channels
CN113884027A (en) Geometric constraint phase unwrapping method based on self-supervision deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination