WO2021184707A1 - Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning - Google Patents

Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning Download PDF

Info

Publication number
WO2021184707A1
WO2021184707A1 (PCT/CN2020/115539)
Authority
WO
WIPO (PCT)
Prior art keywords
layer
data
input
cnn
fringe
Prior art date
Application number
PCT/CN2020/115539
Other languages
English (en)
French (fr)
Inventor
左超
钱佳铭
陈钱
冯世杰
李艺璇
陶天阳
胡岩
尚昱昊
Original Assignee
南京理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京理工大学 (Nanjing University of Science and Technology)
Publication of WO2021184707A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2509 Color coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • The invention belongs to the technical field of optical measurement, and specifically relates to a three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning.
  • Fringe projection profilometry (FPP) has become one of the most widely used three-dimensional (3D) measurement techniques owing to its simple hardware, flexible implementation, and high measurement accuracy.
  • In recent years, with the growing demand for 3D information acquisition in high-speed scenarios such as online quality inspection and rapid reverse engineering, FPP-based high-speed 3D shape measurement has become crucial (Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry, S. Feng et al.).
  • To obtain high-precision phase information, the phase-shifting (PS) method, which offers high measurement resolution, should be preferred (Phase shifting algorithms for fringe projection profilometry: A review, C. Zuo et al.).
  • However, the PS method requires at least three fringe images, and these fringe images occupy all the channels of an RGB image. Phase ambiguity can therefore only be eliminated with a spatial phase unwrapping method, which fails when it encounters isolated phase regions (Color-encoded digital fringe projection technique for high-speed 3-D surface contouring, P. S. Huang et al.).
  • To achieve stable phase unwrapping, a strategy of combining fringe patterns with Gray codes, or of combining multi-frequency fringe images, is usually adopted.
  • The former still cannot unwrap the phase stably, because the edges of the Gray-code pattern are difficult to identify (Projected fringe profilometry using the area-encoded algorithm for spatially isolated and dynamic objects, W. H. Su).
  • The latter can recover the absolute phase through the 3-fringe number selection method (Optical imaging of physical objects, D. Towers et al.), but its phase accuracy is poor because it relies on the Fourier transform (FT) method, a single-frame imaging method whose phase quality is poor in discontinuous or isolated regions.
  • In addition, the color-coded projection method has inherent defects such as inter-channel chromatic aberration and color crosstalk, which degrade the quality of the phase calculation. Although researchers have proposed pre-processing methods to compensate for these defects, they can only reduce their impact on the measurement to a certain extent.
  • The purpose of the present invention is to provide a three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning.
  • The technical solution that achieves the objective of the present invention is a deep-learning-based single-frame color fringe projection phase unwrapping method with the following specific steps:
  • Step 1: Build a model, CNN, based on a convolutional neural network;
  • Step 2: Generate training data for the CNN model and train the model CNN;
  • Step 3: Input the grayscale images in the three channels of the measured object's composite color fringe image into the trained model CNN to obtain the numerator term, the denominator term, and a low-precision absolute phase; substitute the numerator and denominator terms into the arctangent function and combine the result with the low-precision absolute phase to obtain the final absolute phase information.
  • The model CNN comprises five data-processing paths, connection layer 1, and convolutional layer 11 (a code sketch follows this list), wherein:
  • Data-processing path 1 is set as follows: the input data passes sequentially through convolutional layer 1 and residual module 1; the data output by residual module 1 is input to convolutional layer 2 together with the data output by convolutional layer 1, and the output of convolutional layer 2 is input to connection layer 1;
  • Data-processing path 2 is set as follows: the input data passes sequentially through convolutional layer 3, pooling layer 1, residual module 2, and up-sampling layer 1; the data output by up-sampling layer 1 is input to convolutional layer 4 together with the data output by pooling layer 1, and the output of convolutional layer 4 is input to connection layer 1;
  • Data-processing path 3 is set as follows: the input data passes sequentially through convolutional layer 5, pooling layer 2, residual module 3, up-sampling layer 2, and up-sampling layer 3; the data output by up-sampling layer 3 is input to convolutional layer 6 together with the data output by pooling layer 2, and the output of convolutional layer 6 is input to connection layer 1;
  • Data-processing path 4 is set as follows: the input data passes sequentially through convolutional layer 7, pooling layer 3, residual module 4, up-sampling layer 4, up-sampling layer 5, and up-sampling layer 6; the data output by up-sampling layer 6 is input to convolutional layer 8 together with the data output by pooling layer 3, and the output of convolutional layer 8 is input to connection layer 1;
  • Data-processing path 5 is set as follows: the input data passes sequentially through convolutional layer 9, pooling layer 4, residual module 5, up-sampling layer 7, up-sampling layer 8, up-sampling layer 9, and up-sampling layer 10; the data output by up-sampling layer 10 is input to convolutional layer 10 together with the data output by pooling layer 4, and the output of convolutional layer 10 is input to connection layer 1;
  • Connection layer 1 concatenates the five paths of data and inputs them to convolutional layer 11, yielding a 3D tensor with three output channels.
  • Pooling layers 1, 2, 3, and 4 downsample the data to 1/2, 1/4, 1/8, and 1/16 of its original resolution, respectively.
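For readers implementing the architecture, the following is a minimal PyTorch sketch of the five-path design described above. It is an illustration under stated assumptions, not the patent's exact network: the kernel sizes, the channel count c, the use of max pooling and nearest-neighbor upsampling, and the upsampling of the pooled skip tensor back to full resolution before fusion are all choices the patent text does not fix, and each chain of ×2 up-sampling layers is collapsed into a single resize of the same overall factor.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual module in the style of He et al. (as cited by the patent)."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1))

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class Path(nn.Module):
    """One data-processing path: conv -> 1/2**k pooling -> residual module ->
    upsample back to full resolution, fused with the pooled (skip) features."""
    def __init__(self, c=64, k=0):
        super().__init__()
        self.head = nn.Conv2d(3, c, 3, padding=1)
        self.pool = nn.MaxPool2d(2 ** k) if k > 0 else nn.Identity()
        self.res = ResidualBlock(c)
        self.up = nn.Upsample(scale_factor=2 ** k) if k > 0 else nn.Identity()
        self.fuse = nn.Conv2d(2 * c, c, 3, padding=1)

    def forward(self, x):
        a = self.head(x)
        p = self.pool(a)          # downsampled features
        u = self.up(self.res(p))  # residual branch, back at full resolution
        s = self.up(p)            # skip branch (assumption: upsampled so both
                                  # branches can be concatenated at full size)
        return self.fuse(torch.cat([u, s], dim=1))

class FringeCNN(nn.Module):
    """Five parallel paths at scales 1, 1/2, 1/4, 1/8, 1/16; connection layer 1
    concatenates them and convolutional layer 11 maps to 3 output channels."""
    def __init__(self, c=64):
        super().__init__()
        self.paths = nn.ModuleList(Path(c, k) for k in range(5))
        self.out = nn.Conv2d(5 * c, 3, 3, padding=1)  # -> M, D, coarse phase

    def forward(self, x):  # x: (B, 3, H, W) holding I_R, I_G, I_B
        return self.out(torch.cat([p(x) for p in self.paths], dim=1))
```

The final convolution is left linear so that the numerator, denominator, and coarse absolute phase can take arbitrary real values; per the training description later in this document, the other convolutional layers would use ReLU activations.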
  • The specific method for generating the CNN model training data is as follows (a sketch of the PS computation follows these steps):
  • Step 2.1: Use a projector to project 37 fringe images onto the object: 12 green phase-shifting fringe images I_n^{f_R} (n = 1, ..., 12) with frequency f_R, 12 green phase-shifting fringe images I_n^{f_G} with frequency f_G, and 12 green phase-shifting fringe images I_n^{f_B} with frequency f_B, plus one composite color fringe image I_RGB whose red channel is a grayscale fringe image I_R of frequency f_R, whose green channel is a grayscale fringe image I_G of frequency f_G, and whose blue channel is a grayscale fringe image I_B of frequency f_B;
  • Step 2.2: Use a color camera to capture the 37 fringe images modulated by the object, and generate one set of the input and output data required for training the CNN, specifically:
  • Step 2.2.1: For the first 36 captured green fringe images, use the phase-shifting (PS) method to obtain the wrapped phases at frequencies f_R, f_G, and f_B, and obtain the absolute phase Φ_G of frequency f_G with the PDM method; take the numerator term M_G, the denominator term D_G, and the absolute phase Φ_G of frequency f_G as a set of ground-truth data for the model CNN;
  • Step 2.2.2: Use the grayscale images I_R, I_G, and I_B in the three channels of the 37th image, the composite color fringe image I_RGB, as a set of input data for the CNN;
  • Step 2.3: Repeat steps 2.1 and 2.2 to generate the set number of groups of training data.
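As context for step 2.2.1, here is a minimal NumPy sketch of the 12-step PS computation that yields the numerator term, the denominator term, and the wrapped phase. The function name, the array layout, and the phase-shift sign convention are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np

def ps_decompose(images):
    """N-step phase shifting: return numerator M, denominator D, and the
    wrapped phase arctan(M / D) for a (N, H, W) stack of fringe images."""
    n_steps = images.shape[0]                 # N = 12 in this document
    n = np.arange(1, n_steps + 1).reshape(-1, 1, 1)
    m = np.sum(images * np.sin(2 * np.pi * n / n_steps), axis=0)  # numerator
    d = np.sum(images * np.cos(2 * np.pi * n / n_steps), axis=0)  # denominator
    phi = np.arctan2(m, d)                    # wrapped phase in (-pi, pi]
    return m, d, phi
```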
  • The specific method for training the model CNN is as follows:
  • The grayscale images I_R, I_G, and I_B in the three channels of the 37th image, the composite color fringe image, are used as the model CNN input data, and the numerator term M_G, denominator term D_G, and absolute phase Φ_G of frequency f_G are used as the ground-truth data for the model CNN; the difference between the ground truth and the output of the model CNN is calculated, and the internal parameters of the CNN are iteratively optimized by backpropagation until the loss function converges.
  • The final absolute phase is computed as Φ_G = φ_G + 2π · Round((Φ_G^coarse − φ_G) / 2π), where Round represents the rounding operation, φ_G = arctan(M_G / D_G) is the high-precision wrapped phase, Φ_G^coarse is the low-precision absolute phase output by the CNN, and Φ_G is the final absolute phase (a code sketch follows).
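The unwrapping formula above translates directly into code. The sketch below assumes NumPy, with m and d the CNN's numerator and denominator outputs and coarse the CNN's low-precision absolute phase; the names are illustrative.

```python
import numpy as np

def unwrap_with_coarse_phase(m, d, coarse):
    """Recover the high-precision absolute phase from the CNN outputs."""
    phi = np.arctan2(m, d)                      # high-precision wrapped phase
    k = np.round((coarse - phi) / (2 * np.pi))  # per-pixel fringe order
    return phi + 2 * np.pi * k
```

Because the fringe order is an integer, the rounding absorbs any error in the coarse phase smaller than π, which is why a low-precision absolute phase suffices for reliable unwrapping.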
  • Compared with the prior art, the present invention has the following significant advantages: (1) the present invention achieves high-precision phase information acquisition and stable phase unwrapping simultaneously from a single color image; (2) the present invention requires no complicated pre-/post-processing of the system and automatically compensates for chromatic aberration and color crosstalk between the color channels.
  • Figure 1 is a flow chart of the present invention.
  • Figure 2 shows the structure and principle diagram of CNN.
  • Figure 3 is a comparison diagram of the results of the present invention and the traditional method.
  • A deep-learning-based three-dimensional surface measurement method using single-frame color fringe projection, which obtains high-precision absolute phase information from a single color fringe image, includes the following steps:
  • Step 1 Build a model CNN based on convolutional neural network.
  • Specifically, the constructed model CNN is shown in Figure 2, where H denotes the height of the image (in pixels), W the width of the image, and C the number of channels; the number of channels equals the number of filters used.
  • The input of the model CNN is a 3D tensor with three channels, and the output is also a 3D tensor with three channels.
  • The model CNN comprises five data-processing paths, connection layer 1, and convolutional layer 11.
  • Data-processing path 1 is set as follows: the input data passes sequentially through convolutional layer 1 and residual module 1; the data output by residual module 1 is input to convolutional layer 2 together with the data output by convolutional layer 1, and the output of convolutional layer 2 is input to connection layer 1.
  • Data-processing path 2 is set as follows: the input data passes sequentially through convolutional layer 3, pooling layer 1, residual module 2, and up-sampling layer 1; the data output by up-sampling layer 1 is input to convolutional layer 4 together with the data output by pooling layer 1, and the output of convolutional layer 4 is input to connection layer 1.
  • Data-processing path 3 is set as follows: the input data passes sequentially through convolutional layer 5, pooling layer 2, residual module 3, up-sampling layer 2, and up-sampling layer 3; the data output by up-sampling layer 3 is input to convolutional layer 6 together with the data output by pooling layer 2, and the output of convolutional layer 6 is input to connection layer 1.
  • Data-processing path 4 is set as follows: the input data passes sequentially through convolutional layer 7, pooling layer 3, residual module 4, up-sampling layer 4, up-sampling layer 5, and up-sampling layer 6; the data output by up-sampling layer 6 is input to convolutional layer 8 together with the data output by pooling layer 3, and the output of convolutional layer 8 is input to connection layer 1.
  • Data-processing path 5 is set as follows: the input data passes sequentially through convolutional layer 9, pooling layer 4, residual module 5, up-sampling layer 7, up-sampling layer 8, up-sampling layer 9, and up-sampling layer 10; the data output by up-sampling layer 10 is input to convolutional layer 10 together with the data output by pooling layer 4, and the output of convolutional layer 10 is input to connection layer 1.
  • Each residual module is constructed as described in Deep residual learning for image recognition, K. He et al.
  • Pooling layers 1, 2, 3, and 4 downsample the data to 1/2, 1/4, 1/8, and 1/16 of its original resolution, respectively, to improve the model's ability to recognize features while keeping the number of channels unchanged.
  • Up-sampling layers 1 to 10 up-sample the resolution of the data, each doubling its height and width, in order to restore the original resolution of the image.
  • Connection layer 1 concatenates the five paths of data; finally, after convolutional layer 11, a 3D tensor with three channels is output.
  • Step 2: Generate training data and train the model CNN.
  • The specific steps are as follows:
  • Step 2.1: The projector projects 37 fringe images (36 monochrome fringe images and one composite fringe image) onto the object.
  • Step 2.2: Use a color camera to capture the 37 fringe images modulated by the object, and generate one set of the input and output data required for training the CNN, specifically:
  • Step 2.2.1: For the first 36 captured green fringe images I_n^f (n = 1, 2, ..., 12; f ∈ {f_R, f_G, f_B}), use the PS method to obtain the wrapped phase at each frequency: φ_f = arctan(M_f / D_f), with numerator M_f = Σ_{n=1}^{12} I_n^f · sin(2πn/12) and denominator D_f = Σ_{n=1}^{12} I_n^f · cos(2πn/12). Then obtain the absolute phase Φ_G of frequency f_G with the PDM method (Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second, C. Zuo et al.); the absolute phase obtained here is free of inter-channel chromatic aberration and color crosstalk, because only monochrome fringe images are used. Take the numerator term M_G, the denominator term D_G, and the absolute phase Φ_G of frequency f_G as a set of ground-truth data for the CNN;
  • Step 2.2.2: For the captured 37th image, the composite color fringe image I_RGB, use the grayscale images I_R, I_G, and I_B in its three channels as a set of input data for the CNN;
  • Step 2.3: Repeat steps 2.1 and 2.2 to generate 1000 sets of training data.
  • Step 2.4: Train the CNN: the grayscale images I_R, I_G, and I_B in the three channels of the 37th composite color fringe image are used as input data, and M_G, D_G, and Φ_G are used as ground-truth data fed to the model CNN. Using the mean square error as the loss function, the difference between the ground truth and the CNN output is calculated; combined with backpropagation, the internal parameters of the CNN are iteratively optimized until the loss function converges, at which point training ends. During training, the activation function used in every convolutional layer except convolutional layer 11 is the rectified linear unit (ReLU), and the Adam algorithm is used to find the minimum of the loss function (a minimal training-loop sketch follows).
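The sketch below assumes PyTorch, the hypothetical FringeCNN sketched earlier, and a loader that yields (input, target) pairs in which the input stacks I_R, I_G, I_B and the target stacks M_G, D_G, Φ_G; the learning rate and epoch count are illustrative.

```python
import torch

model = FringeCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, per the text
loss_fn = torch.nn.MSELoss()                               # mean square error

for epoch in range(200):                   # iterate until the loss converges
    for inputs, targets in loader:         # both shaped (B, 3, H, W)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()                    # backpropagation
        optimizer.step()
```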
  • Step 3: Use the trained model CNN to perform three-dimensional measurement of the measured object, as follows:
  • Step 3.1: Obtain the information for computing the high-precision wrapped phase and the information for unwrapping simultaneously: input the grayscale images I_R, I_G, and I_B of the three channels of the measured object's composite color fringe image into the trained CNN to obtain the numerator term M_G, the denominator term D_G, and the low-precision absolute phase containing fringe-order information (whose error lies between −π and π).
  • Step 3.2: Obtain the high-precision absolute phase:
  • Step 3.2.1: From the M_G and D_G obtained in step 3.1, obtain the high-precision wrapped phase by formula (2): φ_G = arctan(M_G / D_G).
  • This strategy provides high-precision phase information because predicting the numerator and denominator terms of the arctangent function avoids the difficulty of directly reproducing the 2π phase wraps of the wrapped phase.
  • Step 3.2.2: Obtain the high-precision absolute phase Φ_G through the formula Φ_G = φ_G + 2π · Round((Φ_G^coarse − φ_G) / 2π), where Round represents the rounding operation.
  • After the absolute phase is obtained, three-dimensional reconstruction can be performed using the calibration parameters between the color camera and the projector (Calibration of fringe projection profilometry with bundle adjustment strategy, X. Peng et al.).
  • The invention only needs to project a single color fringe image to obtain a high-precision absolute phase, thereby measuring the three-dimensional surface profile of the measured object.
  • The present invention first builds a model based on a convolutional neural network, referred to herein as CNN.
  • The input of the CNN consists of three channels: the grayscale fringe images in the red, green, and blue channels of the color fringe image.
  • The output data are the numerator term, the denominator term, and a low-precision absolute phase containing fringe-order information.
  • For training, a projector is used to project 12-step phase-shifting fringes at three different frequencies, and the PS method and the projection minimum distance method (PDM) are used to generate the training data required by the CNN.
  • After training, the three channel grayscale fringe images of the color fringe image are input to the CNN, yielding the numerator and denominator terms for computing high-precision phase information and a low-precision absolute phase containing fringe-order information.
  • The numerator and denominator terms are substituted into the arctangent function and combined with the low-precision absolute phase to obtain high-precision absolute phase information, after which three-dimensional reconstruction is performed; an end-to-end code sketch follows.
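Putting the pieces together, a single-frame measurement reduces to the short pipeline below. This is a hedged sketch reusing the hypothetical FringeCNN and unwrap_with_coarse_phase helpers from the earlier sketches, with color_image assumed to be an (H, W, 3) RGB capture as a NumPy array.

```python
import torch

def measure_absolute_phase(color_image, model):
    """Single color frame -> high-precision absolute phase map."""
    x = torch.from_numpy(color_image).float().permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        m, d, coarse = model(x)[0].numpy()  # numerator, denominator, coarse phase
    return unwrap_with_coarse_phase(m, d, coarse)
```

The absolute phase returned here would then be converted to 3D coordinates using the camera-projector calibration parameters, as described above.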
  • To verify the effectiveness of the present invention, a digital fringe projection setup was built from a color camera (model acA640-750uc, Basler, resolution 640×480), a projector (model LightCrafter 4500, TI, resolution 912×1140), and a computer, and was used to capture color fringe images.
  • The H, W, and C of the constructed CNN are 480, 640, and 64, respectively, and the three fringe frequencies f_R, f_G, and f_B are 9, 11, and 13.
  • A total of 1,000 sets of data were collected.
  • 800 sets of data were used for training, and the remaining 200 sets were used for validation.
  • After training, two scenes never seen during training were selected as tests. The invention was compared with a traditional color fringe encoding method (Snapshot color fringe projection for absolute three-dimensional metrology of video sequences, Z. Zhang et al.), with the results of the monochrome 12-step PS method and the PDM method taken as the benchmark. In Figure 3, 3(a) and 3(e) are the composite color images of the two scenes, 3(b) and 3(f) are the results of the traditional color fringe encoding method, 3(c) and 3(g) are the results of this method, and 3(d) and 3(h) are the benchmark results. The results show that the present invention obtains a more accurate absolute phase reconstruction, and the final three-dimensional reconstruction quality is even comparable to the results obtained by the PS and PDM methods. It should also be pointed out that the invention uses only one color composite fringe image, whereas the method used for the benchmark results uses 36 fringe images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

A three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning, comprising: building a model, CNN, based on a convolutional neural network, whose input contains three channels, namely the grayscale fringe images in the red, green, and blue channels of a color fringe image; projecting 12-step phase-shifting fringes at three different frequencies with a projector, and generating the training data required by the CNN with the phase-shifting (PS) method and the projection minimum distance method (PDM) to train it; in use, inputting the three channel grayscale fringe images of a color fringe image into the CNN to obtain a numerator term, a denominator term, and a low-precision absolute phase containing fringe-order information, substituting the numerator and denominator terms into the arctangent function, and combining the result with the low-precision absolute phase to compute high-precision absolute phase information. The method provides more accurate phase information and more reliable phase unwrapping without any complicated pre-/post-processing.

Description

Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning

Technical Field

The invention belongs to the technical field of optical measurement, and specifically relates to a three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning.

Background Art

Fringe projection profilometry (FPP) has become one of the most widely used three-dimensional (3D) measurement techniques owing to its simple hardware, flexible implementation, and high measurement accuracy. In recent years, with the growing demand for 3D information acquisition in high-speed scenarios such as online quality inspection and rapid reverse engineering, FPP-based high-speed 3D shape measurement has become crucial (Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry, S. Feng et al.).

To achieve 3D imaging in high-speed scenarios, it is necessary to improve measurement efficiency and reduce the number of fringe patterns required for a single 3D reconstruction. The ideal approach is to recover a high-quality absolute 3D surface of the object from a single image. Color-coded projection techniques (Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques, Z. Zhang) have great advantages in dynamic scene measurement, because they can encode three independent fringe images in the red, blue, and green channels, tripling the imaging efficiency compared with traditional monochrome projection. To make full use of the color image channels, many single-frame color-coded projection techniques have been proposed (Composite phase-shifting algorithm for three-dimensional shape compression, N. Karpinsky et al.). However, these techniques are rarely applicable to high-precision measurement of complex objects. On the one hand, to obtain high-precision phase information, the phase-shifting (PS) method with its high measurement resolution should be preferred (Phase shifting algorithms for fringe projection profilometry: A review, C. Zuo et al.). But the PS method requires at least three fringe images, which occupy all the channels of the RGB image, so phase ambiguity can only be eliminated with a spatial phase unwrapping method, which fails on isolated phase regions (Color-encoded digital fringe projection technique for high-speed 3-D surface contouring, P. S. Huang et al.). On the other hand, to achieve stable phase unwrapping, a strategy of combining fringe patterns with Gray codes or combining multi-frequency fringe images is usually adopted. The former still cannot unwrap the phase stably because the edges of the Gray-code pattern are difficult to identify (Projected fringe profilometry using the area-encoded algorithm for spatially isolated and dynamic objects, W. H. Su). The latter can recover the absolute phase through the 3-fringe number selection method (Optical imaging of physical objects, D. Towers et al.), but its phase accuracy is poor because it uses the Fourier transform (FT) method, a single-frame imaging method whose quality is poor in discontinuous or isolated regions of the phase map. In addition, color-coded projection has inherent defects such as inter-channel chromatic aberration and color crosstalk, which degrade the quality of the phase calculation. Although researchers have proposed pre-processing methods to compensate for these defects, they can only reduce their impact on the measurement to a certain extent.

The above analysis shows that although color-coded projection is very promising for single-frame three-dimensional measurement, the mere three color channels are insufficient to encode fringe images that satisfy both high-quality phase acquisition and stable phase unwrapping; moreover, the chromatic aberration and color crosstalk inherent to the technique are difficult to resolve with traditional methods.
Summary of the Invention

The purpose of the present invention is to provide a three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning.

The technical solution that achieves the objective of the present invention is a deep-learning-based single-frame color fringe projection phase unwrapping method with the following specific steps:

Step 1: Build a model, CNN, based on a convolutional neural network;

Step 2: Generate training data for the CNN model and train the model CNN;

Step 3: Input the grayscale images in the three channels of the measured object's composite color fringe image into the trained model CNN to obtain the numerator term, the denominator term, and a low-precision absolute phase; substitute the numerator and denominator terms into the arctangent function and combine the result with the low-precision absolute phase to obtain the final absolute phase information.

Preferably, the model CNN comprises five data-processing paths, connection layer 1, and convolutional layer 11, wherein:

Data-processing path 1 is set as follows: the input data passes sequentially through convolutional layer 1 and residual module 1; the data output by residual module 1 is input to convolutional layer 2 together with the data output by convolutional layer 1, and the output of convolutional layer 2 is input to connection layer 1;

Data-processing path 2 is set as follows: the input data passes sequentially through convolutional layer 3, pooling layer 1, residual module 2, and up-sampling layer 1; the data output by up-sampling layer 1 is input to convolutional layer 4 together with the data output by pooling layer 1, and the output of convolutional layer 4 is input to connection layer 1;

Data-processing path 3 is set as follows: the input data passes sequentially through convolutional layer 5, pooling layer 2, residual module 3, up-sampling layer 2, and up-sampling layer 3; the data output by up-sampling layer 3 is input to convolutional layer 6 together with the data output by pooling layer 2, and the output of convolutional layer 6 is input to connection layer 1;

Data-processing path 4 is set as follows: the input data passes sequentially through convolutional layer 7, pooling layer 3, residual module 4, up-sampling layer 4, up-sampling layer 5, and up-sampling layer 6; the data output by up-sampling layer 6 is input to convolutional layer 8 together with the data output by pooling layer 3, and the output of convolutional layer 8 is input to connection layer 1;

Data-processing path 5 is set as follows: the input data passes sequentially through convolutional layer 9, pooling layer 4, residual module 5, up-sampling layer 7, up-sampling layer 8, up-sampling layer 9, and up-sampling layer 10; the data output by up-sampling layer 10 is input to convolutional layer 10 together with the data output by pooling layer 4, and the output of convolutional layer 10 is input to connection layer 1;

Connection layer 1 concatenates the five paths of data and inputs them to convolutional layer 11, yielding a 3D tensor with three output channels.

Preferably, pooling layers 1, 2, 3, and 4 downsample the data to 1/2, 1/4, 1/8, and 1/16 of its original resolution, respectively.
Preferably, the specific method for generating the CNN model training data is:

Step 2.1: Use a projector to project 37 fringe images onto the object: 12 green phase-shifting fringe images I_n^{f_R} (n = 1, ..., 12) with frequency f_R, 12 green phase-shifting fringe images I_n^{f_G} with frequency f_G, and 12 green phase-shifting fringe images I_n^{f_B} with frequency f_B, plus one composite color fringe image I_RGB whose red channel is a grayscale fringe image I_R of frequency f_R, whose green channel is a grayscale fringe image I_G of frequency f_G, and whose blue channel is a grayscale fringe image I_B of frequency f_B;

Step 2.2: Use a color camera to capture the 37 fringe images modulated by the object, and generate one set of the input and output data required for training the CNN, specifically:

Step 2.2.1: For the first 36 captured green fringe images, use the phase-shifting (PS) method to obtain the wrapped phases φ_R, φ_G, and φ_B at frequencies f_R, f_G, and f_B, and obtain the absolute phase Φ_G of frequency f_G with the PDM method; take the numerator term M_G, the denominator term D_G, and the absolute phase Φ_G of frequency f_G as a set of ground-truth data for the model CNN.

Step 2.2.2: Use the grayscale images I_R, I_G, and I_B in the three channels of the captured 37th composite color fringe image I_RGB as a set of input data for the CNN;

Step 2.3: Repeat steps 2.1 and 2.2 to generate the set number of groups of training data.
Preferably, the specific method for training the model CNN is:

The grayscale images I_R, I_G, and I_B in the three channels of the 37th composite color fringe image are used as the model CNN input data, and the numerator term M_G, denominator term D_G, and absolute phase Φ_G of frequency f_G are used as the model CNN ground-truth data; the difference between the ground-truth data and the model CNN output is calculated, and the internal parameters of the CNN are iteratively optimized by backpropagation until the loss function converges.

Preferably, substituting the numerator and denominator terms into the arctangent function and combining the result with the low-precision absolute phase to obtain the final absolute phase information specifically comprises:

substituting the numerator and denominator terms into the arctangent function to obtain the wrapped phase φ_G = arctan(M_G / D_G);

combining the wrapped phase with the low-precision absolute phase to obtain the final absolute phase by the formula

Φ_G = φ_G + 2π · Round((Φ_G^coarse − φ_G) / 2π),

where Round denotes the rounding operation, Φ_G is the final absolute phase, φ_G is the wrapped phase, and Φ_G^coarse is the low-precision absolute phase output by the model CNN.
Compared with the prior art, the present invention has the following significant advantages: (1) the present invention achieves high-precision phase information acquisition and stable phase unwrapping simultaneously from a single color image; (2) the present invention requires no complicated pre-/post-processing of the system and automatically compensates for chromatic aberration and color crosstalk between the color channels.
The present invention is described in further detail below with reference to the accompanying drawings.

Brief Description of the Drawings

Figure 1 is a flow chart of the present invention.

Figure 2 shows the structure and principle of the CNN.

Figure 3 compares the results of the present invention with those of the traditional method.
Detailed Description

A deep-learning-based three-dimensional surface profile measurement method using single-frame color fringe projection, which obtains high-precision absolute phase information from a single color fringe image, includes the following steps:

Step 1: Build a model, CNN, based on a convolutional neural network.

Specifically, the constructed model CNN is shown in Figure 2, where H denotes the height of the image (in pixels), W denotes the width of the image, and C denotes the number of channels, which equals the number of filters used. The input of the model CNN is a 3D tensor with three channels, and the output is also a 3D tensor with three channels. The model CNN comprises five data-processing paths, connection layer 1, and convolutional layer 11.

In a further embodiment, data-processing path 1 is set as follows: the input data passes sequentially through convolutional layer 1 and residual module 1; the data output by residual module 1 is input to convolutional layer 2 together with the data output by convolutional layer 1, and the output of convolutional layer 2 is input to connection layer 1.

Data-processing path 2 is set as follows: the input data passes sequentially through convolutional layer 3, pooling layer 1, residual module 2, and up-sampling layer 1; the data output by up-sampling layer 1 is input to convolutional layer 4 together with the data output by pooling layer 1, and the output of convolutional layer 4 is input to connection layer 1.

Data-processing path 3 is set as follows: the input data passes sequentially through convolutional layer 5, pooling layer 2, residual module 3, up-sampling layer 2, and up-sampling layer 3; the data output by up-sampling layer 3 is input to convolutional layer 6 together with the data output by pooling layer 2, and the output of convolutional layer 6 is input to connection layer 1.

Data-processing path 4 is set as follows: the input data passes sequentially through convolutional layer 7, pooling layer 3, residual module 4, up-sampling layer 4, up-sampling layer 5, and up-sampling layer 6; the data output by up-sampling layer 6 is input to convolutional layer 8 together with the data output by pooling layer 3, and the output of convolutional layer 8 is input to connection layer 1.

Data-processing path 5 is set as follows: the input data passes sequentially through convolutional layer 9, pooling layer 4, residual module 5, up-sampling layer 7, up-sampling layer 8, up-sampling layer 9, and up-sampling layer 10; the data output by up-sampling layer 10 is input to convolutional layer 10 together with the data output by pooling layer 4, and the output of convolutional layer 10 is input to connection layer 1.

The specific construction of each residual module follows the reference Deep residual learning for image recognition, K. He et al.

Specifically, pooling layers 1, 2, 3, and 4 downsample the data to 1/2, 1/4, 1/8, and 1/16 of its original resolution, respectively, to improve the model's ability to recognize features while keeping the number of channels unchanged.

Specifically, up-sampling layers 1 to 10 up-sample the resolution of the data, each doubling its height and width, in order to restore the original resolution of the image.

Subsequently, connection layer 1 concatenates the five paths of data. Finally, after convolutional layer 11, a 3D tensor with three channels is output.
Step 2: Generate the training data and train the model CNN. The specific steps are as follows:

Step 2.1: The projector projects 37 fringe images onto the object (36 monochrome fringe images and one composite fringe image). Specifically, a projector is used to project 37 fringe images onto the object: 12 green phase-shifting fringe images I_n^{f_R} (n = 1, ..., 12) with frequency f_R, 12 green phase-shifting fringe images I_n^{f_G} with frequency f_G, and 12 green phase-shifting fringe images I_n^{f_B} with frequency f_B, plus one composite color fringe image I_RGB whose red channel is a grayscale fringe image I_R of frequency f_R, whose green channel is a grayscale fringe image I_G of frequency f_G, and whose blue channel is a grayscale fringe image I_B of frequency f_B;

Step 2.2: Use a color camera to capture the 37 fringe images modulated by the object, and generate one set of the input and output data required for training the CNN, specifically:

Step 2.2.1: For the first 36 captured green fringe images, use the PS method to obtain the wrapped phase at each frequency f ∈ {f_R, f_G, f_B}:

φ_f = arctan(M_f / D_f),

M_f = Σ_{n=1}^{12} I_n^f · sin(2πn/12),

D_f = Σ_{n=1}^{12} I_n^f · cos(2πn/12),

where I_n^f denotes the n-th green fringe image of frequency f (n = 1, 2, ..., 12), and M and D denote the numerator and denominator terms of the arctangent function, respectively.

After the wrapped phases φ_R, φ_G, and φ_B of the three frequencies are obtained, the absolute phase Φ_G of frequency f_G is obtained with the PDM method (Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second, C. Zuo et al.). The absolute phase Φ_G obtained here is free of inter-channel chromatic aberration and color crosstalk, because only monochrome fringe images are used. The numerator term M_G, the denominator term D_G, and the absolute phase Φ_G of frequency f_G computed above are taken as a set of ground-truth data for the CNN.

Step 2.2.2: For the captured 37th composite color fringe image I_RGB, use the grayscale images I_R, I_G, and I_B in its three channels as a set of input data for the CNN;

Step 2.3: Repeat steps 2.1 and 2.2 to generate 1000 sets of training data.

Step 2.4: Train the CNN: the grayscale images I_R, I_G, and I_B in the three channels of the 37th composite color fringe image are used as input data, and M_G, D_G, and Φ_G are used as ground-truth data fed to the model CNN. Using the mean square error as the loss function, the difference between the ground truth and the CNN output is calculated; combined with backpropagation, the internal parameters of the CNN are iteratively optimized until the loss function converges, at which point training ends. During training, the activation function used in every convolutional layer except convolutional layer 11 is the rectified linear unit (ReLU), and the Adam algorithm is used to find the minimum of the loss function.
Step 3: Use the trained model CNN to perform three-dimensional measurement of the measured object, as follows:

Step 3.1: Obtain the information for computing the high-precision wrapped phase and the information for unwrapping simultaneously. Input the grayscale images I_R, I_G, and I_B of the three channels of the measured object's composite color fringe image into the trained CNN to obtain the numerator term M_G and denominator term D_G for computing high-precision wrapped phase information, together with the low-precision absolute phase Φ_G^coarse containing fringe-order information (whose error lies between −π and π);

Step 3.2: Obtain the high-precision absolute phase.

Step 3.2.1: From the M_G and D_G obtained in step 3.1, obtain the high-precision wrapped phase by formula (2): φ_G = arctan(M_G / D_G). This strategy provides high-precision phase information because predicting the numerator and denominator terms of the arctangent function avoids the difficulty of directly reproducing the 2π phase wraps of the wrapped phase.

Step 3.2.2: Obtain the high-precision absolute phase Φ_G through the formula

Φ_G = φ_G + 2π · Round((Φ_G^coarse − φ_G) / 2π),

where Round denotes the rounding operation.

After the absolute phase is obtained, three-dimensional reconstruction can be performed using the calibration parameters between the color camera and the projector (Calibration of fringe projection profilometry with bundle adjustment strategy, X. Peng et al.).

The invention only needs to project a single color fringe image to obtain a high-precision absolute phase, thereby measuring the three-dimensional surface profile of the measured object. The present invention first builds a model based on a convolutional neural network, referred to herein as CNN. The input of the CNN consists of three channels: the grayscale fringe images in the red, green, and blue channels of the color fringe image; the output data are the numerator term and denominator term for computing high-precision phase information, together with a low-precision absolute phase containing fringe-order information. For training, a projector projects 12-step phase-shifting fringes at three different frequencies, and the PS method and the projection minimum distance method (PDM) are used to generate the training data required by the CNN. After training, the three channel grayscale fringe images of the color fringe image are input to the CNN to obtain the numerator term, the denominator term, and a low-precision absolute phase containing fringe-order information. The numerator and denominator terms are substituted into the arctangent function and combined with the low-precision absolute phase to obtain high-precision absolute phase information, after which three-dimensional reconstruction is performed.
Example:

To verify the effectiveness of the present invention, a digital fringe projection setup was built from a color camera (model acA640-750uc, Basler, resolution 640×480), a projector (model LightCrafter 4500, TI, resolution 912×1140), and a computer, and used to capture color fringe images. The H, W, and C of the constructed CNN are 480, 640, and 64, and the three fringe frequencies f_R, f_G, and f_B are 9, 11, and 13, respectively. A total of 1000 sets of data were collected; 800 sets were used for training and the remaining 200 sets for validation. After training, two scenes never seen during training were selected as tests to verify the effectiveness of the invention. To demonstrate its advantages, the invention was compared with a traditional color fringe encoding method (Snapshot color fringe projection for absolute three-dimensional metrology of video sequences, Z. Zhang et al.), and the results of the monochrome 12-step PS method and the PDM method were chosen as the benchmark. Figure 3 shows the measurement results: 3(a) and 3(e) are the composite color images of the two scenes, 3(b) and 3(f) are the results measured by the traditional color fringe encoding method, 3(c) and 3(g) are the results of this method, and 3(d) and 3(h) are the benchmark results. The results show that the present invention obtains a more accurate absolute phase reconstruction, and the final three-dimensional reconstruction quality is even comparable to the results obtained by the PS and PDM methods. It should also be pointed out that the invention uses only one color composite fringe image, whereas the method used for the benchmark results uses 36 fringe images.

Claims (6)

  1. A three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning, characterized in that the specific steps are:
    Step 1: Build a model, CNN, based on a convolutional neural network;
    Step 2: Generate training data for the CNN model and train the model CNN;
    Step 3: Input the grayscale images in the three channels of the measured object's composite color fringe image into the trained model CNN to obtain the numerator term, the denominator term, and a low-precision absolute phase; substitute the numerator and denominator terms into the arctangent function and combine the result with the low-precision absolute phase to obtain the final absolute phase information.
  2. The three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning according to claim 1, characterized in that the model CNN comprises five data-processing paths, connection layer 1, and convolutional layer 11, wherein:
    Data-processing path 1 is set as follows: the input data passes sequentially through convolutional layer 1 and residual module 1; the data output by residual module 1 is input to convolutional layer 2 together with the data output by convolutional layer 1, and the output of convolutional layer 2 is input to connection layer 1;
    Data-processing path 2 is set as follows: the input data passes sequentially through convolutional layer 3, pooling layer 1, residual module 2, and up-sampling layer 1; the data output by up-sampling layer 1 is input to convolutional layer 4 together with the data output by pooling layer 1, and the output of convolutional layer 4 is input to connection layer 1;
    Data-processing path 3 is set as follows: the input data passes sequentially through convolutional layer 5, pooling layer 2, residual module 3, up-sampling layer 2, and up-sampling layer 3; the data output by up-sampling layer 3 is input to convolutional layer 6 together with the data output by pooling layer 2, and the output of convolutional layer 6 is input to connection layer 1;
    Data-processing path 4 is set as follows: the input data passes sequentially through convolutional layer 7, pooling layer 3, residual module 4, up-sampling layer 4, up-sampling layer 5, and up-sampling layer 6; the data output by up-sampling layer 6 is input to convolutional layer 8 together with the data output by pooling layer 3, and the output of convolutional layer 8 is input to connection layer 1;
    Data-processing path 5 is set as follows: the input data passes sequentially through convolutional layer 9, pooling layer 4, residual module 5, up-sampling layer 7, up-sampling layer 8, up-sampling layer 9, and up-sampling layer 10; the data output by up-sampling layer 10 is input to convolutional layer 10 together with the data output by pooling layer 4, and the output of convolutional layer 10 is input to connection layer 1;
    Connection layer 1 concatenates the five paths of data and inputs them to convolutional layer 11, yielding a 3D tensor with three output channels.
  3. The three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning according to claim 2, characterized in that pooling layers 1, 2, 3, and 4 downsample the data to 1/2, 1/4, 1/8, and 1/16 of its original resolution, respectively.
  4. The three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning according to claim 1, characterized in that the specific method for generating the CNN model training data is:
    Step 2.1: Use a projector to project 37 fringe images onto the object: 12 green phase-shifting fringe images I_n^{f_R} (n = 1, ..., 12) with frequency f_R, 12 green phase-shifting fringe images I_n^{f_G} with frequency f_G, and 12 green phase-shifting fringe images I_n^{f_B} with frequency f_B, plus one composite color fringe image I_RGB whose red channel is a grayscale fringe image I_R of frequency f_R, whose green channel is a grayscale fringe image I_G of frequency f_G, and whose blue channel is a grayscale fringe image I_B of frequency f_B;
    Step 2.2: Use a color camera to capture the 37 fringe images modulated by the object, and generate one set of the input and output data required for training the CNN, specifically:
    Step 2.2.1: For the first 36 captured green fringe images, use the PS method to obtain the wrapped phases φ_R, φ_G, and φ_B at frequencies f_R, f_G, and f_B, and obtain the absolute phase Φ_G of frequency f_G with the PDM method; take the numerator term M_G, the denominator term D_G, and the absolute phase Φ_G of frequency f_G as a set of ground-truth data for the model CNN;
    Step 2.2.2: Use the grayscale images I_R, I_G, and I_B in the three channels of the captured 37th composite color fringe image I_RGB as a set of input data for the CNN;
    Step 2.3: Repeat steps 2.1 and 2.2 to generate the set number of groups of training data.
  5. The three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning according to claim 4, characterized in that the specific method for training the model CNN is:
    the grayscale images I_R, I_G, and I_B in the three channels of the 37th composite color fringe image are used as the model CNN input data, and the numerator term M_G, denominator term D_G, and absolute phase Φ_G of frequency f_G are used as the model CNN ground-truth data; the difference between the ground-truth data and the model CNN output is calculated, and the internal parameters of the CNN are iteratively optimized by backpropagation until the loss function converges.
  6. The three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning according to claim 4, characterized in that substituting the numerator and denominator terms into the arctangent function and combining the result with the low-precision absolute phase to obtain the final absolute phase information specifically comprises:
    substituting the numerator and denominator terms into the arctangent function to obtain the wrapped phase φ_G = arctan(M_G / D_G);
    combining the wrapped phase with the low-precision absolute phase to obtain the final absolute phase by the formula
    Φ_G = φ_G + 2π · Round((Φ_G^coarse − φ_G) / 2π),
    where Round denotes the rounding operation, Φ_G is the final absolute phase, φ_G is the wrapped phase, and Φ_G^coarse is the low-precision absolute phase output by the model CNN.
PCT/CN2020/115539 2020-03-19 2020-09-16 Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning WO2021184707A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010194707.6A CN111402240A (zh) 2020-03-19 2020-03-19 Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning
CN202010194707.6 2020-03-19

Publications (1)

Publication Number Publication Date
WO2021184707A1 (zh)

Family

ID=71432625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115539 WO2021184707A1 (zh) 2020-03-19 2020-09-16 Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning

Country Status (2)

Country Link
CN (1) CN111402240A (zh)
WO (1) WO2021184707A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402240A (zh) 2020-03-19 2020-07-10 南京理工大学 Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning
CN111829458B (zh) * 2020-07-20 2022-05-13 南京理工大学智能计算成像研究院有限公司 Deep-learning-based gamma nonlinear error correction method
CN111928794B (zh) * 2020-08-04 2022-03-11 北京理工大学 Deep-learning-based closed-fringe-compatible single-interferogram phase demodulation method and apparatus
CN112116616B (zh) * 2020-08-05 2022-06-07 西安交通大学 Convolutional-neural-network-based phase information extraction method, storage medium, and device
CN112833818B (zh) * 2021-01-07 2022-11-15 南京理工大学智能计算成像研究院有限公司 Single-frame fringe projection three-dimensional surface profile measurement method
CN112802084B (zh) * 2021-01-13 2023-07-07 广州大学 Deep-learning-based three-dimensional topography measurement method, system, and storage medium
CN113256800B (zh) * 2021-06-10 2021-11-26 南京理工大学 Accurate and fast large-depth-of-field three-dimensional reconstruction method based on deep learning
CN113674370A (zh) * 2021-08-02 2021-11-19 南京理工大学 Single-frame interferogram demodulation method based on convolutional neural networks
CN114777677B (zh) * 2022-03-09 2024-04-26 南京理工大学 Deep-learning-based single-frame dual-frequency multiplexed fringe projection three-dimensional surface profile measurement method
CN114543707A (zh) * 2022-04-25 2022-05-27 南京南暄禾雅科技有限公司 Phase unwrapping method for scenes with a large depth of field
CN115187649B (zh) * 2022-09-15 2022-12-30 中国科学技术大学 Three-dimensional measurement method, system, device, and storage medium resistant to strong ambient light interference
CN117496499B (zh) * 2023-12-27 2024-03-15 山东科技大学 Method and system for identifying and compensating false depth edges in 3D structured-light imaging
CN117739861B (zh) * 2024-02-20 2024-05-14 青岛科技大学 Improved single-pattern self-unwrapping fringe projection three-dimensional measurement method based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109253708A (zh) * 2018-09-29 2019-01-22 南京理工大学 Deep-learning-based fringe projection temporal phase unwrapping method
CN111402240A (zh) * 2020-03-19 2020-07-10 南京理工大学 Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHAO ZUO; TIANYANG TAO; SHIJIE FENG; LEI HUANG; ANAND ASUNDI; QIAN CHEN: "Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 Olin Library Cornell University Ithaca, NY 14853, 31 May 2017 (2017-05-31), XP080950556, DOI: 10.1016/j.optlaseng.2017.10.013 *
GE WEI: "Research on 3D Measurement Based on Color Phase-Encoded Fringe Projection", INFORMATION SCIENCE AND TECHNOLOGY, CHINESE MASTER’S THESES FULL-TEXT DATABASE, 15 March 2017 (2017-03-15), XP055851709 *
YIN WEI, CHEN QIAN, FENG SHIJIE, TAO TIANYANG, HUANG LEI, TRUSIAK MACIEJ, ASUNDI ANAND, ZUO CHAO: "Temporal phase unwrapping using deep learning", SCIENTIFIC REPORTS, vol. 9, no. 1, 27 December 2019 (2019-12-27), pages 20175, XP055851731, DOI: 10.1038/s41598-019-56222-3 *
ZHANG ZONGHUA, TOWERS CATHERINE E., TOWERS DAVID P.: "Shape and colour measurement of colourful objects by fringe projection", PROCEEDINGS OF SPIE, SPIE, 1000 20TH ST. BELLINGHAM WA 98225-6705 USA, vol. 7063, 10 August 2008 (2008-08-10), 1000 20th St. Bellingham WA 98225-6705 USA, pages 70630N, XP055851732, ISSN: 0277-786X, ISBN: 978-1-5106-4548-6, DOI: 10.1117/12.794561 *
ZUO CHAO, FENG SHIJIE, ZHANG XIANGYU, HAN JING, CHEN QIAN: "Deep Learning Based Computational Imaging: Status, Challenges, and Future", ACTA OPTICA SINICA, vol. 40, no. 1, 31 January 2020 (2020-01-31), pages 45 - 70, XP055851715, DOI: 10.3788/AOS202040.0111003 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066959A (zh) * 2021-11-25 2022-02-18 天津工业大学 Transformer-based depth estimation method for a single fringe pattern
CN114066959B (zh) * 2021-11-25 2024-05-10 天津工业大学 Transformer-based depth estimation method for a single fringe pattern
CN114754703A (zh) * 2022-04-19 2022-07-15 安徽大学 Color-grating-based three-dimensional measurement method and system
CN114754703B (zh) * 2022-04-19 2024-04-19 安徽大学 Color-grating-based three-dimensional measurement method and system
TWI816511B (zh) * 2022-08-15 2023-09-21 國立高雄大學 Image recognition method using balanced Gray codes
CN115775302A (zh) * 2023-02-13 2023-03-10 南京航空航天大学 Transformer-based three-dimensional reconstruction method for highly reflective objects
CN116105632A (zh) * 2023-04-12 2023-05-12 四川大学 Self-supervised phase unwrapping method and apparatus for structured-light three-dimensional imaging
CN116105632B (zh) * 2023-04-12 2023-06-23 四川大学 Self-supervised phase unwrapping method and apparatus for structured-light three-dimensional imaging
CN117011478A (zh) * 2023-10-07 2023-11-07 青岛科技大学 Single-image reconstruction method based on deep learning and fringe projection profilometry
CN117011478B (zh) * 2023-10-07 2023-12-22 青岛科技大学 Single-image reconstruction method based on deep learning and fringe projection profilometry

Also Published As

Publication number Publication date
CN111402240A (zh) 2020-07-10

Similar Documents

Publication Publication Date Title
WO2021184707A1 (zh) Three-dimensional surface profile measurement method using single-frame color fringe projection based on deep learning
US11906286B2 (en) Deep learning-based temporal phase unwrapping method for fringe projection profilometry
CN111351450B (zh) Deep-learning-based single-frame fringe image three-dimensional measurement method
TWI414748B (zh) Synchronous hue phase-shifting conversion method and three-dimensional topography measurement system thereof
CN114777677B (zh) Deep-learning-based single-frame dual-frequency multiplexed fringe projection three-dimensional surface profile measurement method
CN111563564A (zh) Deep-learning-based pixel-by-pixel speckle image matching method
CN110163817B (zh) Phase principal-value extraction method based on a fully convolutional neural network
CN112833818B (zh) Single-frame fringe projection three-dimensional surface profile measurement method
Dai et al. A dual-frequency fringe projection three-dimensional shape measurement system using a DLP 3D projector
CN109945802B (zh) Structured-light three-dimensional measurement method
CN113379818B (zh) Phase analysis method based on a multi-scale attention mechanism network
WO2021184686A1 (zh) Single-frame fringe analysis method based on multi-scale generative adversarial neural networks
CN108195313A (zh) High-dynamic-range three-dimensional measurement method based on the light-intensity response function
CN111879258A (zh) Dynamic high-precision three-dimensional measurement method based on the fringe-image conversion network FPTNet
CN114549746A (zh) High-precision true-color three-dimensional reconstruction method
CN115205360A (zh) Online three-dimensional outer-contour measurement and defect detection method for composite-fringe-projected steel pipes, and applications thereof
Liu et al. A novel phase unwrapping method for binocular structured light 3D reconstruction based on deep learning
CN115272065A (zh) Dynamic fringe projection three-dimensional measurement method based on super-resolution reconstruction of fringe patterns
WO2023236725A1 (zh) Three-dimensional measurement method, device, and storage medium
Wu et al. Two-neighbor-wavelength phase-shifting approach for high-accuracy rapid 3D measurement
CN112348947B (zh) Three-dimensional reconstruction method based on deep learning assisted by reference information
CN111023999B (zh) Dense point cloud generation method based on spatially encoded structured light
TWI719588B (zh) Method of two-dimensional coded fringe projection for instantaneous topography measurement
Ding et al. Recovering the absolute phase maps of three selected spatial-frequency fringes with multi-color channels
Cheng et al. Color fringe projection profilometry using geometric constraints

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926098

Country of ref document: EP

Kind code of ref document: A1