CN115615358A - A Color Crosstalk Correction Method for Color Structured Light Based on Unsupervised Deep Learning - Google Patents
A Color Crosstalk Correction Method for Color Structured Light Based on Unsupervised Deep Learning
- Publication number
- CN115615358A
- Authority
- CN
- China
- Prior art keywords
- color
- image
- phase
- layer
- deformed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 45
- 238000012937 correction Methods 0.000 title claims abstract description 35
- 238000013135 deep learning Methods 0.000 title claims abstract description 24
- 230000010363 phase shift Effects 0.000 claims abstract description 28
- 238000013528 artificial neural network Methods 0.000 claims abstract description 22
- 238000004088 simulation Methods 0.000 claims abstract description 15
- 238000012549 training Methods 0.000 claims abstract description 6
- 238000005259 measurement Methods 0.000 claims description 33
- 230000006870 function Effects 0.000 claims description 11
- 239000002131 composite material Substances 0.000 claims description 9
- 230000007246 mechanism Effects 0.000 claims description 8
- 230000003287 optical effect Effects 0.000 claims description 7
- 238000000605 extraction Methods 0.000 claims description 6
- 238000013507 mapping Methods 0.000 claims description 5
- 230000001537 neural effect Effects 0.000 claims description 5
- 230000008569 process Effects 0.000 claims description 4
- 238000012876 topography Methods 0.000 claims description 4
- 238000012545 processing Methods 0.000 claims description 3
- 239000000284 extract Substances 0.000 abstract description 2
- 238000005516 engineering process Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000000926 separation method Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2504—Calibration devices
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2518—Projection by scanning of the object
- G01B11/2527—Projection by scanning of the object with phase change by in-plane movement of the patern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Geometry (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a color crosstalk correction method for color structured light based on unsupervised deep learning. A computer generates a color phase-shifted fringe image; a deformed color fringe image modulated by the height of the measured object is captured, and three grayscale images are extracted from it. The three grayscale images are fed into three deep neural network modules for color crosstalk correction, which output three predicted corrected grayscale images; the predicted phase is then solved by the phase-shift method. Using this predicted phase, a computer back-projection simulation produces the deformed color fringe image corresponding to that phase result. The loss value between the actually captured deformed color fringe image and the back-projected deformed color fringe image is computed, and the network parameters are optimized over multiple iterations until the loss is minimized, yielding the ideal correction result. The invention requires no large body of training data with corresponding labels, does not depend on a specific data set, and significantly improves the operating efficiency and generalization performance of deep learning.
Description
Technical Field
The invention relates to the technical field of structured-light three-dimensional measurement, and in particular to a color crosstalk correction method for color structured light based on unsupervised deep learning.
Background
Structured-light three-dimensional measurement offers non-contact operation, high sensitivity, and high precision, and has been widely applied in industrial inspection, reverse engineering, intelligent manufacturing, and other industries. With the rapid development of computer vision, structured-light 3D measurement is moving toward high-speed, real-time operation. The traditional phase-shift method requires at least three structured-light images to recover one measurement result, which severely limits it in high-speed dynamic applications. The color structured-light method loads three phase-shifted images into the R, G, and B channels to form a single color structured-light image, so that a measurement result can be recovered from only one captured image, meeting the current trend toward high-speed dynamic measurement.
However, because the wavelength distribution of natural light is continuous, the light of any specific color occupies a small wavelength interval; the responses of the projector and camera to the R, G, and B colors therefore inevitably overlap, introducing a degree of error and interference when the color structured-light image is separated into its RGB channels. This phenomenon is called color crosstalk. Clearly, if the channel-separated structured-light images are not corrected for color crosstalk, the solved phase distribution of the measured object will contain serious errors, and the accurate three-dimensional surface topography cannot be recovered. High-quality correction of color crosstalk is therefore a key problem and an important research direction in color structured-light measurement.
Summary of the Invention
The purpose of the present invention is to provide a color crosstalk correction method for color structured light based on unsupervised deep learning, so as to correct color crosstalk quickly and effectively.
To achieve the above objective, the present invention adopts the following technical solution:
A color crosstalk correction method for color structured light based on unsupervised deep learning, comprising:
generating a composite color phase-shifted fringe structured-light image I_C on a computer and transmitting it to the measurement system;
projecting the composite color phase-shifted fringe structured-light image I_C onto the surface of the measured object with the projection module of the measurement system, while the color camera of the measurement system captures, from another angle, a deformed color fringe image I_color modulated by the height of the measured object;
separating the RGB channels of the deformed color fringe image I_color to extract three deformed grayscale fringe structured-light images I_R, I_G, and I_B, and feeding I_R, I_G, and I_B into three deep neural network modules for color crosstalk correction;
the three deep neural networks output three predicted crosstalk-corrected grayscale images I'_R, I'_G, and I'_B, from which the predicted phase Φ' is solved by the phase-shift method;
performing a computer back-projection simulation with the predicted phase Φ' to obtain the deformed color fringe image I_rep_color corresponding to that phase result;
constructing the loss function of the deep neural networks from the captured deformed color fringe image I_color and the back-projected deformed color fringe image I_rep_color, and computing its loss value;
when the loss value computed by the loss function reaches its minimum, the final crosstalk-corrected ideal fringe images are obtained; the corrected ideal fringe images are combined with the phase-shift method to compute the final ideal phase Φ and recover the true three-dimensional topography of the measured object.
Further, the computer-generated color structured-light image can be expressed as:

I_n(x, y) = 1/2 · [1 + cos(2πf·x + 2nπ/3)], n = 1, 2, 3

where I_C is the composite color phase-shifted fringe structured-light image. It contains three channel images, denoted I_1, I_2, and I_3, each of which is a grayscale sinusoidal phase-shifted fringe image; f is the frequency of the sinusoidal fringes, x is the horizontal coordinate index of the image, 2nπ/3 is the phase shift, and n denotes the n-th channel.
Further, the measurement system comprises a DLP projection module, a color industrial camera, and a computer. The optical axis of the DLP projection module makes a 30-degree angle with the measured object and projects the structured-light image I_C onto its surface, while the optical axis of the color industrial camera is perpendicular to the measured object for image capture.
Further, the deformed color fringe image I_color captured by the color camera, modulated by the height of the measured object, can be expressed in the standard fringe model as:

I_R(x, y) = A(x, y) + B(x, y)·cos[Φ(x, y) − 2π/3]
I_G(x, y) = A(x, y) + B(x, y)·cos[Φ(x, y)]
I_B(x, y) = A(x, y) + B(x, y)·cos[Φ(x, y) + 2π/3]

where I_R, I_G, and I_B are the RGB channel images of I_color, A and B denote the background intensity and fringe modulation, and Φ is the true phase distribution of the measured object, the unknown quantity to be solved in the three-dimensional measurement process.
Further, the three deep neural sub-network modules that process the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all U-shaped networks, each composed of an encoder and a decoder. The encoder has 5 levels from top to bottom, connected by feature-extraction downsampling so that the image size shrinks level by level; each level contains 3 convolutional layers in sequence, connected by residual blocks, and the output of the last convolutional layer of one level is the input to the first convolutional layer of the next. The decoder is symmetric to the encoder, also with 5 levels, connected by feature-extraction upsampling that restores the original image size level by level and finally produces the predicted output; the output of the last convolutional layer of a lower level is the input to the first convolutional layer of the level above. The lowest level of the encoder is connected to the lowest level of the decoder by an attention mechanism module.
Further, the weights of the three deep neural sub-networks are not shared.
Further, solving the predicted phase Φ' from the three grayscale images by the phase-shift method comprises substituting the grayscale images I'_R, I'_G, and I'_B into the three-step phase-shift formula:

Φ'(x, y) = arctan[ √3·(I'_R(x, y) − I'_B(x, y)) / (2I'_G(x, y) − I'_R(x, y) − I'_B(x, y)) ]
Further, performing the computer back-projection simulation with the predicted phase Φ' yields the deformed color fringe image I_rep_color corresponding to that phase result, according to:

I_rep_R(x, y) = 1/2 · [1 + cos(Φ'(x, y) − 2π/3)]
I_rep_G(x, y) = 1/2 · [1 + cos(Φ'(x, y))]
I_rep_B(x, y) = 1/2 · [1 + cos(Φ'(x, y) + 2π/3)]

where I_rep_R, I_rep_G, and I_rep_B are the RGB channel images of I_rep_color.
Further, the loss function of the deep neural networks can be expressed (e.g., as a weighted per-channel mean absolute error) as:

Loss = λ_R·Loss_R + λ_G·Loss_G + λ_B·Loss_B, with Loss_c = (1/HW)·Σ_x Σ_y |I_c(x, y) − I_rep_c(x, y)| for c ∈ {R, G, B}

where x and y are the horizontal and vertical coordinate indices of the image, H and W are the image height and width, and λ_R, λ_G, and λ_B are the weights of the RGB channel loss values.
Further, through network training, the three deep neural networks at the minimum loss value are saved as the final correction network; the images I_R, I_G, and I_B are fed into the correction network, the three crosstalk-corrected ideal fringe images output for the RGB channels are used to solve the phase by the phase-shift method, and finally the resulting ideal phase Φ is converted through a nonlinear mapping into the true three-dimensional topography of the measured object.
Further, the nonlinear calibration model used for the nonlinear mapping can be expressed, e.g. in the standard three-parameter phase-to-height form:

1/h(x, y) = a(x, y) + b(x, y)/Φ(x, y) + c(x, y)/Φ(x, y)²

where h is the depth information of the three-dimensional topography, and a, b, and c are calibration parameters, all determined during the calibration of the measurement system before measurement.
Compared with the prior art, the present invention has the following technical features:
1. This scheme uses an unsupervised deep learning mechanism to correct the fringe-image distortion and aliasing caused by color crosstalk in color structured light, thereby reducing the measured phase error and achieving higher-precision three-dimensional topography measurement. Compared with traditional color crosstalk correction methods, the method provided by the invention does not require complex crosstalk-matrix estimation; instead, it exploits the powerful nonlinear fitting and prediction capability of deep neural networks for rapid correction.
2. Compared with the input-to-label one-to-one mapping of traditional supervised deep learning, the unsupervised learning mechanism provided by the invention can completely discard the label data in the data set, greatly reducing the labor cost of data collection and preparation. In addition, the back-projection simulation used by the method is performed strictly according to the physical model of the measurement principle, so the method is more interpretable than traditional deep learning mechanisms and is not restricted by a training data set; it is applicable to any measurement scene and therefore has good generalization performance.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the color structured-light measurement system used in the invention;
Fig. 2 is a flow chart of the unsupervised deep learning method of the invention;
Fig. 3 is a structural diagram of the deep neural network of the invention.
Description of reference signs: 1 - color industrial camera; 2 - DLP projection module; 3 - computer; 4 - R-channel encoded grayscale fringe image; 5 - G-channel encoded grayscale fringe image; 6 - B-channel encoded grayscale fringe image; 7 - synthesized color-encoded fringe image.
Detailed Description
The invention provides a color crosstalk correction method for color structured light based on unsupervised deep learning. First, a computer generates a color phase-shifted fringe structured-light image; a color camera then captures a deformed color fringe image modulated by the height of the measured object, and the R, G, and B channels of this color image are separated to extract three grayscale images. The three grayscale images are fed into three deep neural network modules for color crosstalk correction, and the networks output three predicted corrected grayscale images, from which the predicted phase is solved by the phase-shift method. A computer back-projection simulation with this predicted phase yields the deformed color fringe image corresponding to that phase result. Finally, the loss value between the captured deformed color fringe image and the back-projected one is computed, and the network parameters are optimized over multiple iterations until the loss is minimized, giving the ideal correction result. The unsupervised deep learning mechanism of the invention requires no large training data set with corresponding labels and does not depend on a specific data set, significantly improving the operating efficiency and generalization performance of deep learning.
The specific implementation of the invention is described in further detail below with reference to the accompanying drawings.
S1, a composite color phase-shifted fringe structured-light image I_C is generated on a computer and transmitted to the measurement system:
The computer-generated color structured-light image can be expressed as:

I_n(x, y) = 1/2 · [1 + cos(2πf·x + 2nπ/3)], n = 1, 2, 3

where I_C is the composite color phase-shifted fringe structured-light image containing three channel images I_1, I_2, and I_3, each a grayscale sinusoidal phase-shifted fringe image; f is the frequency of the sinusoidal fringes, x is the horizontal coordinate index of the image, 2nπ/3 is the phase shift, and n denotes the n-th channel.
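As a concrete sketch of this step (the image size, fringe frequency, and 8-bit quantization below are illustrative assumptions, not values fixed by the patent), I_C can be generated with numpy:

```python
import numpy as np

def make_color_fringes(height=768, width=1024, freq=1/64):
    """Generate a composite color phase-shifted fringe image I_C.

    Each channel carries a sinusoidal fringe pattern shifted by 2*n*pi/3;
    freq is the fringe frequency in cycles per pixel.
    """
    x = np.arange(width)                      # horizontal coordinate index
    channels = []
    for n in (1, 2, 3):                       # n-th channel
        row = 0.5 * (1.0 + np.cos(2 * np.pi * freq * x + 2 * n * np.pi / 3))
        channels.append(np.tile(row, (height, 1)))
    I_C = np.stack(channels, axis=-1)         # H x W x 3, values in [0, 1]
    return (255 * I_C).astype(np.uint8)       # 8-bit image for the projector

I_C = make_color_fringes()
```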
S2, the projection module of the measurement system projects the composite color phase-shifted fringe structured-light image I_C onto the surface of the measured object, while the color camera of the measurement system captures, from another angle, a deformed color fringe image I_color modulated by the height of the measured object; the other angle refers to any angle different from the projection angle.
Referring to Fig. 1, the measurement system in this embodiment comprises a DLP projection module, a color industrial camera, and a computer. The optical axis of the DLP projection module makes an angle of about 30 degrees with the measured object and projects the structured-light image I_C onto its surface, while the optical axis of the color industrial camera is perpendicular to the measured object for image capture.
The deformed color fringe image I_color captured by the color camera, modulated by the height of the measured object, can be expressed in the standard fringe model as:

I_R(x, y) = A(x, y) + B(x, y)·cos[Φ(x, y) − 2π/3]
I_G(x, y) = A(x, y) + B(x, y)·cos[Φ(x, y)]
I_B(x, y) = A(x, y) + B(x, y)·cos[Φ(x, y) + 2π/3]

where I_R, I_G, and I_B (4, 5, and 6 in Fig. 1) are the RGB channel images of I_color (7 in Fig. 1), A and B denote the background intensity and fringe modulation, and Φ is the true phase distribution of the measured object, the unknown quantity to be solved in the three-dimensional measurement process. Extracting the RGB channels of the color image I_color as separate images gives the three deformed grayscale fringe structured-light images I_R, I_G, and I_B.
S3, the RGB channels of the deformed color fringe image I_color are separated to extract three deformed grayscale fringe structured-light images I_R, I_G, and I_B, which are fed into three deep neural network modules for color crosstalk correction:
The three deep neural sub-network modules that process the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all U-shaped networks, each composed of an encoder and a decoder. The encoder has 5 levels from top to bottom, connected by feature-extraction downsampling so that the image size shrinks level by level; each level contains 3 convolutional layers in sequence, connected by residual blocks, and the output of the last convolutional layer of one level is the input to the first convolutional layer of the next. The decoder is symmetric to the encoder, also with 5 levels connected by feature-extraction upsampling that restores the original image size level by level and produces the predicted output; the output of the last convolutional layer of a lower level is the input to the first convolutional layer of the level above. The lowest level of the encoder is connected to the lowest level of the decoder by an attention mechanism module, which increases the network's attention to fringe edge features; in addition, corresponding encoder and decoder levels are linked by skip connections, which directly pass feature maps of the same size and speed up learning. A compact sketch of one such sub-network is given below.
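A PyTorch sketch of one sub-network follows. The channel widths, 3×3 kernels, max-pool downsampling, transposed-convolution upsampling, and the channel-attention form are assumptions; the patent fixes only the level counts, the residual blocks, the skip connections, and the bottleneck attention. Input height and width must be divisible by 16.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """One U-Net level: three 3x3 conv layers joined by residual connections."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

    def forward(self, x):
        x = self.conv1(x)
        x = x + self.conv2(x)    # residual link between successive conv layers
        return x + self.conv3(x)

class ChannelAttention(nn.Module):
    """Simple channel attention joining the encoder and decoder bottoms."""
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> weights
        return x * w[:, :, None, None]

class CrosstalkUNet(nn.Module):
    """5-level U-shaped correction network for one grayscale fringe image."""
    def __init__(self, feats=(16, 32, 64, 128, 256)):
        super().__init__()
        chans = (1,) + feats
        self.enc = nn.ModuleList(ResidualConvBlock(chans[i], chans[i + 1])
                                 for i in range(5))
        self.down = nn.MaxPool2d(2)              # feature-extraction downsampling
        self.attn = ChannelAttention(feats[-1])  # bottleneck attention module
        self.up = nn.ModuleList(nn.ConvTranspose2d(feats[i], feats[i - 1], 2, 2)
                                for i in range(4, 0, -1))
        self.dec = nn.ModuleList(ResidualConvBlock(2 * feats[i - 1], feats[i - 1])
                                 for i in range(4, 0, -1))
        self.head = nn.Conv2d(feats[0], 1, 1)    # predicted corrected image

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < 4:
                skips.append(x)                  # skip connection to decoder
                x = self.down(x)
        x = self.attn(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # upsample, fuse skip
        return self.head(x)

# Three such networks with unshared weights, one per RGB channel:
nets = [CrosstalkUNet() for _ in range(3)]
```

Max-pooling and transposed convolutions are one common realization of the feature-extraction down/upsampling the patent describes; strided convolutions would serve equally well.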
Although the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all sinusoidal fringe patterns, the magnitude and characteristics of the crosstalk affecting each image differ, so the weights of the three deep neural sub-networks are not shared.
S4, the three deep neural networks output three predicted crosstalk-corrected grayscale images I'_R, I'_G, and I'_B, from which the predicted phase Φ' is solved by the phase-shift method: substituting the grayscale images I'_R, I'_G, and I'_B into the three-step phase-shift formula gives

Φ'(x, y) = arctan[ √3·(I'_R(x, y) − I'_B(x, y)) / (2I'_G(x, y) − I'_R(x, y) − I'_B(x, y)) ]
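In code, the wrapped predicted phase follows directly; np.arctan2 is used so that Φ' covers the full (−π, π] range, which a plain arctan of the quotient would not:

```python
import numpy as np

def solve_phase(I_R, I_G, I_B):
    """Three-step phase-shift solution for the wrapped predicted phase."""
    return np.arctan2(np.sqrt(3.0) * (I_R - I_B), 2.0 * I_G - I_R - I_B)
```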
S5, a computer back-projection simulation is performed with the predicted phase Φ' to obtain the deformed color fringe image I_rep_color corresponding to that phase result; this step comprises:
The principle of the computer back-projection simulation is to treat the solved predicted phase as a known quantity and substitute it for the phase term of the color fringe structured-light generation formula, giving a deformed color fringe image I_rep_color modulated by that predicted phase:

I_rep_R(x, y) = 1/2 · [1 + cos(Φ'(x, y) − 2π/3)]
I_rep_G(x, y) = 1/2 · [1 + cos(Φ'(x, y))]
I_rep_B(x, y) = 1/2 · [1 + cos(Φ'(x, y) + 2π/3)]

where I_rep_R, I_rep_G, and I_rep_B are the RGB channel images of I_rep_color. In theory, if the corrected grayscale images predicted by the networks are accurate enough, the back-projected deformed color fringe structured-light image I_rep_color will be almost identical to the actually captured deformed color fringe image I_color.
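A matching numpy sketch; the −2π/3, 0, +2π/3 shift assignment for R, G, B mirrors the capture model above and is a convention choice:

```python
import numpy as np

def back_project(phi):
    """Re-render the deformed color fringe image from the predicted phase."""
    shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)   # R, G, B phase shifts
    return np.stack([0.5 * (1.0 + np.cos(phi + s)) for s in shifts], axis=-1)
```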
S6, the loss function of the deep neural networks is constructed from the captured deformed color fringe image I_color and the back-projected deformed color fringe image I_rep_color, and its loss value is computed; this step comprises:
A constraint between the back-projected deformed color fringe image I_rep_color and the captured deformed color fringe image I_color is used to optimize and adjust the parameters of the deep neural networks, driving them to output higher-quality correction results. The loss function of the deep neural networks is defined, e.g. as a mean absolute pixel error:

Loss = (1/HW)·Σ_x Σ_y |I_color(x, y) − I_rep_color(x, y)|

where x and y are the horizontal and vertical coordinate indices of the image, and H and W are the image height and width.
In the actual computation, the loss is evaluated separately on the RGB channels of the two color fringe images and the results are summed. The loss function can thus be rewritten as:

Loss = λ_R·Loss_R + λ_G·Loss_G + λ_B·Loss_B

where λ_R, λ_G, and λ_B are the weights of the RGB channel loss values, set according to the specific performance of the network.
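A PyTorch sketch of this weighted per-channel loss; the L1 form and the equal default weights are assumptions, since the patent leaves the norm unspecified and the weights to be tuned:

```python
import torch

def crosstalk_loss(I_color, I_rep_color, weights=(1.0, 1.0, 1.0)):
    """Weighted per-channel loss between captured and back-projected images.

    I_color, I_rep_color: tensors of shape (N, 3, H, W).
    """
    loss = 0.0
    for c, lam in enumerate(weights):   # c = 0, 1, 2 for R, G, B
        loss = loss + lam * torch.mean(torch.abs(I_color[:, c] - I_rep_color[:, c]))
    return loss
```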
S7, when the loss value computed by the loss function reaches its minimum, the final crosstalk-corrected ideal fringe images are obtained; the corrected ideal fringe images are combined with the phase-shift method to compute the final ideal phase Φ and recover the true three-dimensional topography of the measured object; this step comprises:
Through network training, the three deep neural networks at the minimum loss value are saved as the final correction network. The images I_R, I_G, and I_B are fed into the correction network, the three crosstalk-corrected ideal fringe images output for the RGB channels are used to solve the phase by the phase-shift method of S4, and finally the resulting ideal phase Φ is mapped to the true three-dimensional topography of the measured object through a nonlinear calibration model, e.g. of the standard three-parameter phase-to-height form:

1/h(x, y) = a(x, y) + b(x, y)/Φ(x, y) + c(x, y)/Φ(x, y)²

where h is the depth information of the three-dimensional topography, and a, b, and c are calibration parameters, all determined during the calibration of the measurement system before measurement.
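A sketch of this mapping under the reciprocal three-parameter form assumed above (the exact calibration formula in the original equation image is not recoverable, so this form is an assumption); Φ here is the unwrapped ideal phase, and a, b, c are per-pixel calibration maps:

```python
import numpy as np

def phase_to_height(phi, a, b, c):
    """Map the unwrapped ideal phase to depth, assuming the reciprocal
    three-parameter calibration model 1/h = a + b/phi + c/phi**2.
    """
    return 1.0 / (a + b / phi + c / phi**2)
```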
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the scope of protection of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211247398.XA CN115615358A (en) | 2022-10-12 | 2022-10-12 | A Color Crosstalk Correction Method for Color Structured Light Based on Unsupervised Deep Learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211247398.XA CN115615358A (en) | 2022-10-12 | 2022-10-12 | A Color Crosstalk Correction Method for Color Structured Light Based on Unsupervised Deep Learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115615358A (en) | 2023-01-17 |
Family
ID=84862636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211247398.XA Pending CN115615358A (en) | 2022-10-12 | 2022-10-12 | A Color Crosstalk Correction Method for Color Structured Light Based on Unsupervised Deep Learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115615358A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116105632A (en) * | 2023-04-12 | 2023-05-12 | 四川大学 | A self-supervised phase unwrapping method and device for structured light three-dimensional imaging |
CN118293825A (en) * | 2024-04-03 | 2024-07-05 | 北京微云智联科技有限公司 | A phase compensation method and device for sinusoidal grating projection system |
- 2022-10-12: CN application CN202211247398.XA filed; published as CN115615358A (en), status active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111402240A (en) | A 3D surface measurement method based on single-frame color fringe projection based on deep learning | |
CN100443854C (en) | Phase Unwrapping Method Based on Gray Code in 3D Scanning System | |
CN103530880B (en) | Based on the camera marking method of projection Gaussian network pattern | |
CN115615358A (en) | A Color Crosstalk Correction Method for Color Structured Light Based on Unsupervised Deep Learning | |
CN113379818B (en) | A Phase Resolution Method Based on Multiscale Attention Mechanism Network | |
CN103697815B (en) | Mixing structural light three-dimensional information getting method based on phase code | |
CN114777677B (en) | Single-frame double-frequency multiplexing stripe projection three-dimensional surface type measurement method based on deep learning | |
CN112697071B (en) | Three-dimensional measurement method for color structured light projection based on DenseNet shadow compensation | |
CN105046743A (en) | Super-high-resolution three dimensional reconstruction method based on global variation technology | |
CN114549307B (en) | High-precision point cloud color reconstruction method based on low-resolution image | |
CN104197861A (en) | Three-dimensional digital imaging method based on structured light gray level vector | |
CN114663496B (en) | A Monocular Visual Odometry Method Based on Kalman Pose Estimation Network | |
CN102800127A (en) | Light stream optimization based three-dimensional reconstruction method and device | |
CN105180904A (en) | High-speed moving target position and posture measurement method based on coding structured light | |
CN111879258A (en) | Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet | |
CN102519395B (en) | Color response calibration method in colored structure light three-dimensional measurement | |
CN100449258C (en) | Real-time 3D Vision System Based on 2D Color Light Encoding | |
CN101750029A (en) | Characteristic point three-dimensional reconstruction method based on trifocal tensor | |
CN117011478B (en) | Single image reconstruction method based on deep learning and stripe projection profilometry | |
CN115272065A (en) | Dynamic fringe projection 3D measurement method based on fringe image super-resolution reconstruction | |
CN110608687A (en) | A Color-coded Grating Crosstalk Compensation Method Based on Projection Plane | |
CN111640084A (en) | High-speed pixel matching method based on LK optical flow | |
CN106468562B (en) | A Radial Chromatic Aberration Calibration Method for Color Cameras Based on Absolute Phase | |
CN113884027B (en) | Geometric constraint phase unwrapping method based on self-supervision deep learning | |
CN115482268A (en) | High-precision three-dimensional shape measurement method and system based on speckle matching network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||