CN115615358A - Color crosstalk correction method for color structured light based on unsupervised deep learning


Info

Publication number
CN115615358A
Authority
CN
China
Prior art keywords
color
image
phase
deformed
layer
Prior art date
Legal status
Pending
Application number
CN202211247398.XA
Other languages
Chinese (zh)
Inventor
何昭水
谈季
苏文青
林志洁
董博
白玉磊
谢胜利
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202211247398.XA
Publication of CN115615358A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2504: Calibration devices
    • G01B 11/2518: Projection by scanning of the object
    • G01B 11/2527: Projection by scanning of the object with phase change by in-plane movement of the pattern
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/088: Non-supervised learning, e.g. competitive learning


Abstract

The invention discloses a color crosstalk correction method for color structured light based on unsupervised deep learning. A color phase-shift fringe image is generated by computer, a deformed color fringe image modulated by the height of the measured object is acquired, and three grayscale images are extracted from it. The three grayscale images are input into three deep neural network modules for color crosstalk correction, which output three predicted corrected grayscale images; a predicted phase is solved by the phase-shift method, and a computer back-projection simulation driven by the predicted phase yields the deformed color fringe image corresponding to that phase result. A loss value is then computed between the actually acquired deformed color fringe image and the back-projected simulated one, and the network parameters are iteratively optimized until the loss value is minimal, giving the ideal correction result. The invention requires no large volume of training data and corresponding labels, does not depend on a specific data set, and markedly improves the operating efficiency and generalization performance of deep learning.

Description

Color crosstalk correction method for color structured light based on unsupervised deep learning
Technical Field
The invention relates to the technical field of structured-light three-dimensional measurement, and in particular to a color crosstalk correction method for color structured light based on unsupervised deep learning.
Background
Structured-light three-dimensional measurement is non-contact, highly sensitive, and highly accurate, and is widely used in industrial inspection, reverse engineering, intelligent manufacturing, and other industries. With the rapid development of computer vision, structured-light three-dimensional measurement is moving toward high-speed and real-time measurement. Conventional phase-shift methods are severely limited in high-speed dynamic measurement applications because at least three structured-light images are required to recover one measurement. The color structured-light method loads three phase-shift images into the R, G, and B channels of a single color structured-light image, so that a measurement result can be recovered from only one acquired image, meeting the needs of current high-speed dynamic measurement applications.
However, because the wavelength distribution of natural light is continuous, light of a particular color occupies a small range of wavelengths rather than a single wavelength; the responses of the projector and camera to the three RGB colors therefore inevitably overlap, introducing a degree of error and interference when the RGB channels of the color structured light are separated, a phenomenon known as color crosstalk. Clearly, if the channel-separated structured-light images are not corrected for color crosstalk, the solved phase distribution of the measured object contains serious errors, and the accurate three-dimensional surface shape cannot be recovered. How to correct color crosstalk with high quality has therefore become a key difficulty and an important research direction in color structured-light measurement.
Disclosure of Invention
The invention aims to provide a color crosstalk correction method for color structured light based on unsupervised deep learning, so as to realize color crosstalk correction of color structured light quickly and effectively.
To achieve this task, the invention adopts the following technical scheme:
A color crosstalk correction method for color structured light based on unsupervised deep learning, comprising the following steps:
generating a composite color phase-shift fringe structured-light image I_C by computer and transmitting it to a measurement system;
projecting the composite color phase-shift fringe structured-light image I_C onto the surface of the measured object using the projection module of the measurement system, while a color camera of the measurement system acquires, from another angle, a deformed color fringe image I_color modulated by the height of the measured object;
separating the R, G, and B channels of the deformed color fringe image I_color, extracting three deformed grayscale fringe structured-light images I_R, I_G, and I_B, and inputting the images I_R, I_G, and I_B into three deep neural network modules for color crosstalk correction, respectively;
the three deep neural networks outputting three predicted color-crosstalk-corrected grayscale images I′_R, I′_G, and I′_B, and solving a predicted phase Φ′ from the three grayscale images by the phase-shift method;
performing a computer back-projection simulation with the predicted phase Φ′ to obtain the deformed color fringe image I_rep_color corresponding to that phase result;
constructing a loss function of the deep neural networks from the acquired deformed color fringe image I_color and the back-projected simulated deformed color fringe image I_rep_color, and computing the loss value of the loss function;
when the loss value computed by the loss function reaches its minimum, obtaining the final crosstalk-corrected ideal fringe images; and computing the final ideal phase Φ from the corrected ideal fringe images by the phase-shift method, recovering the true three-dimensional shape of the measured object.
Further, the computer-generated color structured-light image may be represented as:

I_n(x) = A + B·cos(2πf·x + 2nπ/3),  n = 1, 2, 3

where I_C is the composite color phase-shift fringe structured-light image whose three channel images, loaded into the R, G, and B channels, are denoted I_1, I_2, and I_3, each being a grayscale sinusoidal phase-shift fringe image; A and B denote the background intensity and fringe modulation, f the frequency of the sinusoidal fringe, x the lateral coordinate index of the image, 2nπ/3 the phase-shift amount, and n the channel index.
Further, the measurement system includes a DLP projection module, a color industrial camera, and a computer. The optical axis of the DLP projection module makes a 30-degree angle with the measured object and projects the structured-light image I_C onto the surface of the measured object; the optical axis of the color industrial camera is perpendicular to the measured object for image acquisition.
Further, the deformed color fringe image I_color acquired by the color camera and modulated by the height of the measured object can be expressed as:

I_n(x, y) = A(x, y) + B(x, y)·cos(2πf·x + Φ(x, y) + 2nπ/3),  n = 1, 2, 3

where (I_1, I_2, I_3) = (I_R, I_G, I_B) are the RGB three-channel images of I_color, and Φ represents the true phase distribution of the measured object, the unknown to be solved in the three-dimensional measurement process.
Further, the three deep neural sub-network modules responsible for processing the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all U-shaped networks, each consisting of an encoder and a decoder. The encoder has 5 layers from top to bottom; the layers are connected by feature extraction and downsampling to reduce the image size layer by layer, each layer contains 3 sequentially arranged convolution layers connected by residual blocks, and the output of the last convolution layer of one layer serves as the input of the first convolution layer of the next. The decoder is symmetric to the encoder, also with 5 layers; the layers are connected by feature extraction and upsampling to restore the original image size layer by layer, finally producing the predicted output, with the output of the last convolution layer of each lower layer serving as the input of the first convolution layer of the layer above. The lowest layer of the encoder and the lowest layer of the decoder are connected through an attention mechanism module.
Further, the weights of the three deep neural sub-networks are not shared.
Further, solving the predicted phase Φ′ from the three grayscale images by the phase-shift method includes:
substituting the grayscale images I′_R, I′_G, and I′_B into the phase-shift formula to solve the predicted phase Φ′:

Φ′ = arctan[ √3·(I′_G − I′_R) / (2I′_B − I′_R − I′_G) ]
further, the computer inverse projection simulation is carried out by utilizing the predicted phase phi', and a deformed color stripe image I corresponding to the phase result can be obtained rep_color The formula is as follows:
Figure BDA0003887244410000032
wherein, I rep_R ,I rep_R ,I rep_R Is I rep_color RGB three-channel image of (1).
Further, the loss function of the deep neural networks is expressed as:

Loss = Σ_{c ∈ {R,G,B}} λ_c · (1/(H·W)) · Σ_{x=1}^{W} Σ_{y=1}^{H} [ I_rep,c(x, y) − I_c(x, y) ]²

where x and y are the horizontal and vertical coordinate indices of the image, H and W are the image height and width, and λ_R, λ_G, and λ_B are the weights of the three RGB channel loss values.
Further, the three deep neural networks with the minimum loss value are saved by network training as the final correction network; the images I_R, I_G, and I_B are input to the correction network, the three crosstalk-corrected ideal fringe images corresponding to the RGB channels output by the correction network are used to solve the phase by the phase-shift method, and the resulting ideal phase Φ is finally mapped nonlinearly to obtain the true three-dimensional shape of the measured object.
Further, the nonlinear calibration model adopted for the nonlinear mapping is expressed as:

1/h(x, y) = a(x, y) + b(x, y)/Φ(x, y) + c(x, y)/Φ²(x, y)

where h denotes the three-dimensional depth information, and a, b, and c are calibration parameters determined in the measurement-system calibration step before measurement.
Compared with the prior art, the invention has the following technical characteristics:
1. The scheme uses an unsupervised deep learning mechanism to correct the fringe structured-light image distortion and aliasing caused by color crosstalk in color structured light, thereby reducing the measured phase error and achieving higher-precision three-dimensional shape measurement. Compared with conventional color crosstalk correction methods, the proposed method needs no complex crosstalk matrix estimation and performs rapid correction using the strong nonlinear fitting and prediction capability of deep neural networks.
2. Compared with the conventional deep learning mechanism that maps inputs to labels one by one, the proposed unsupervised learning mechanism can completely dispense with label data in the data set, greatly reducing the labor cost of data set acquisition and production in deep learning. In addition, the back-projection simulation result used by the proposed method is simulated strictly according to the physical model of the measurement principle; compared with the conventional deep learning mechanism, the method is therefore more interpretable, is not limited by a training data set, is applicable to any measurement scene, and has good generalization performance.
Drawings
FIG. 1 is a schematic view of a color structured light measurement system used in the present invention
FIG. 2 is a flow chart of the unsupervised deep learning method of the present invention
FIG. 3 is a diagram of the deep neural network structure of the present invention
Description of reference numerals: 1: color industrial camera; 2: DLP projection module; 3: computer; 4: R-channel encoded grayscale fringe image; 5: G-channel encoded grayscale fringe image; 6: B-channel encoded grayscale fringe image; 7: composite color-encoded fringe image.
Detailed Description
The invention provides a color crosstalk correction method for color structured light based on unsupervised deep learning. First, a color phase-shift fringe structured-light image is generated by computer; a color camera then acquires the deformed color fringe image modulated by the height of the measured object, its R, G, and B channels are separated, and three grayscale images are extracted. The three grayscale images are input into three deep neural network modules for color crosstalk correction, the networks output three predicted corrected grayscale images, and a predicted phase is solved from them by the phase-shift method. A computer back-projection simulation with the predicted phase yields the deformed color fringe image corresponding to that phase result. Finally, the loss value between the acquired deformed color fringe image and the back-projected simulated one is computed, and the network parameters are iteratively optimized until the loss value is minimal, giving the ideal correction result. This unsupervised deep learning mechanism requires no large volume of training data and corresponding labels, does not depend on a specific data set, and markedly improves the operating efficiency and generalization performance of deep learning.
The following describes the specific implementation process of the present invention in detail with reference to the accompanying drawings.
S1. Generate a composite color phase-shift fringe structured-light image I_C by computer and transmit it to the measurement system:
The computer-generated color structured-light image may be represented as:

I_n(x) = A + B·cos(2πf·x + 2nπ/3),  n = 1, 2, 3

where I_C is the composite color phase-shift fringe structured-light image whose three channel images, loaded into the R, G, and B channels, are denoted I_1, I_2, and I_3, each being a grayscale sinusoidal phase-shift fringe image; A and B denote the background intensity and fringe modulation, f the frequency of the sinusoidal fringe, x the lateral coordinate index of the image, 2nπ/3 the phase-shift amount, and n the channel index.
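As a non-limiting illustration, the following NumPy sketch generates such a composite color phase-shift fringe image; the normalization A = B = 0.5, the image size, and the fringe frequency are illustrative assumptions, not values fixed by the invention.

```python
import numpy as np

def composite_color_fringes(height, width, f=1 / 32.0):
    """Generate a composite color phase-shift fringe image I_C.

    Each RGB channel carries one sinusoidal fringe with a 2*n*pi/3 phase
    shift (n = 1, 2, 3). A = B = 0.5 normalizes intensities to [0, 1];
    both constants and the frequency f are illustrative assumptions.
    """
    x = np.arange(width)                       # lateral coordinate index
    channels = []
    for n in (1, 2, 3):                        # R, G, B channels
        fringe = 0.5 + 0.5 * np.cos(2 * np.pi * f * x + 2 * n * np.pi / 3)
        channels.append(np.tile(fringe, (height, 1)))
    return np.stack(channels, axis=-1)         # (H, W, 3) array in [0, 1]

I_C = composite_color_fringes(768, 1024)       # size is illustrative
```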
S2. Project the composite color phase-shift fringe structured-light image I_C onto the surface of the measured object with the projection module of the measurement system, while the color camera of the measurement system acquires, from another angle, the deformed color fringe image I_color modulated by the height of the measured object; the other angle refers to an angle other than the projection angle.
Referring to FIG. 1, the measurement system in this embodiment includes a DLP projection module, a color industrial camera, and a computer. The optical axis of the DLP projection module makes an angle of about 30 degrees with the measured object and projects the structured-light image I_C onto the surface of the measured object; the optical axis of the color industrial camera is perpendicular to the measured object for image acquisition.
The deformed color fringe image I_color acquired by the color camera and modulated by the height of the measured object can be expressed as:

I_n(x, y) = A(x, y) + B(x, y)·cos(2πf·x + Φ(x, y) + 2nπ/3),  n = 1, 2, 3

where I_R, I_G, and I_B (4, 5, and 6 in FIG. 1) are the RGB three-channel images of I_color (7 in FIG. 1), and Φ represents the true phase distribution of the measured object, the unknown to be solved in the three-dimensional measurement process. Extracting the R, G, and B channels of the color image I_color as independent images yields the three deformed grayscale fringe structured-light images I_R, I_G, and I_B.
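For illustration, the channel separation can be sketched with OpenCV as follows; the file name is hypothetical, and OpenCV's BGR channel order is accounted for.

```python
import cv2

# Load the captured frame (file name is illustrative); OpenCV loads BGR.
I_color = cv2.imread("deformed_color_fringes.png").astype("float32") / 255.0
I_B, I_G, I_R = cv2.split(I_color)   # three deformed grayscale fringe images
```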
S3. Separate the R, G, and B channels of the deformed color fringe image I_color, extract the three deformed grayscale fringe structured-light images I_R, I_G, and I_B, and input the images I_R, I_G, and I_B into three deep neural network modules for color crosstalk correction, respectively:
The three deep neural sub-network modules responsible for processing the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all U-shaped networks, each consisting of an encoder and a decoder. The encoder has 5 layers from top to bottom; the layers are connected by feature extraction and downsampling to reduce the image size layer by layer, each layer contains 3 sequentially arranged convolution layers connected by residual blocks, and the output of the last convolution layer of one layer serves as the input of the first convolution layer of the next. The decoder is symmetric to the encoder, also with 5 layers; the layers are connected by feature extraction and upsampling to restore the original image size layer by layer, finally producing the predicted output, with the output of the last convolution layer of each lower layer serving as the input of the first convolution layer of the layer above. The lowest layer of the encoder is connected to the lowest layer of the decoder through an attention mechanism module, which improves the network's attention to fringe edge features; in addition, corresponding encoder and decoder layers are connected by skip connections that pass feature maps of the same size directly, improving the learning speed.
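A minimal PyTorch sketch of one such sub-network follows. The channel widths, the use of max pooling and transposed convolutions, and the channel-attention form of the bottleneck module are assumptions; the patent does not fix these details.

```python
import torch
import torch.nn as nn

class ResConvBlock(nn.Module):
    """Three stacked 3x3 convolutions joined by a residual connection."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1),
        )
        self.skip = nn.Conv2d(c_in, c_out, 1)     # match channel counts

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class CorrectionUNet(nn.Module):
    """5-level U-shaped correction sub-network (one per RGB channel)."""
    def __init__(self, widths=(32, 64, 128, 256, 512)):
        super().__init__()
        self.enc, c_prev = nn.ModuleList(), 1
        for c in widths:
            self.enc.append(ResConvBlock(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)
        # Channel-attention bottleneck: a stand-in for the patent's
        # attention module, whose exact form is not specified.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(widths[-1], widths[-1], 1), nn.Sigmoid(),
        )
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for c_hi, c_lo in zip(widths[::-1], widths[::-1][1:]):
            self.up.append(nn.ConvTranspose2d(c_hi, c_lo, 2, stride=2))
            self.dec.append(ResConvBlock(2 * c_lo, c_lo))
        self.head = nn.Conv2d(widths[0], 1, 1)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)                   # skip connection source
                x = self.pool(x)
        x = x * self.attn(x)                      # attention at the bottom
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))        # corrected fringe in [0, 1]
```

Three such networks are instantiated, one per RGB channel, without weight sharing.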
Although the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all sinusoidal fringe patterns, the magnitude and characteristics of the crosstalk component differ between the images, so the weights of the three deep neural sub-networks are not shared.
S4. The three deep neural networks output three predicted color-crosstalk-corrected grayscale images I′_R, I′_G, and I′_B, and the predicted phase Φ′ is solved from the three grayscale images by the phase-shift method:
substituting the grayscale images I′_R, I′_G, and I′_B into the phase-shift formula gives the predicted phase Φ′:

Φ′ = arctan[ √3·(I′_G − I′_R) / (2I′_B − I′_R − I′_G) ]
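A sketch of this step, using atan2 to retain the full quadrant information, might read as follows; the sign convention follows the 2nπ/3 shift assignment assumed above.

```python
import torch

def phase_from_three_step(I_R, I_G, I_B):
    """Wrapped predicted phase from the three corrected channel images.

    Derived from the 2*n*pi/3 shifts (n = 1, 2, 3 for R, G, B); atan2
    keeps the full (-pi, pi] range instead of arctan's (-pi/2, pi/2).
    """
    num = (3.0 ** 0.5) * (I_G - I_R)
    den = 2.0 * I_B - I_R - I_G
    return torch.atan2(num, den)
```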
s5, carrying out computer back projection simulation by utilizing the predicted phase phi', and obtaining a deformed color stripe image I corresponding to the phase result rep_color (ii) a The method comprises the following steps:
the principle of computer back projection simulation is that the predicted phase obtained by solving is regarded as a known quantity, and the known quantity is added into a generation formula of color stripe structured light to obtain a deformed color stripe image I modulated by the predicted phase rep_color The generation formula is as follows:
Figure BDA0003887244410000062
wherein, I rep_R ,I rep_R ,I rep_R Is shown as I rep_color Three channel images of RGB (red, green, blue); theoretically, if the corrected gray-scale image predicted by the network is accurate enough, the deformed color stripe structure light image I obtained by back projection simulation rep_color Will be compared with the actually acquired deformed color stripe image I color Are almost identical.
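A differentiable sketch of the back projection, reusing the A = B = 0.5 normalization assumed in the generation sketch above, might be:

```python
import torch

def back_project(phi_pred):
    """Re-render the deformed color fringes from the predicted phase.

    phi_pred: predicted wrapped phase of shape (N, 1, H, W). Because the
    operation is built from differentiable torch ops, loss gradients can
    flow back through it into the correction networks.
    """
    chans = [0.5 + 0.5 * torch.cos(phi_pred + 2 * n * torch.pi / 3)
             for n in (1, 2, 3)]           # R, G, B channels
    return torch.cat(chans, dim=1)         # (N, 3, H, W)
```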
S6. Construct the loss function of the deep neural networks from the acquired deformed color fringe image I_color and the back-projected simulated deformed color fringe image I_rep_color, and compute its loss value. Specifically:
A constraint relation is established between the back-projected simulated deformed color fringe image I_rep_color and the acquired deformed color fringe image I_color to optimize the parameters of the deep neural networks and drive them to output higher-quality correction results. The loss function of the deep neural networks is defined as:

Loss = (1/(H·W)) · Σ_{x=1}^{W} Σ_{y=1}^{H} [ I_rep_color(x, y) − I_color(x, y) ]²

where x and y are the horizontal and vertical coordinate indices of the image, and H and W are the image height and width.
In the actual computation, the loss is computed separately over the R, G, and B channels of the two color fringe images and then summed. The loss function can thus be rewritten as:

Loss = Σ_{c ∈ {R,G,B}} λ_c · (1/(H·W)) · Σ_{x=1}^{W} Σ_{y=1}^{H} [ I_rep,c(x, y) − I_c(x, y) ]²

where λ_R, λ_G, and λ_B are the weights of the three RGB channel loss values, set according to the specific performance of the network.
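A sketch of the weighted per-channel loss, assuming a mean-squared-error form, is:

```python
import torch

def crosstalk_loss(I_rep, I_cap, weights=(1.0, 1.0, 1.0)):
    """Weighted per-channel loss between back-projected and captured images.

    I_rep, I_cap: (N, 3, H, W) tensors in R, G, B order; weights are the
    channel weights lambda_R, lambda_G, lambda_B (values are tunable).
    """
    lam = torch.tensor(weights, dtype=I_rep.dtype, device=I_rep.device)
    per_channel = ((I_rep - I_cap) ** 2).mean(dim=(0, 2, 3))  # shape (3,)
    return (lam * per_channel).sum()
```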
S7. When the loss value computed by the loss function reaches its minimum, obtain the final crosstalk-corrected ideal fringe images; compute the final ideal phase Φ from the corrected ideal fringe images by the phase-shift method, and recover the true three-dimensional shape of the measured object. Specifically:
Through network training, the three deep neural networks with the minimum loss value are saved as the final correction network; the images I_R, I_G, and I_B are input to the correction network, the three crosstalk-corrected ideal fringe images corresponding to the RGB channels output by the correction network are used to solve the phase with the phase-shift method of S4, and the resulting ideal phase Φ is mapped through the nonlinear calibration model to obtain the true three-dimensional shape of the measured object. An illustrative end-to-end training sketch follows; the nonlinear calibration model is given after it.
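Tying the sketches above together, a training loop could look like the following; the optimizer choice, learning rate, and iteration budget are assumptions for illustration.

```python
import copy
import itertools
import torch

# Channels from the separation step as (1, 1, H, W) float tensors (H and W
# divisible by 16 for the 5-level network), plus the captured color image.
to_t = lambda a: torch.from_numpy(a).float()[None, None]
I_R_t, I_G_t, I_B_t = (to_t(a) for a in (I_R, I_G, I_B))
I_color_t = torch.cat([I_R_t, I_G_t, I_B_t], dim=1)   # (1, 3, H, W)

nets = [CorrectionUNet() for _ in range(3)]           # weights not shared
opt = torch.optim.Adam(
    itertools.chain(*(n.parameters() for n in nets)), lr=1e-4)

best_loss, best_states = float("inf"), None
for step in range(2000):                              # budget is illustrative
    corrected = [net(ch) for net, ch in zip(nets, (I_R_t, I_G_t, I_B_t))]
    phi_pred = phase_from_three_step(*corrected)      # (1, 1, H, W)
    loss = crosstalk_loss(back_project(phi_pred), I_color_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if loss.item() < best_loss:                       # keep the best networks
        best_loss = loss.item()
        best_states = [copy.deepcopy(n.state_dict()) for n in nets]
```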
The nonlinear calibration model is expressed as:

1/h(x, y) = a(x, y) + b(x, y)/Φ(x, y) + c(x, y)/Φ²(x, y)

where h denotes the three-dimensional depth information, and a, b, and c are calibration parameters determined in the measurement-system calibration step before measurement.
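A one-line sketch of this mapping, assuming the three-parameter form reconstructed above and an unwrapped, non-zero phase map, is:

```python
import numpy as np

def phase_to_height(phi, a, b, c):
    """Depth from the ideal unwrapped phase via 1/h = a + b/phi + c/phi**2.

    a, b, c are per-pixel (or scalar) calibration parameters obtained in
    the pre-measurement calibration step; phi must be non-zero.
    """
    return 1.0 / (a + b / phi + c / (phi ** 2))
```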
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application and should be construed as being included in the present application.

Claims (10)

1. A color crosstalk correction method for color structured light based on unsupervised deep learning, characterized by comprising the following steps:
generating a composite color phase-shift fringe structured-light image I_C by computer and transmitting it to a measurement system;
projecting the composite color phase-shift fringe structured-light image I_C onto the surface of the measured object using a projection module of the measurement system, while a color camera of the measurement system acquires, from another angle, a deformed color fringe image I_color modulated by the height of the measured object;
separating the R, G, and B channels of the deformed color fringe image I_color, extracting three deformed grayscale fringe structured-light images I_R, I_G, and I_B, and inputting the images I_R, I_G, and I_B into three deep neural network modules for color crosstalk correction, respectively;
the three deep neural networks outputting three predicted color-crosstalk-corrected grayscale images I′_R, I′_G, and I′_B, and solving a predicted phase Φ′ from the three grayscale images by a phase-shift method;
performing a computer back-projection simulation with the predicted phase Φ′ to obtain a deformed color fringe image I_rep_color corresponding to the phase result;
constructing a loss function of the deep neural networks from the acquired deformed color fringe image I_color and the back-projected simulated deformed color fringe image I_rep_color, and computing the loss value of the loss function;
when the loss value computed by the loss function reaches its minimum, obtaining the final crosstalk-corrected ideal fringe images; and computing a final ideal phase Φ from the corrected ideal fringe images by the phase-shift method, recovering the true three-dimensional shape of the measured object.
2. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the computer-generated color structured-light image is representable as:

I_n(x) = A + B·cos(2πf·x + 2nπ/3),  n = 1, 2, 3

where I_C is the composite color phase-shift fringe structured-light image whose three channel images are denoted I_1, I_2, and I_3, each being a grayscale sinusoidal phase-shift fringe image; A and B denote the background intensity and fringe modulation, f the frequency of the sinusoidal fringe, x the lateral coordinate index of the image, 2nπ/3 the phase-shift amount, and n the channel index.
3. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the measurement system comprises a DLP projection module, a color industrial camera, and a computer, wherein the optical axis of the DLP projection module makes a 30-degree angle with the measured object and projects the structured-light image I_C onto the surface of the measured object, and the optical axis of the color industrial camera is perpendicular to the measured object for image acquisition.
4. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the deformed color fringe image I_color acquired by the color camera and modulated by the height of the measured object is expressible as:

I_n(x, y) = A(x, y) + B(x, y)·cos(2πf·x + Φ(x, y) + 2nπ/3),  n = 1, 2, 3

where (I_1, I_2, I_3) = (I_R, I_G, I_B) are the RGB three-channel images of I_color, and Φ represents the true phase distribution of the measured object, the unknown to be solved in the three-dimensional measurement process.
5. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the three deep neural sub-network modules responsible for processing the three deformed grayscale fringe structured-light images I_R, I_G, and I_B are all U-shaped networks, each consisting of an encoder and a decoder; the encoder has 5 layers from top to bottom, the layers being connected by feature extraction and downsampling to reduce the image size layer by layer, each layer containing 3 sequentially arranged convolution layers connected by residual blocks, with the output of the last convolution layer of one layer serving as the input of the first convolution layer of the next; the decoder is symmetric to the encoder, also with 5 layers, the layers being connected by feature extraction and upsampling to restore the original image size layer by layer, finally producing the predicted output, with the output of the last convolution layer of each lower layer serving as the input of the first convolution layer of the layer above; and the lowest layer of the encoder is connected to the lowest layer of the decoder through an attention mechanism module.
6. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that solving the predicted phase Φ′ from the three grayscale images by the phase-shift method comprises:
substituting the grayscale images I′_R, I′_G, and I′_B into the phase-shift formula to solve the predicted phase Φ′:

Φ′ = arctan[ √3·(I′_G − I′_R) / (2I′_B − I′_R − I′_G) ]
7. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the computer back-projection simulation with the predicted phase Φ′ yields the deformed color fringe image I_rep_color corresponding to the phase result, by the formula:

I_rep,n(x, y) = A + B·cos(Φ′(x, y) + 2nπ/3),  n = 1, 2, 3

where (I_rep,1, I_rep,2, I_rep,3) = (I_rep_R, I_rep_G, I_rep_B) are the RGB three-channel images of I_rep_color.
8. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the loss function of the deep neural networks is expressed as:

Loss = Σ_{c ∈ {R,G,B}} λ_c · (1/(H·W)) · Σ_{x=1}^{W} Σ_{y=1}^{H} [ I_rep,c(x, y) − I_c(x, y) ]²

where x and y are the horizontal and vertical coordinate indices of the image, H and W are the image height and width, and λ_R, λ_G, and λ_B are the weights of the three RGB channel loss values.
9. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the three deep neural networks with the minimum loss value are saved by network training as a final correction network; the images I_R, I_G, and I_B are input to the correction network, the three crosstalk-corrected ideal fringe images corresponding to the RGB channels output by the correction network are used to solve the phase by the phase-shift method, and the resulting ideal phase Φ is finally mapped nonlinearly to obtain the true three-dimensional shape of the measured object.
10. The color crosstalk correction method for color structured light based on unsupervised deep learning according to claim 1, characterized in that the nonlinear calibration model adopted for the nonlinear mapping is expressed as:

1/h(x, y) = a(x, y) + b(x, y)/Φ(x, y) + c(x, y)/Φ²(x, y)

where h denotes the three-dimensional depth information, and a, b, and c are calibration parameters determined in the measurement-system calibration step before measurement.
CN202211247398.XA · Priority: 2022-10-12 · Filed: 2022-10-12 · Color crosstalk correction method for color structured light based on unsupervised deep learning · Pending · CN115615358A (en)

Priority Applications (1)

Application Number: CN202211247398.XA
Priority Date: 2022-10-12
Filing Date: 2022-10-12
Title: Color crosstalk correction method for color structured light based on unsupervised deep learning

Applications Claiming Priority (1)

Application Number: CN202211247398.XA
Priority Date: 2022-10-12
Filing Date: 2022-10-12
Title: Color crosstalk correction method for color structured light based on unsupervised deep learning

Publications (1)

Publication Number: CN115615358A
Publication Date: 2023-01-17

Family

ID=84862636

Family Applications (1)

Application Number: CN202211247398.XA
Title: Color crosstalk correction method for color structured light based on unsupervised deep learning
Priority Date: 2022-10-12
Filing Date: 2022-10-12
Status: Pending

Country Status (1)

Country Link
CN (1) CN115615358A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116105632A (en) * 2023-04-12 2023-05-12 四川大学 Self-supervision phase unwrapping method and device for structured light three-dimensional imaging
CN118293825A (en) * 2024-04-03 2024-07-05 北京微云智联科技有限公司 Phase compensation method and device for sinusoidal grating projection system


Similar Documents

Publication Publication Date Title
CN115615358A (en) Color crosstalk correction method for color structured light based on unsupervised deep learning
CN110473217B (en) Binocular stereo matching method based on Census transformation
WO2021184707A1 (en) Three-dimensional surface profile measurement method for single-frame color fringe projection based on deep learning
CN101576379B (en) Fast calibration method of active projection three dimensional measuring system based on two-dimension multi-color target
CN114777677B (en) Single-frame double-frequency multiplexing stripe projection three-dimensional surface type measurement method based on deep learning
CN103697815B (en) Mixing structural light three-dimensional information getting method based on phase code
CN113763269B (en) Stereo matching method for binocular images
CN114549307B (en) High-precision point cloud color reconstruction method based on low-resolution image
CN112697071B (en) Three-dimensional measurement method for color structured light projection based on DenseNet shadow compensation
CN101871773B (en) Synchronous hue shift conversion method and three-dimensional appearance measurement system thereof
CN105046743A (en) Super-high-resolution three dimensional reconstruction method based on global variation technology
CN105180904A (en) High-speed moving target position and posture measurement method based on coding structured light
CN111879258A (en) Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet
CN102519395B (en) Color response calibration method in colored structure light three-dimensional measurement
CN102184542A (en) Stereo matching method for stereo binocular vision measurement
CN109945802A (en) A kind of structural light three-dimensional measurement method
CN100449258C (en) Real time three-dimensional vision system based on two-dimension colorama encoding
CN114941999A (en) Binary coding stripe design method for structured light projection
CN110033483A (en) Based on DCNN depth drawing generating method and system
CN113587852A (en) Color fringe projection three-dimensional measurement method based on improved three-step phase shift
CN115482268A (en) High-precision three-dimensional shape measurement method and system based on speckle matching network
CN115564692A (en) Panchromatic-multispectral-hyperspectral integrated fusion method considering width difference
Song et al. Super-resolution phase retrieval network for single-pattern structured light 3D imaging
US10893252B2 (en) Image processing apparatus and 2D image generation program
CN112611341B (en) Color response model-based rapid three-dimensional measurement method for color object

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination