CN111523618A - Phase unwrapping method based on deep learning

Phase unwrapping method based on deep learning

Info

Publication number: CN111523618A
Application number: CN202010557204.0A
Authority: CN (China)
Prior art keywords: phase, deep learning, layer, stripe, network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 左超, 张晓磊, 沈德同
Current and original assignee: Nanjing University Of Technology Intelligent Computing Imaging Research Institute Co ltd
Priority date / filing date: 2020-06-18
Publication date: 2020-08-11

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a phase unwrapping method based on deep learning. A fully-connected neural network is built and trained, and the trained model predicts the fringe order pixel by pixel. The method extracts absolute phase information from two wrapped phase maps; it not only overcomes the low efficiency of conventional phase unwrapping but also markedly reduces model training time, making it suitable for miniaturized, high-speed three-dimensional imaging systems.

Description

Phase unwrapping method based on deep learning
Technical Field
The invention belongs to the technical field of three-dimensional imaging based on fringe projection, and particularly relates to a phase unwrapping method based on deep learning.
Background
Three-dimensional imaging technology is widely applicable in fields such as defect detection, reverse modeling, cultural relic preservation, and human-computer interaction. Among the many three-dimensional imaging techniques, fringe projection stands out for its high measurement speed, high accuracy, and ease of implementation, and has become a focus of current research in the field. A typical fringe projection system comprises a projection device and at least one camera: a series of pre-generated fringe patterns is projected onto the measured object by a digital projection device, the camera captures the fringe patterns modulated by the object, the phase information of the object is recovered by a corresponding decoding algorithm, and the three-dimensional topography of the object is finally reconstructed.
Since the fringes projected onto the object are sinusoidal, the phase information must be extracted with either the Fourier method or the phase-shifting method; the corresponding three-dimensional imaging techniques are called Fourier transform profilometry and phase-shifting profilometry. The Fourier method recovers the wrapped phase from a single fringe image, so its image utilization is high. The phase-shifting method requires at least three fringe images to obtain the wrapped phase, but it has stronger interference resistance and achieves higher measurement accuracy.
In either case, because of the truncation effect of the arctangent function, the phase obtained from the fringe images is wrapped: it is discontinuous and distributed in a sawtooth shape, and the absolute phase must be recovered by a phase unwrapping algorithm before it can be used for three-dimensional reconstruction.
Typical phase unwrapping methods include spatial phase unwrapping, stereo phase unwrapping, and multi-frequency temporal phase unwrapping. Spatial phase unwrapping unwraps the current pixel by analyzing its neighbors, so it cannot handle scenes containing multiple isolated objects or phase jumps. Stereo phase unwrapping, by contrast, can unwrap the phase of discontinuous objects without additional coding patterns, but when the fringe density is high it struggles to find matching points, leading to unwrapping errors; it is also computationally expensive and requires parallel computation to reach acceptable speed. Multi-frequency temporal phase unwrapping computes the wrapped phase of the same pixel at several frequencies, so it is not limited by fringe density and handles phase jumps; it has therefore been widely used in fringe projection systems since the 1990s. However, it requires many fringe images per reconstruction, and the assumption that a moving object is quasi-stationary from the camera's viewpoint during a single measurement is hard to guarantee, so the method is restricted to static or slow-moving scenes.
To make multi-frequency temporal phase unwrapping applicable to fast measurement scenarios, researchers have worked in three directions: developing hardware with higher frame rates to reduce the inter-frame interval; compensating for motion errors; and improving measurement efficiency. However, higher-frame-rate devices usually mean higher cost and larger hardware volume, and motion-error compensation is unstable, so neither approach suits miniaturized three-dimensional imaging systems or mobile devices. The focus here is therefore on improving measurement efficiency.
In recent years, many deep learning methods have been applied to fringe projection to break through the efficiency limits of conventional measurement. Feng et al. successfully extracted wrapped phases from a single image using a convolutional neural network [1]. Yin et al. proposed a temporal phase unwrapping method based on deep learning that achieves better unwrapping than conventional methods using wrapped phases at only two frequencies, and that can be used at higher fringe frequencies [2]. Wang et al. proposed a one-step deep learning phase unwrapping method with excellent noise and aliasing resistance [3]. Zhang et al. proposed a deep convolutional neural network for fast and robust two-dimensional phase unwrapping [4]. All of this work, however, relies on large-scale convolutional neural networks that need large amounts of data for training. Obtaining a model for prediction therefore requires not only collecting a large training set but also long training times (tens of hours or even days).
[1] S. Feng et al., "Fringe pattern analysis using deep learning," Adv. Photonics, vol. 1, no. 2, p. 025001, 2019.
[2] W. Yin et al., "Temporal phase unwrapping using deep learning," arXiv preprint arXiv:1903.09836, 2019.
[3] K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, "One-step robust deep learning phase unwrapping," Opt. Express, vol. 27, no. 10, pp. 15100-15115, 2019.
[4] T. Zhang et al., "Rapid and robust two-dimensional phase unwrapping via deep learning," Opt. Express, vol. 27, no. 16, pp. 23173-23185, 2019.
Disclosure of Invention
The invention aims to provide a phase unwrapping method based on deep learning that reduces the number of images required per unwrapping, improves three-dimensional measurement efficiency, and markedly reduces both the data needed for model training and the time needed for measurement.
The technical solution realizing the purpose of the invention is a phase unwrapping method based on deep learning, comprising the following steps:
First, simulation data are generated as the training set for the network; fringe-encoded images are captured by a camera and the wrapped phase is solved by phase-shifting profilometry, serving as the validation set; the label data are the fringe orders obtained by six-frequency temporal phase unwrapping.
Second, a fully-connected neural network is built.
Third, the corresponding pixel values of the two wrapped phase maps are input pair by pair to obtain the fringe order over the whole image, and the absolute phase is then obtained from equation (1):

Φ(x, y) = ϕ(x, y) + 2π·k(x, y) ……(1),

where Φ(x, y) is the absolute phase, ϕ(x, y) is the wrapped phase, and k(x, y) is the fringe order.
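For illustration only (not part of the claimed method), equation (1) is a simple per-pixel operation; a minimal sketch in Python, assuming the wrapped phase and the predicted fringe order are available as NumPy arrays (the function and variable names are ours):

```python
import numpy as np

def absolute_phase(wrapped_phase: np.ndarray, fringe_order: np.ndarray) -> np.ndarray:
    """Equation (1): Phi(x, y) = phi(x, y) + 2*pi*k(x, y)."""
    return wrapped_phase + 2.0 * np.pi * fringe_order
```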
Preferably: in the first step, the simulation data are obtained by applying the three-step phase-shifting method directly to ideal fringe images and adding random noise. Specifically, the three-step phase-shifting method is first applied to fringe images at the two frequencies 1 and 48 to obtain the wrapped phase of the original fringes, and several kinds of noise, including Gaussian noise and random noise, are added to the wrapped phase, generating the training data set.
Preferably: in the first step, the object illuminated by the projected fringes is captured by the camera in the form of equation (2), and the validation set is obtained with the three-step phase-shifting method of equation (3):

I_n(x, y) = A(x, y) + B(x, y)·cos(ϕ(x, y) - 2πn/3), n = 1, 2, 3 ……(2),

ϕ(x, y) = arctan[√3·(I_1(x, y) - I_3(x, y)) / (2·I_2(x, y) - I_1(x, y) - I_3(x, y))] ……(3),

where I_n is the n-th captured fringe image, A is the background intensity, and B is the fringe modulation.
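An illustrative sketch of equation (3); the implementation detail of using arctan2 instead of a plain arctangent is our assumption, and it keeps the quadrant information so the wrapped phase covers the full (-π, π] range:

```python
import numpy as np

def three_step_wrapped_phase(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray) -> np.ndarray:
    """Three-step phase shifting, equation (3):
    phi = arctan(sqrt(3) * (I1 - I3) / (2*I2 - I1 - I3))."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```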
Preferably: in the first step, the label data are obtained by six-frequency temporal phase unwrapping, applied frequency pair by frequency pair in the form of equation (4):

k(x, y) = Round{[(f_h/f_l)·Φ_l(x, y) - ϕ_h(x, y)] / 2π} ……(4),

where Φ_l is the unwrapped phase at the lower frequency f_l, ϕ_h is the wrapped phase at the higher frequency f_h, and Round{·} denotes rounding to the nearest integer.
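A sketch of how equation (4) chains through the frequency ladder to produce the label fringe orders; the helper names and the list-based interface are our illustrative choices:

```python
import numpy as np

def fringe_order(phi_high: np.ndarray, Phi_low: np.ndarray, ratio: float) -> np.ndarray:
    """Equation (4): k = Round(((f_h / f_l) * Phi_l - phi_h) / (2*pi))."""
    return np.round((ratio * Phi_low - phi_high) / (2.0 * np.pi))

def unwrap_six_frequencies(wrapped: list, freqs: list) -> np.ndarray:
    """Chain equation (4) pair by pair, starting from the unit-frequency
    wrapped phase, which is already absolute over the field of view."""
    Phi = wrapped[0]
    for f_lo, f_hi, phi_hi in zip(freqs[:-1], freqs[1:], wrapped[1:]):
        Phi = phi_hi + 2.0 * np.pi * fringe_order(phi_hi, Phi, f_hi / f_lo)
    return Phi
```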
preferably: in the second step, each layer of the fully-connected neural network is used only
Figure 399723DEST_PATH_IMAGE006
The number of the nodes is one,Nthe number of the nodes is an integer between 6 and 9, the number of the nodes is gradually increased layer by layer, the number of the nodes of the network is gradually decreased layer by layer at the tail end of the network, and the number of the nodes at the output end is set as the number of the stripe levels.
Preferably: in the second step, regularization is added to the network layers with 2^7 or more nodes.
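A sketch of such a network in PyTorch under stated assumptions: the patent fixes only that the layer widths are powers of two between 2^6 and 2^9, rising then falling, with regularization on layers of 2^7 or more nodes; the depth, the ReLU activations, and the use of dropout as the regularizer are our choices.

```python
import torch.nn as nn

def build_network(num_orders: int = 48) -> nn.Sequential:
    """Lightweight fully-connected classifier: two wrapped-phase values
    per pixel in, one fringe-order class out."""
    widths = [64, 128, 256, 512, 256, 128]  # 2^6 up to 2^9, then back down
    layers, in_features = [], 2             # input: (phi_low, phi_high)
    for w in widths:
        layers += [nn.Linear(in_features, w), nn.ReLU()]
        if w >= 128:                        # 2^7 nodes and above
            layers.append(nn.Dropout(0.1))
        in_features = w
    layers.append(nn.Linear(in_features, num_orders))
    return nn.Sequential(*layers)
```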
Preferably: in the second step, the output value at the output is converted into a K-dimensional vector, whose length equals the fringe frequency, using equations (5) and (6):

v_i(x, y) = 1 if i = k(x, y), v_i(x, y) = 0 otherwise, i = 1, …, K ……(5),

k(x, y) = argmax_i v_i(x, y) ……(6),

i.e., the k(x, y)-th value in the K-dimensional vector is defined as 1 and the remaining values as 0; the two representations can be converted into each other.
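Equations (5) and (6) amount to one-hot encoding and decoding of the fringe order; a minimal sketch, assuming zero-based orders:

```python
import numpy as np

def order_to_vector(k: np.ndarray, num_orders: int) -> np.ndarray:
    """Equation (5): K-dimensional vector with a 1 at the k-th position."""
    return np.eye(num_orders)[np.asarray(k, dtype=int)]

def vector_to_order(v: np.ndarray) -> np.ndarray:
    """Equation (6): recover the fringe order as the index of the maximum."""
    return np.argmax(v, axis=-1)
```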
Compared with the prior art, the invention has the following notable advantages: (1) compared with conventional multi-frequency temporal phase unwrapping, the deep learning method markedly improves the efficiency and stability of unwrapping with only two frequencies; (2) compared with other deep learning methods, the lightweight fully-connected network structure significantly reduces the data and time required for training as well as the time needed for a single prediction; (3) using simulation data as the training set reduces the number of real images that must be collected.
Drawings
Fig. 1 is a flowchart of a phase unwrapping method based on deep learning according to the present invention.
FIG. 2 is a flowchart of training-set data generation in the present invention.
FIG. 3 shows experimental results of prediction and three-dimensional reconstruction for several different objects, with the label data reproduced for comparison.
FIG. 4 shows the results of phase unwrapping a ceramic wafer, with the error rate quantified.
FIG. 5 shows experimental results at different signal-to-noise ratios, comparing the present method with multi-frequency temporal phase unwrapping as the signal-to-noise ratio gradually decreases.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The invention relates to a phase unwrapping method based on deep learning, which comprises the following specific implementation steps:
First, simulation data are generated as the training set for the network; fringe-encoded images are captured by a camera and the wrapped phase is solved by phase-shifting profilometry, serving as the validation set; the label data are the fringe orders obtained by six-frequency temporal phase unwrapping.
Second, a fully-connected neural network is built.
Third, the corresponding pixel values of the two wrapped phase maps are input pair by pair to obtain the fringe order over the whole image, and the absolute phase is then obtained from equation (1):

Φ(x, y) = ϕ(x, y) + 2π·k(x, y) ……(1),

where Φ(x, y) is the absolute phase, ϕ(x, y) is the wrapped phase, and k(x, y) is the fringe order.
With reference to fig. 1, the phase unwrapping method based on deep learning first generates simulation data as the training set, captures fringe-encoded images with a camera, and solves the wrapped phase by phase-shifting profilometry, the result serving as the validation set. The data are fed into the fully-connected deep neural network for training. Each layer of the network uses only 2^N nodes (N an integer between 6 and 9); the number of nodes increases layer by layer, then decreases layer by layer toward the end of the network, and the number of output nodes is set to the total number of fringe orders. To prevent overfitting, regularization is added to the layers with many nodes. In addition, to make the classification independent of a specific output value, the output is converted into a K-dimensional vector, whose length equals the fringe frequency, using equations (5) and (6):

v_i(x, y) = 1 if i = k(x, y), v_i(x, y) = 0 otherwise, i = 1, …, K ……(5),

k(x, y) = argmax_i v_i(x, y) ……(6),

i.e., the k(x, y)-th value in the K-dimensional vector is 1 and the remaining values are 0; the two representations can be converted into each other.
The trained model predicts the fringe order pixel by pixel: the wrapped phase maps at the two frequencies are input pixel by pixel, the fringe order of each pixel is obtained, and the absolute phase follows from equation (1).
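An illustrative sketch of this prediction step, assuming the PyTorch classifier sketched earlier; pixels are processed in one batch rather than literally one at a time, which is equivalent but faster:

```python
import numpy as np
import torch

def predict_absolute_phase(model: torch.nn.Module, phi_low: np.ndarray,
                           phi_high: np.ndarray) -> np.ndarray:
    """Classify every pixel's fringe order from the two wrapped-phase maps,
    then apply equation (1) to the high-frequency wrapped phase."""
    h, w = phi_high.shape
    pixels = np.stack([phi_low.ravel(), phi_high.ravel()], axis=1)
    with torch.no_grad():
        k = model(torch.from_numpy(pixels).float()).argmax(dim=1)
    return phi_high + 2.0 * np.pi * k.reshape(h, w).numpy()
```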
Fig. 2 is a flowchart of training-set generation. If both the training and validation data sets were computed from camera-captured images, noise interference would be unavoidable; the invention therefore replaces the training data set with a simulation data set. Applying phase-shifting profilometry directly to the ideal fringes at the two frequencies yields a training data set free of noise interference. In addition, to enlarge the training set and improve the network's noise suppression, a suitable amount of noise is introduced into the wrapped phase maps.
In this process, three high-frequency (frequency 48) and three low-frequency (frequency 1) fringe patterns are first processed by phase-shifting profilometry using equation (3) to obtain the wrapped phases; during measurement, these are the fringe patterns projected onto the measured object. Then several kinds of noise, including Gaussian noise and random noise, are added to the wrapped phases to generate 10 data sets with different noise. These data are finally used for training.
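A sketch of this data-generation flow; the fringe width, noise amplitudes, and one-dimensional layout are our illustrative assumptions (the patent specifies only the frequencies 1 and 48, the noise mixture, and the 10 noise levels), and for brevity the wrapped phases are computed by direct wrapping rather than by simulating the three phase-shifted images:

```python
import numpy as np

def make_training_data(width: int = 768, f_high: int = 48, n_sets: int = 10,
                       sigma_max: float = 0.05, seed: int = 0) -> list:
    """Ideal wrapped phases at frequencies 1 and 48 plus noisy copies.
    Returns (phi_low, phi_high, fringe_order) triples, one per noise level."""
    rng = np.random.default_rng(seed)
    x = np.arange(width)
    abs_low = 2.0 * np.pi * 1.0 * x / width       # absolute phase, frequency 1
    abs_high = 2.0 * np.pi * f_high * x / width   # absolute phase, frequency 48
    phi_low = np.angle(np.exp(1j * abs_low))      # noise-free wrapped phases
    phi_high = np.angle(np.exp(1j * abs_high))
    k = np.round((abs_high - phi_high) / (2.0 * np.pi))  # label fringe orders
    data = []
    for i in range(n_sets):
        sigma = sigma_max * (i + 1) / n_sets
        data.append((phi_low + rng.normal(0.0, sigma, width),
                     phi_high + rng.normal(0.0, sigma, width), k))
    return data
```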
The object illuminated by the projected fringes is captured by the camera in the form of equation (2), the wrapped phase follows from the three-step phase-shifting relation of equation (3), and the label data are obtained by six-frequency temporal phase unwrapping in the form of equation (4):

I_n(x, y) = A(x, y) + B(x, y)·cos(ϕ(x, y) - 2πn/3), n = 1, 2, 3 ……(2),

ϕ(x, y) = arctan[√3·(I_1(x, y) - I_3(x, y)) / (2·I_2(x, y) - I_1(x, y) - I_3(x, y))] ……(3),

k(x, y) = Round{[(f_h/f_l)·Φ_l(x, y) - ϕ_h(x, y)] / 2π} ……(4),

where I_n is the n-th captured fringe image, A is the background intensity, B is the fringe modulation, Φ_l is the unwrapped phase at the lower frequency f_l, and ϕ_h is the wrapped phase at the higher frequency f_h.
Fig. 3 shows the measurement results of the trained model on different objects. Because a lightweight deep neural network is used, the number of parameters to optimize drops to tens of thousands (about 50,000 in our method), training takes about 45 minutes, and prediction is correspondingly fast. When phase unwrapping is performed with the conventional TPU method, the unwrapping error rate is extremely high because only two frequencies are available; with the method of the invention the error rate is greatly reduced, giving results almost identical to the labels.
Fig. 4 shows the phase unwrapping experiment on a ceramic wafer, with the error rate quantified, demonstrating the advantage of the invention more intuitively: the error rate of the proposed method is below 0.01%, whereas that of the TPU method reaches 1.16%.
In some miniaturized three-dimensional imaging systems or mobile devices, hardware is severely constrained: the projector cannot project high-intensity fringes, the projection intensity is low, and interference greatly reduces the signal-to-noise ratio of the captured images, raising the phase unwrapping error rate. This places high demands on the noise resistance of the unwrapping algorithm. To test the interference resistance of the method, fig. 5 shows experiments at different signal-to-noise ratios. As the signal-to-noise ratio decreases, the performance of TPU degrades sharply, with an error rate of 5.63%, whereas the method of the invention unwraps correctly in most locations, with an error rate of only 1.66%, errors occurring only where the light intensity is very low. At higher signal-to-noise ratios (40 and 35), the error rates of the proposed method are 0.12% and 0.23%, respectively, far below those of TPU. The method therefore has good noise resistance and can play an important role under constrained hardware conditions. These results also show that the method is general and not limited to a specific measurement target.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A phase unwrapping method based on deep learning is characterized by comprising the following steps:
first, generating simulation data as the training set for the network, capturing fringe-encoded images with a camera and solving the wrapped phase by phase-shifting profilometry to serve as the validation set, the label data being the fringe orders obtained by six-frequency temporal phase unwrapping; the simulation data are obtained by applying the three-step phase-shifting method directly to fringe images and adding random noise, wherein the three-step phase-shifting method is first applied to the fringe images at the two frequencies 1 and 48 to obtain the wrapped phase of the original fringes, and Gaussian noise and random noise are added to the wrapped phase to generate the training data set;
second, building a fully-connected neural network;
third, inputting the corresponding pixel values of the two wrapped phase maps pair by pair to obtain the fringe order over the whole image, and then obtaining the absolute phase from equation (1):

Φ(x, y) = ϕ(x, y) + 2π·k(x, y) ……(1),

where Φ(x, y) is the absolute phase, ϕ(x, y) is the wrapped phase, and k(x, y) is the fringe order.
2. The deep learning based phase unwrapping method according to claim 1, wherein:
in the first step, the object illuminated by the projected fringes is captured by the camera in the form of equation (2), and the validation set is obtained with the three-step phase-shifting method of equation (3):

I_n(x, y) = A(x, y) + B(x, y)·cos(ϕ(x, y) - 2πn/3), n = 1, 2, 3 ……(2),

ϕ(x, y) = arctan[√3·(I_1(x, y) - I_3(x, y)) / (2·I_2(x, y) - I_1(x, y) - I_3(x, y))] ……(3).
3. The deep learning based phase unwrapping method according to claim 1, wherein:
in the first step, the label data are obtained by six-frequency temporal phase unwrapping, applied frequency pair by frequency pair in the form of equation (4):

k(x, y) = Round{[(f_h/f_l)·Φ_l(x, y) - ϕ_h(x, y)] / 2π} ……(4).
4. The deep learning based phase unwrapping method according to claim 1, wherein:
in the second step, each layer of the fully-connected neural network uses only 2^N nodes, where N is an integer between 6 and 9; the number of nodes increases layer by layer, then decreases layer by layer toward the end of the network, and the number of nodes at the output is set to the number of fringe orders.
5. The deep learning based phase unwrapping method according to claim 4, wherein:
in the second step, regularization is added to the network layers with 2^7 or more nodes.
6. The deep learning based phase unwrapping method according to claim 4, wherein:
in the second step, the output value at the output is converted into a K-dimensional vector, whose length equals the fringe frequency, using equations (5) and (6):

v_i(x, y) = 1 if i = k(x, y), v_i(x, y) = 0 otherwise, i = 1, …, K ……(5),

k(x, y) = argmax_i v_i(x, y) ……(6),

the k(x, y)-th value in the K-dimensional vector being defined as 1 and the remaining values as 0; the two representations can be converted into each other.
CN202010557204.0A, filed 2020-06-18 (priority date 2020-06-18): Phase unwrapping method based on deep learning. Status: Pending. Published as CN111523618A.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010557204.0A | 2020-06-18 | 2020-06-18 | Phase unwrapping method based on deep learning

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010557204.0A | 2020-06-18 | 2020-06-18 | Phase unwrapping method based on deep learning

Publications (1)

Publication Number | Publication Date
CN111523618A | 2020-08-11

Family

Family ID: 71912795

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010557204.0A | Phase unwrapping method based on deep learning | 2020-06-18 | 2020-06-18

Country Status (1)

Country | Link
CN | CN111523618A


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991439A (en) * 2017-03-28 2017-07-28 南京天数信息科技有限公司 Image-recognizing method based on deep learning and transfer learning
CN109253708A (en) * 2018-09-29 2019-01-22 南京理工大学 A kind of fringe projection time phase method of deploying based on deep learning
CN109932708A (en) * 2019-03-25 2019-06-25 西北工业大学 A method of the underwater surface class object based on interference fringe and deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
冯世杰: "深度学习技术在条纹投影三维成像中的应用" [Deep learning technology in fringe projection three-dimensional imaging], 红外与激光工程 [Infrared and Laser Engineering] *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022043746A1 (en) * 2020-08-25 2022-03-03 Artec Europe S.A R.L. Systems and methods of 3d object reconstruction using a neural network
KR20220061590A (en) * 2020-11-06 2022-05-13 한국생산기술연구원 Moire interferometer measurement system and moire interferometer measurement method using artificial intelligence
KR102512873B1 (en) 2020-11-06 2023-03-23 한국생산기술연구원 Moire interferometer measurement system and moire interferometer measurement method using artificial intelligence
CN113238227A (en) * 2021-05-10 2021-08-10 电子科技大学 Improved least square phase unwrapping method and system combined with deep learning
CN113884027A (en) * 2021-12-02 2022-01-04 南京理工大学 Geometric constraint phase unwrapping method based on self-supervision deep learning
CN113884027B (en) * 2021-12-02 2022-03-18 南京理工大学 Geometric constraint phase unwrapping method based on self-supervision deep learning
CN114152217A (en) * 2022-02-10 2022-03-08 南京南暄励和信息技术研发有限公司 Binocular phase expansion method based on supervised learning
CN114152217B (en) * 2022-02-10 2022-04-12 南京南暄励和信息技术研发有限公司 Binocular phase expansion method based on supervised learning

Similar Documents

Publication Publication Date Title
CN111523618A (en) Phase unwrapping method based on deep learning
CN109253708B (en) Stripe projection time phase unwrapping method based on deep learning
Wu et al. Two-frequency phase-shifting method vs. Gray-coded-based method in dynamic fringe projection profilometry: A comparative review
WO2019153326A1 (en) Intra-frame prediction-based point cloud attribute compression method
Wang et al. Dynamic three-dimensional shape measurement with a complementary phase-coding method
TWI739151B (en) Method, device and electronic equipment for image generation network training and image processing
CN106791273B (en) A kind of video blind restoration method of combination inter-frame information
CN113379818B (en) Phase analysis method based on multi-scale attention mechanism network
CN114777677B (en) Single-frame double-frequency multiplexing stripe projection three-dimensional surface type measurement method based on deep learning
WO2011020647A1 (en) Method and apparatus for estimation of interframe motion fields
JP2014505389A (en) Method for processing an image in the invisible spectral region, corresponding camera and measuring device
Yang et al. Global auto-regressive depth recovery via iterative non-local filtering
CN112529794A (en) High dynamic range structured light three-dimensional measurement method, system and medium
Yuan et al. Temporal upsampling of depth maps using a hybrid camera
Kong et al. Fdflownet: Fast optical flow estimation using a deep lightweight network
CN106910246B (en) Space-time combined speckle three-dimensional imaging method and device
CN112802186A (en) Dynamic scene real-time three-dimensional reconstruction method based on binarization characteristic coding matching
CN113008163A (en) Encoding and decoding method based on frequency shift stripes in structured light three-dimensional reconstruction system
Wang et al. Dynamic three-dimensional surface reconstruction approach for continuously deformed objects
CN116596794A (en) Combined motion blur removal and video frame inserting method based on event camera
CN117132704A (en) Three-dimensional reconstruction method of dynamic structured light, system and computing equipment thereof
CN115760590A (en) Video image stabilizing method and system
Wu et al. Two-neighbor-wavelength phase-shifting approach for high-accuracy rapid 3D measurement
CN117474956B (en) Light field reconstruction model training method based on motion estimation attention and related equipment
Li et al. Improving resolution of 3D surface with convolutional neural networks

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 20200811)