CN113884027B - Geometric constraint phase unwrapping method based on self-supervision deep learning - Google Patents

Geometric constraint phase unwrapping method based on self-supervision deep learning

Info

Publication number
CN113884027B
Authority
CN
China
Prior art keywords
phase
image
dimensional
camera
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111458588.1A
Other languages
Chinese (zh)
Other versions
CN113884027A (en)
Inventor
韩静
韩博文
于浩天
郑东亮
蒋琦
冮顺奎
张明星
施继玲
王晓颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202111458588.1A priority Critical patent/CN113884027B/en
Publication of CN113884027A publication Critical patent/CN113884027A/en
Application granted granted Critical
Publication of CN113884027B publication Critical patent/CN113884027B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2518 - Projection by scanning of the object
    • G01B 11/2527 - Projection by scanning of the object with phase change by in-plane movement of the pattern
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2504 - Calibration devices
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a geometric constraint phase unwrapping method based on self-supervised deep learning, and belongs to the technical field of image processing. The method comprises the following steps: S1: acquiring original fringe images of an object to be measured through a three-dimensional measuring system, calculating a wrapped phase image and a background light intensity image from them, and obtaining calibration parameters of the projector and the camera in the system through calibration; S2: converting the wrapped phase map and the background light intensity map from S1 into the fringe order image required for phase unwrapping through a convolutional neural network; S3: converting the fringe order image from S2 into accurate three-dimensional information through phase-to-depth mapping and the corresponding system calibration parameters. The method can solve the problems of weak generalization capability and strong data dependence that arise when phase unwrapping is performed based on supervised learning.

Description

Geometric constraint phase unwrapping method based on self-supervision deep learning
Technical Field
The invention relates to a geometric constraint phase unwrapping method based on self-supervised deep learning, and belongs to the technical field of image analysis.
Background
In fringe projection profilometry, fringe images shot by a camera are subjected to phase recovery, and wrapped phases with periodically changing phase values can be obtained. In order to achieve unambiguous one-to-one correspondence between the projector coordinates and the camera coordinates, the camera needs to take additional fringe images to achieve phase unwrapping, i.e., unwrapping the wrapped phase into a continuous absolute phase. At present, in the phase unwrapping step, researchers pay attention to how to calculate the correct fringe order without taking an additional fringe image on the premise of ensuring the phase unwrapping accuracy.
In order to achieve high-precision and high-robustness phase unwrapping, a typical solution is to add an additional hardware device, that is, an additional camera is added in a conventional three-dimensional measurement system with a single camera and a single projector, and such a method may be referred to as a geometric constraint phase unwrapping method.
Recently, deep learning has been introduced into fringe projection profilometry, and in the phase unwrapping step many technical bottlenecks have been broken through by deep-learning-based methods. However, all deep-learning-based methods currently used in fringe projection profilometry are supervised, and such methods generally comprise a training step and a testing step. During training, a large amount of labeled data needs to be captured in advance, which is very time-consuming, and in some special scenes, such as animal hearts or moving aircraft wings, it is not practical to acquire a large amount of labeled data. Moreover, this labeled data needs to be independently and identically distributed; otherwise, the model obtained after training may suffer from serious generalization problems during testing. In other words, when the amount of training data is limited and the data distribution of the test set differs greatly from that of the training set, the trained model cannot obtain ideal results. This greatly limits the practical application of supervised-learning-based fringe projection profilometry, which is therefore frequently questioned for its low universality and strong data dependence.
Disclosure of Invention
The invention aims to overcome the problems in the prior art, provides a geometric constraint phase unwrapping method based on self-supervised deep learning, and can solve the problems of low generalization capability and strong data dependence in the phase unwrapping based on supervised learning.
In order to solve the above problems, the geometric constraint phase unwrapping method based on self-supervised deep learning of the present invention comprises the following steps:
S1: acquiring original fringe images of an object to be measured through a three-dimensional measuring system, calculating a wrapped phase image and a background light intensity image, and obtaining calibration parameters of the projector and the camera in the system through calibration;
S2: converting the wrapped phase map and the background light intensity map from S1 into the fringe order image required for phase unwrapping through a convolutional neural network;
S3: converting the fringe order image from S2 into accurate three-dimensional information through phase-to-depth mapping and the corresponding system calibration parameters.
Further, the convolutional neural network in S2 includes an Encoder module and a Decoder module, where the Encoder module performs feature extraction on the input image information and the Decoder module processes the extracted features to recover the fringe order information.
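As an illustration only, a minimal PyTorch-style sketch of such an Encoder-Decoder regressor is given below; the layer counts, channel widths, and the packing of the wrapped phase and background intensity into a two-channel input are assumptions made for this example, not the disclosed GCPUNet architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoderSketch(nn.Module):
    """Illustrative encoder-decoder mapping (wrapped phase, background) to a fringe-order map."""

    def __init__(self, ch: int = 32):
        super().__init__()
        # Encoder: strided convolutions extract features from the 2-channel input.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, ch, 3, stride=2, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.BatchNorm2d(2 * ch), nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions recover a per-pixel fringe-order estimate.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, wrapped_phase, background):
        x = torch.cat([wrapped_phase, background], dim=1)  # (B, 2, H, W)
        return self.decoder(self.encoder(x))               # (B, 1, H, W) fringe order
```

In the invention, such a network is trained only with the self-supervised consistency losses described below, so no ground-truth fringe orders are required.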
Further, S2 specifically includes the following steps:
S2.1: constructing a convolutional neural network model for unwrapping the phase of the input wrapped phase image;
S2.2: enhancing the representation of the input wrapped phase image by adding non-bottleneck one-dimensional residual modules to the convolutional neural network model;
S2.3: predicting and regressing on the input wrapped phase image, and outputting the fringe order image.
Further, the phase mapping in S3 is computed as follows:

Φ(u, v) = φ(u, v) + 2π · k(u, v)

in the formula, Φ is the unwrapped phase image, φ is the wrapped phase image, k is the fringe order image, and f is the fringe frequency; the fringe order k(u, v) takes integer values in [0, f − 1].
Further, the calibration parameters in S3 are used as follows:

s_c · [u_c, v_c, 1]^T = M_c · [X_w, Y_w, Z_w, 1]^T
s_p · [u_p, v_p, 1]^T = M_p · [X_w, Y_w, Z_w, 1]^T

where (u_c, v_c) are the camera pixel coordinates, (u_p, v_p) are the coordinates of the corresponding point on the projector, M_c and M_p are the 3x4 calibration parameter matrices of the camera and the projector, s_c and s_p are scale factors, and X_w, Y_w and Z_w are the coordinates of the object in the world coordinate system.
Further, the loss functions required in the iterative optimization process of constructing the convolutional neural network in S2 are obtained based on three-dimensional consistency, structural consistency, and phase consistency, respectively.
Further, the coordinates obtained by the transformation of the camera imaging model are set such that, for a point (u_c, v_c), the corresponding coordinate on camera 2 is (u_1, v_1); under the same world coordinate system, for the coordinates obtained by the three-dimensional measurement method based on phase matching, the coordinate on camera 2 corresponding to (u_c, v_c) is (u_2, v_2). The function for calculating three-dimensional consistency is as follows:

Loss_3D = (1/N) · Σ_{i=1..N} ( |u_1,i − u_2,i| + |v_1,i − v_2,i| )

in the formula, N indicates the number of pixels and the subscript i indicates the i-th pixel.
Further, the loss function for calculating structural consistency is as follows:

Loss_SSIM = 1 − SSIM(A, Â)

in the formula, A is the original input background light intensity image and Â is the image reconstructed based on the resolved three-dimensional coordinates.
Further, the loss function for calculating phase consistency is as follows:

Loss_phase = (1/N) · Σ_{i=1..N} |φ_i − φ̂_i|

in the formula, φ is the original input wrapped phase image, φ̂ is the wrapped phase image reconstructed based on the resolved three-dimensional coordinates, N indicates the number of pixels, and the subscript i indicates the i-th pixel.
Further, the convolutional neural network model described in S2.1 includes a plurality of convolutional layers, Batch-Norm layers, ReLU layers, and drop-out layers, and the sizes of the convolutional kernels include 3x3, 3x1, and 1x3.
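The 3x1 and 1x3 convolutions can be combined with the residual connection mentioned in S2.2 into a non-bottleneck one-dimensional residual block. The following PyTorch-style sketch is only illustrative; the dropout rate, activation placement, and channel handling are assumptions, not values given in the patent.

```python
import torch
import torch.nn as nn

class NonBottleneck1D(nn.Module):
    """Residual block that factorizes 3x3 convolutions into 3x1 and 1x3 convolutions."""

    def __init__(self, channels: int, dropout: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels), nn.Dropout2d(dropout),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection between input and output eases training of deep networks.
        return self.act(x + self.body(x))
```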
The invention has the beneficial effects that: 1) the correct unwrapped phase image can be obtained with only the wrapped phase image and the background light intensity image as input, no additional structured light patterns need to be projected, and phase unwrapping can be carried out at high speed and with high precision;
2) the problems of weak generalization ability and strong data dependence in supervised learning can be overcome.
Drawings
FIG. 1 is a flow chart of the geometric constraint phase unwrapping method based on self-supervised deep learning according to the present invention;
FIG. 2 is a diagram of the basic structure of the convolutional neural network GCPUNet of the present invention;
FIG. 3 is a schematic structural diagram of a three-dimensional testing system according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.
As shown in FIG. 1, the geometric constraint phase unwrapping method based on self-supervised deep learning of the present invention comprises the following steps:
S1: collecting original fringe images of the object to be measured through the three-dimensional measuring system, calculating the wrapped phase image and the background light intensity image, and obtaining the calibration parameters of the projector and the camera in the system through calibration.
The wrapped phase image and the background light intensity image are shown in FIG. 1, and the calibration parameters of the projector and the camera in the system are obtained through calibration.
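The patent does not specify how the wrapped phase and the background intensity are computed from the captured fringe images; assuming a standard N-step phase-shifting scheme purely for illustration, they could be obtained as in the following sketch.

```python
import numpy as np

def wrapped_phase_and_background(fringes):
    """Compute wrapped phase and background intensity from N phase-shifted fringe images.

    fringes: array of shape (N, H, W), captured with equal phase shifts of 2*pi/N
    (an assumed acquisition scheme, not stated in the patent).
    Returns (phi, A): wrapped phase in (-pi, pi] and background (DC) intensity.
    """
    n = fringes.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n              # assumed phase shifts
    num = np.tensordot(np.sin(deltas), fringes, axes=(0, 0))
    den = np.tensordot(np.cos(deltas), fringes, axes=(0, 0))
    phi = -np.arctan2(num, den)                         # wrapped phase image
    A = fringes.mean(axis=0)                            # background light intensity image
    return phi, A
```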
S2: converting the wrapped phase map and the background light intensity map from S1 into the fringe order image required for phase unwrapping through a convolutional neural network. The loss functions required in the iterative optimization of the convolutional neural network in S2 are obtained based on three-dimensional consistency, structural consistency, and phase consistency, respectively. The convolutional neural network in S2 comprises an Encoder module and a Decoder module, wherein the Encoder module extracts features from the input image information and the Decoder module processes the extracted features to recover the fringe order information.
Three-dimensional consistency requires that the three-dimensional data reconstructed by one camera and the projector be consistent with the three-dimensional data reconstructed by the two cameras. The loss function based on structural consistency requires that the original input image be structurally similar to the image reconstructed based on the resolved three-dimensional coordinates. The loss function based on phase consistency requires that the original input wrapped phase be consistent with the wrapped phase values reconstructed based on the resolved three-dimensional coordinates.
As shown in fig. 3, using the wrapped phase and the fringe order output by the network, the projector coordinate corresponding to a camera-1 pixel (u_c, v_c) can be calculated, and the three-dimensional data of (u_c, v_c) is thereby obtained. This three-dimensional data is continuously updated during each iterative optimization; transforming it through the imaging model of camera 2 gives the coordinate on camera 2 corresponding to (u_c, v_c), denoted (u_1, v_1). Under the same world coordinate system, a three-dimensional measurement method based on phase matching, that is, searching along the corresponding epipolar lines of the two cameras for coordinates with consistent absolute phase, gives a second coordinate on camera 2 corresponding to (u_c, v_c), denoted (u_2, v_2), which is likewise updated during each iterative optimization. A second set of three-dimensional data is thus obtained; when the two sets of three-dimensional data are consistent, the coordinates (u_1, v_1) and (u_2, v_2) should remain consistent. The loss function based on three-dimensional consistency is:

Loss_3D = (1/N) · Σ_{i=1..N} ( |u_1,i − u_2,i| + |v_1,i − v_2,i| )

where N indicates the number of pixels and the subscript i indicates the i-th pixel.
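Assuming the per-pixel L1 form reconstructed above (the original formula image is not readable), the three-dimensional consistency term can be sketched as follows; the tensor layout and the optional validity mask are illustrative assumptions.

```python
import torch

def three_d_consistency_loss(coords_imaging, coords_phase_match, valid_mask=None):
    """L1 distance between camera-2 coordinates from the imaging-model transformation
    and those found by the phase-matching epipolar search.

    coords_imaging, coords_phase_match: tensors of shape (B, 2, H, W) holding (u, v).
    valid_mask: optional (B, 1, H, W) float mask of pixels where both estimates exist.
    """
    diff = torch.abs(coords_imaging - coords_phase_match)
    if valid_mask is not None:
        return (diff * valid_mask).sum() / (2.0 * valid_mask.sum().clamp(min=1.0))
    return diff.mean()
```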
The loss function for structural consistency is calculated as follows:

Loss_SSIM = 1 − SSIM(A, Â)

in the formula, A is the original input background light intensity image and Â is the image reconstructed based on the resolved three-dimensional coordinates.

The structural similarity SSIM (Structural Similarity Index) measures the similarity of two images in terms of luminance, contrast, and structure, as shown in the following formula:

SSIM(X, Y) = (2·μ_X·μ_Y + C_1)·(2·σ_XY + C_2) / ( (μ_X² + μ_Y² + C_1)·(σ_X² + σ_Y² + C_2) )

where μ_X and μ_Y respectively denote the means of images X and Y, σ_X² and σ_Y² respectively denote the variances of images X and Y, σ_XY denotes the covariance of images X and Y, and C_1 and C_2 are small constants that stabilize the division.
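A compact sketch of the structural-consistency term 1 − SSIM is shown below; the 11x11 averaging window and the constants C_1 and C_2 are commonly used defaults assumed here, not values stated in the patent.

```python
import torch
import torch.nn.functional as F

def ssim_loss(img_x, img_y, window: int = 11, c1: float = 0.01 ** 2, c2: float = 0.03 ** 2):
    """Structural-consistency loss 1 - SSIM(X, Y) for single-channel images in [0, 1].

    Means, variances, and covariance are taken over a local window via average pooling.
    img_x, img_y: tensors of shape (B, 1, H, W).
    """
    pad = window // 2
    mu_x = F.avg_pool2d(img_x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(img_y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(img_x * img_x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(img_y * img_y, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(img_x * img_y, window, stride=1, padding=pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return (1 - ssim).mean()
```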
The loss function for calculating phase consistency is as follows:

Loss_phase = (1/N) · Σ_{i=1..N} |φ_i − φ̂_i|

in the formula, φ is the original input wrapped phase image and φ̂ is the wrapped phase image reconstructed based on the resolved three-dimensional coordinates.
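A sketch of the phase-consistency term, together with one plausible way of combining the three consistency losses, is given below; the equal weighting of the terms is an assumption, since the patent does not state how they are balanced.

```python
import torch

def phase_consistency_loss(phi_input, phi_reconstructed):
    """Mean absolute error between the input wrapped phase and the wrapped phase
    re-rendered from the resolved three-dimensional coordinates."""
    return torch.abs(phi_input - phi_reconstructed).mean()

def total_self_supervised_loss(loss_3d, loss_ssim, loss_phase, weights=(1.0, 1.0, 1.0)):
    # Weighted sum of the three consistency terms; the weights are assumed, not disclosed.
    return weights[0] * loss_3d + weights[1] * loss_ssim + weights[2] * loss_phase
```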
S2.1: constructing a convolutional neural network model for unwrapping the phase of the input wrapped phase image; the convolutional neural network model in S2.1 comprises a plurality of convolutional layers, Batch-Norm layers, ReLU layers, and drop-out layers, wherein the sizes of the convolutional kernels comprise 3x3, 3x1, and 1x3.
S2.2: as shown in fig. 2, the representation of the input image is enhanced by adding non-bottleneck one-dimensional residual modules in the convolutional neural network model; the residual connection in the non-bottleneck residual module is established between its input and output, which effectively improves the learning capability of the network and alleviates the degradation problem of deep networks.
S2.3: predicting and regressing, and outputting the fringe order image.
S3: converting the fringe order image from S2 into accurate three-dimensional information through phase-to-depth mapping and the corresponding system calibration parameters.
The three-dimensional information of the object is shown in fig. 1 as 3D.
As shown in fig. 3, the phase mapping in S3 is computed as follows:

Φ(u, v) = φ(u, v) + 2π · k(u, v)

in the formula, Φ is the unwrapped phase image, φ is the wrapped phase image, k is the fringe order image, and f is the fringe frequency; the fringe order k(u, v) takes integer values in [0, f − 1].
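Under the formula reconstructed above, unwrapping reduces to a single per-pixel operation; rounding the network's fringe-order prediction to the nearest integer is an assumption made for this sketch.

```python
import numpy as np

def unwrap_phase(wrapped_phase, fringe_order):
    """Absolute phase from the wrapped phase and the fringe-order map predicted by the network."""
    return wrapped_phase + 2.0 * np.pi * np.round(fringe_order)
```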
The calibration parameters in S3 are used as follows:

s_c · [u_c, v_c, 1]^T = M_c · [X_w, Y_w, Z_w, 1]^T
s_p · [u_p, v_p, 1]^T = M_p · [X_w, Y_w, Z_w, 1]^T

where (u_c, v_c) are the camera pixel coordinates, (u_p, v_p) are the coordinates of the corresponding point on the projector, M_c and M_p are the 3x4 calibration parameter matrices of the camera and the projector, s_c and s_p are scale factors, and X_w, Y_w and Z_w are the coordinates of the object in the world coordinate system.
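With the two 3x4 projection matrices, a camera pixel and its matched projector coordinate give three linear equations in (X_w, Y_w, Z_w); the following least-squares sketch of this standard triangulation step assumes that only the projector's horizontal coordinate is used, which the patent does not state explicitly.

```python
import numpy as np

def triangulate(u_c, v_c, u_p, M_cam, M_proj):
    """Solve for (Xw, Yw, Zw) from a camera pixel and its matched projector coordinate.

    M_cam, M_proj: 3x4 calibration (projection) matrices of the camera and the projector.
    Each equation u = (m_row1 . X) / (m_row3 . X) is rearranged into one linear row.
    """
    rows = [
        u_c * M_cam[2] - M_cam[0],
        v_c * M_cam[2] - M_cam[1],
        u_p * M_proj[2] - M_proj[0],   # only the projector's horizontal coordinate is used
    ]
    A = np.stack(rows)                 # 3 x 4 system in homogeneous world coordinates
    b = -A[:, 3]
    X, *_ = np.linalg.lstsq(A[:, :3], b, rcond=None)
    return X                           # (Xw, Yw, Zw)
```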
For the fringe projection profilometry (FPP) technique, the invention introduces deep learning to design a phase unwrapping convolutional neural network (GCPUNet), which converts the wrapped phase image and the background light intensity image into the fringe order image used for calculating the unwrapped phase; combined with the calibration parameters, accurate three-dimensional information can be obtained.
The invention solves the problem that the FPP technique can hardly obtain a high-precision unwrapped phase of the object to be measured at high speed and high efficiency in special measurement scenes such as animal hearts and moving aircraft wings, and effectively improves the precision and speed of three-dimensional measurement. Meanwhile, the self-supervised learning scheme overcomes the limitation that deep learning models based on supervised learning cannot achieve strong generalization, and effectively improves the generalization performance of the neural network.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (4)

1. A geometric constraint phase unwrapping method based on self-supervised deep learning, characterized by comprising the following steps:
S1: acquiring original fringe images of an object to be measured through a three-dimensional measuring system, calculating a wrapped phase image and a background light intensity image, and obtaining calibration parameters of the projector and the camera in the system through calibration;
S2: converting the wrapped phase map and the background light intensity map from S1 into the fringe order image required for phase unwrapping through a convolutional neural network;
S2 specifically includes the following steps:
S2.1: constructing a convolutional neural network model for unwrapping the phase of the input wrapped phase image; the convolutional neural network model in S2.1 comprises Batch-Norm layers, ReLU layers, drop-out layers, and a plurality of convolutional layers, wherein the sizes of the convolutional kernels comprise 3x3, 3x1, and 1x3;
S2.2: enhancing the representation of the input wrapped phase image by adding non-bottleneck one-dimensional residual modules in the convolutional neural network model;
S2.3: predicting and regressing on the input wrapped phase image, and outputting the fringe order image;
loss functions required in the iterative optimization process of constructing the convolutional neural network in the S2 are obtained based on three-dimensional consistency, structural consistency and phase consistency respectively;
the coordinates obtained by the camera imaging model transformation are set such that, for a point (u_c, v_c), the corresponding coordinate on camera 2 is (u_1, v_1); under the same world coordinate system, for the coordinates obtained by the three-dimensional measurement method based on phase matching, the coordinate on camera 2 corresponding to (u_c, v_c) is (u_2, v_2); the function for calculating three-dimensional consistency is as follows:

Loss_3D = (1/N) · Σ_{i=1..N} ( |u_1,i − u_2,i| + |v_1,i − v_2,i| )

in the formula, N indicates the number of pixels and the subscript i indicates the i-th pixel;
the loss function for structural consistency is calculated as follows:

Loss_SSIM = 1 − SSIM(A, Â)

in the formula, A is the original input background light intensity image and Â is the image reconstructed based on the resolved three-dimensional coordinates;

the loss function for calculating phase consistency is as follows:

Loss_phase = (1/N) · Σ_{i=1..N} |φ_i − φ̂_i|

in the formula, φ is the original input wrapped phase image, φ̂ is the wrapped phase image reconstructed based on the resolved three-dimensional coordinates, N indicates the number of pixels, and the subscript i indicates the i-th pixel;
S3: converting the fringe order image from S2 into accurate three-dimensional information through phase-to-depth mapping and the corresponding system calibration parameters.
2. The geometric constraint phase unwrapping method based on self-supervised deep learning of claim 1, wherein the convolutional neural network of S2 comprises an Encoder module for performing feature extraction on the input image information and a Decoder module for processing the extracted features to recover the fringe order information.
3. The geometric constraint phase unwrapping method based on self-supervised deep learning as recited in claim 1, wherein the phase depth mapping in S3 is computed as follows:

Φ(u, v) = φ(u, v) + 2π · k(u, v)

in the formula, Φ is the unwrapped phase image, φ is the wrapped phase image, k is the fringe order image, and f is the fringe frequency; the fringe order k(u, v) takes integer values in [0, f − 1].
4. The geometric constraint phase unwrapping method based on self-supervised deep learning as recited in claim 1, wherein the calibration parameters in S3 are used as follows:

s_c · [u_c, v_c, 1]^T = M_c · [X_w, Y_w, Z_w, 1]^T
s_p · [u_p, v_p, 1]^T = M_p · [X_w, Y_w, Z_w, 1]^T

where (u_c, v_c) are the camera pixel coordinates, (u_p, v_p) are the coordinates of the corresponding point on the projector, M_c and M_p are the 3x4 calibration parameter matrices of the camera and the projector, s_c and s_p are scale factors, and X_w, Y_w and Z_w are the coordinates of the object in the world coordinate system.
CN202111458588.1A 2021-12-02 2021-12-02 Geometric constraint phase unwrapping method based on self-supervision deep learning Active CN113884027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111458588.1A CN113884027B (en) 2021-12-02 2021-12-02 Geometric constraint phase unwrapping method based on self-supervision deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111458588.1A CN113884027B (en) 2021-12-02 2021-12-02 Geometric constraint phase unwrapping method based on self-supervision deep learning

Publications (2)

Publication Number Publication Date
CN113884027A CN113884027A (en) 2022-01-04
CN113884027B true CN113884027B (en) 2022-03-18

Family

ID=79016251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111458588.1A Active CN113884027B (en) 2021-12-02 2021-12-02 Geometric constraint phase unwrapping method based on self-supervision deep learning

Country Status (1)

Country Link
CN (1) CN113884027B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115914792A (en) * 2022-12-22 2023-04-04 长春理工大学 Real-time multidimensional imaging self-adaptive adjustment system and method based on deep learning
CN117689705B (en) * 2024-01-31 2024-05-28 南昌虚拟现实研究院股份有限公司 Deep learning stripe structure light depth reconstruction method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110487216A (en) * 2019-09-20 2019-11-22 西安知象光电科技有限公司 A kind of fringe projection 3-D scanning method based on convolutional neural networks
CN111523618A (en) * 2020-06-18 2020-08-11 南京理工大学智能计算成像研究院有限公司 Phase unwrapping method based on deep learning
CN111879258A (en) * 2020-09-28 2020-11-03 南京理工大学 Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet
CN113505626A (en) * 2021-03-15 2021-10-15 南京理工大学 Rapid three-dimensional fingerprint acquisition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Fringe pattern analysis using deep learning";Shijie Feng;《ADVANCED PHOTONICS》;20190228;正文第025001-1-025001-6页 *
"深度学习技术在条纹投影三维成像中的应用";冯世杰;《红外与激光工程》;20200331;第49卷(第3期);正文第0303018-1-0303018-14页 *

Also Published As

Publication number Publication date
CN113884027A (en) 2022-01-04

Similar Documents

Publication Publication Date Title
CN113884027B (en) Geometric constraint phase unwrapping method based on self-supervision deep learning
CN108038902A (en) A kind of high-precision three-dimensional method for reconstructing and system towards depth camera
CN102445165B (en) Stereo vision measurement method based on single-frame color coding grating
CN103743352B (en) A kind of 3 D deformation measuring method based on polyphaser coupling
CN104182982A (en) Overall optimizing method of calibration parameter of binocular stereo vision camera
CN108734776A (en) A kind of three-dimensional facial reconstruction method and equipment based on speckle
CN109945802B (en) Structured light three-dimensional measurement method
CN114494388B (en) Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment
Zheng et al. Minimal solvers for 3d geometry from satellite imagery
CN108305277A (en) A kind of heterologous image matching method based on straightway
CN105787464B (en) A kind of viewpoint scaling method of a large amount of pictures in three-dimensional scenic
CN103424087B (en) A kind of large-scale steel plate three-dimensional measurement joining method
CN110378995A (en) A method of three-dimensional space modeling is carried out using projection feature
Ye et al. Accurate and dense point cloud generation for industrial Measurement via target-free photogrammetry
CN116592792A (en) Measurement method and system for assisting relative phase stereo matching by using speckle
CN116935013B (en) Circuit board point cloud large-scale splicing method and system based on three-dimensional reconstruction
CN116934981A (en) Stripe projection three-dimensional reconstruction method and system based on dual-stage hybrid network
CN113066165B (en) Three-dimensional reconstruction method and device for multi-stage unsupervised learning and electronic equipment
CN112330814B (en) Structured light three-dimensional reconstruction method based on machine learning
Albouy et al. Accurate 3D structure measurements from two uncalibrated views
CN111462199B (en) Rapid speckle image matching method based on GPU
CN112927299B (en) Calibration method and device and electronic equipment
CN113432550A (en) Large-size part three-dimensional measurement splicing method based on phase matching
CN113610906A (en) Fusion image guidance-based multi-parallax image sequence registration method
CN112950697B (en) Monocular unsupervised depth estimation method based on CBAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant