CN113884027B - Geometric constraint phase unwrapping method based on self-supervised deep learning - Google Patents
Geometric constraint phase unwrapping method based on self-supervised deep learning
- Publication number: CN113884027B (application CN202111458588.1A)
- Authority: CN (China)
- Prior art keywords: phase, image, dimensional, camera, coordinates
- Prior art date: 2021-12-02
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2518—Projection by scanning of the object
- G01B11/2527—Projection by scanning of the object with phase change by in-plane movement of the pattern
- G01B11/2504—Calibration devices
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention relates to a geometric constraint phase unwrapping method based on self-supervised deep learning, and belongs to the technical field of image processing. The method comprises the following steps. S1: acquiring original fringe images of an object to be measured with a three-dimensional measuring system, computing the wrapped phase image and the background light intensity image from them, and obtaining the calibration parameters of the projector and the cameras in the system through calibration. S2: converting the wrapped phase image and the background light intensity image from S1 into the fringe order image required for phase unwrapping, using a convolutional neural network. S3: computing accurate three-dimensional information from the fringe order image of S2 through the phase-depth mapping and the corresponding system calibration parameters. The method overcomes the weak generalization capability and strong data dependence of phase unwrapping based on supervised learning.
Description
Technical Field
The invention relates to a geometric constraint phase unwrapping method based on self-supervised deep learning, and belongs to the technical field of image analysis.
Background
In fringe projection profilometry, phase recovery is performed on the fringe images captured by a camera, yielding a wrapped phase whose values change periodically. To establish an unambiguous one-to-one correspondence between projector coordinates and camera coordinates, the camera must capture additional fringe images so that the wrapped phase can be unwrapped into a continuous absolute phase. In the phase unwrapping step, current research therefore focuses on how to compute the correct fringe order without capturing additional fringe images while still guaranteeing the accuracy of phase unwrapping.
To achieve high-precision and high-robustness phase unwrapping, a typical solution is to add extra hardware: an additional camera is added to the conventional single-camera, single-projector three-dimensional measurement system. Such methods are referred to as geometric constraint phase unwrapping methods.
Recently, deep learning has been introduced into fringe projection profilometry, and deep-learning-based methods have broken through many technical bottlenecks in the phase unwrapping step. However, all deep-learning-based methods currently used in fringe projection profilometry are supervised, and generally comprise a training step and a testing step. Training requires capturing a large amount of labeled data in advance, which is very time-consuming, and in some special scenes, such as animal hearts or moving aircraft wings, acquiring a large amount of labeled data is simply impractical. Moreover, the labeled data must be independent and identically distributed; otherwise, the trained model may suffer serious generalization problems during testing. In other words, when the training set is limited or the data distribution of the test set differs greatly from that of the training set, the trained model cannot produce satisfactory results. This greatly limits the practical application of supervised-learning-based fringe projection profilometry, which is therefore frequently questioned for its low universality and strong data dependence.
Disclosure of Invention
The invention aims to overcome the above problems in the prior art by providing a geometric constraint phase unwrapping method based on self-supervised deep learning, which resolves the weak generalization capability and strong data dependence of phase unwrapping based on supervised learning.
To this end, the geometric constraint phase unwrapping method based on self-supervised deep learning of the present invention comprises the following steps:
S1: acquiring original fringe images of an object to be measured with a three-dimensional measuring system, computing the wrapped phase image and the background light intensity image, and obtaining the calibration parameters of the projector and the cameras in the system through calibration;
S2: converting the wrapped phase image and the background light intensity image from S1 into the fringe order image required for phase unwrapping, using a convolutional neural network;
S3: computing accurate three-dimensional information from the fringe order image of S2 through the phase-depth mapping and the corresponding system calibration parameters.
Further, the convolutional neural network in S2 comprises an Encoder module and a Decoder module: the Encoder module performs feature extraction on the input image information, and the Decoder module processes the extracted features to recover the fringe order information.
Further, S2 specifically comprises the following steps:
S2.1: constructing a convolutional neural network model for unwrapping the input wrapped phase image;
S2.2: enhancing the features of the input wrapped phase image by adding one-dimensional non-bottleneck residual modules to the convolutional neural network model;
S2.3: performing prediction and regression on the input wrapped phase image and outputting the fringe order image.
Further, the phase-depth mapping in S3 is computed as follows:

$\Phi(u, v) = \varphi(u, v) + 2\pi k(u, v)$

where $\Phi$ is the unwrapped phase image, $\varphi$ is the wrapped phase image, $k$ is the fringe order image, and $f$ is the fringe frequency, with $k \in \{0, 1, \dots, f-1\}$.
Further, the calibration-parameter computation in S3 is as follows:

$s \, [u, v, 1]^T = M \, [X_w, Y_w, Z_w, 1]^T$

where $(u^c, v^c)$ are the camera pixel coordinates, $(u^p, v^p)$ are the coordinates of the corresponding point on the projector, $M$ denotes the calibration parameters (the 3×4 projection matrices) of the camera and the projector, and $X_w$, $Y_w$, $Z_w$ are the coordinates of the object in the world coordinate system; writing this relation for both the camera pixel and its matched projector coordinate yields three linear equations from which $(X_w, Y_w, Z_w)$ is solved.
Further, the loss functions required in the iterative optimization of the convolutional neural network constructed in S2 are obtained based on three-dimensional consistency, structural consistency, and phase consistency, respectively.
Further, let $(u_2, v_2)$ be the coordinates on camera 2 obtained by transforming the reconstructed three-dimensional point through the camera imaging model, and let $(\hat{u}_2, \hat{v}_2)$ be the coordinates on camera 2 obtained, under the same world coordinate system, by the three-dimensional measurement method based on phase matching.

The loss function for three-dimensional consistency is as follows:

$L_{3D} = \frac{1}{N} \sum_{i=1}^{N} \left[ (u_{2,i} - \hat{u}_{2,i})^2 + (v_{2,i} - \hat{v}_{2,i})^2 \right]$
Further, the loss function for structural consistency is as follows:

$L_{SSIM} = 1 - \mathrm{SSIM}(A, \hat{A})$

where $A$ is the original input background light intensity image and $\hat{A}$ is the image reconstructed from the resolved three-dimensional coordinates.
Further, the loss function for phase consistency is as follows:

$L_{phase} = \frac{1}{N} \sum_{i=1}^{N} \left| \varphi_i - \hat{\varphi}_i \right|$

where $\varphi$ is the original input wrapped phase image, $\hat{\varphi}$ is the wrapped phase image reconstructed from the resolved three-dimensional coordinates, $N$ is the number of pixels, and the subscript $i$ denotes the $i$-th pixel.
Further, the convolutional neural network model in S2.1 comprises a plurality of convolutional layers, batch normalization (Batch-Norm) layers, ReLU layers, and dropout layers, and the convolution kernel sizes include 3×3, 3×1, and 1×3.
The invention has the following beneficial effects: 1) the correct unwrapped phase image is obtained from only the wrapped phase image and the background light intensity image, without projecting additional structured-light patterns, so phase unwrapping can be performed at high speed and with high precision;
2) the weak generalization capability, strong data dependence, and related problems of supervised learning are overcome.
Drawings
FIG. 1 is a flow chart of the geometric constraint phase unwrapping method based on self-supervised deep learning according to the present invention;
FIG. 2 is a diagram of the basic structure of the convolutional neural network GCPUNet of the present invention;
FIG. 3 is a schematic structural diagram of the three-dimensional measuring system according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views that illustrate only the basic structure of the invention, and thus show only the elements relevant to it.
As shown in FIG. 1, the geometric constraint phase unwrapping method based on self-supervised deep learning of the present invention comprises the following steps:
s1: the method comprises the steps of collecting an original fringe picture of an object to be measured through a three-dimensional measuring system, calculating to obtain a wrapping phase image and a background light intensity image, and obtaining calibration parameters of a projector and a camera in the system through calibration.
In FIG. 1, the wrapped phase image is denoted $\varphi$ and the background light intensity image is denoted $A$.
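As an illustration of this step, the following sketch (hypothetical helper names, assuming standard N-step phase-shifting fringes; not taken from the patent itself) computes the wrapped phase and background intensity from the captured fringe images:

```python
import numpy as np

def wrapped_phase_and_background(images):
    """Recover the wrapped phase and background intensity from N fringe
    images I_n = A + B*cos(phi + 2*pi*n/N), n = 0..N-1 (assumed model)."""
    I = np.asarray(images, dtype=np.float64)       # shape (N, H, W)
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    phi = -np.arctan2(num, den)                    # wrapped phase in (-pi, pi]
    A = I.mean(axis=0)                             # background light intensity
    return phi, A
```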
S2: the wrapped phase image and the background light intensity image from S1 are converted by a convolutional neural network into the fringe order image required for phase unwrapping. The loss functions required in the iterative optimization of the convolutional neural network are obtained based on three-dimensional consistency, structural consistency, and phase consistency, respectively. The convolutional neural network comprises an Encoder module and a Decoder module: the Encoder module performs feature extraction on the input image information, and the Decoder module processes the extracted features to recover the fringe order information.
Three-dimensional consistency means that the three-dimensional data reconstructed by one camera and one projector should agree with the three-dimensional data reconstructed by the two cameras. The loss function based on structural consistency requires that the original input image be structurally similar to the image reconstructed from the resolved three-dimensional coordinates. The loss function based on phase consistency requires that the original input wrapped phase agree with the wrapped phase values reconstructed from the resolved three-dimensional coordinates.
As shown in FIG. 3, using the wrapped phase $\varphi$ and the fringe order $k$ output by the network, the projector coordinate corresponding to each camera-1 pixel can be calculated, and the three-dimensional data of that pixel is thereby obtained. This three-dimensional data is continuously updated during each iterative optimization step, and transforming it through the imaging model of camera 2 yields the corresponding coordinates $(u_2, v_2)$ on camera 2. Under the same world coordinate system, a three-dimensional measurement method based on phase matching, i.e., searching along the corresponding epipolar lines of the two cameras for the coordinates whose absolute phases agree, yields a second set of corresponding coordinates on camera 2, denoted $(\hat{u}_2, \hat{v}_2)$, which is likewise updated during each iterative optimization. Two sets of three-dimensional data are thus obtained, and when they are consistent, the coordinates $(u_2, v_2)$ and $(\hat{u}_2, \hat{v}_2)$ should coincide. The loss function based on three-dimensional consistency is:

$L_{3D} = \frac{1}{N} \sum_{i=1}^{N} \left[ (u_{2,i} - \hat{u}_{2,i})^2 + (v_{2,i} - \hat{v}_{2,i})^2 \right]$
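A minimal sketch of this loss term (illustrative names; the squared-difference form follows the reconstruction above and is an assumption, since the patent's formula image is not reproduced here):

```python
import torch

def three_d_consistency_loss(uv_reproj, uv_matched):
    """uv_reproj:  (N, 2) camera-2 coordinates from reprojecting the
                   camera-1/projector reconstruction through camera 2's model.
       uv_matched: (N, 2) camera-2 coordinates found by epipolar phase matching."""
    return torch.mean(torch.sum((uv_reproj - uv_matched) ** 2, dim=1))
```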
The loss function for structural consistency is calculated as follows:

$L_{SSIM} = 1 - \mathrm{SSIM}(A, \hat{A})$

where $A$ is the original input background light intensity image and $\hat{A}$ is the image reconstructed from the resolved three-dimensional coordinates.
Structural similarity (SSIM, Structural Similarity Index) measures the similarity of two images in terms of luminance, contrast, and structure, as shown in the following formula:

$\mathrm{SSIM}(X, Y) = \dfrac{(2\mu_X \mu_Y + C_1)(2\sigma_{XY} + C_2)}{(\mu_X^2 + \mu_Y^2 + C_1)(\sigma_X^2 + \sigma_Y^2 + C_2)}$

where $\mu_X$ and $\mu_Y$ are the means of images X and Y, $\sigma_X^2$ and $\sigma_Y^2$ are their variances, $\sigma_{XY}$ is their covariance, and $C_1$, $C_2$ are the usual small constants that stabilize the division.
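A sketch of the structural-consistency loss with windowed SSIM statistics (PyTorch; the window size and constants are conventional choices, not specified by the patent):

```python
import torch.nn.functional as F

def structural_consistency_loss(a, a_rec, win=7, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM between the input background image a and the reconstruction
    a_rec, both of shape (B, 1, H, W) with values scaled to [0, 1]."""
    pad = win // 2
    mu_x = F.avg_pool2d(a, win, 1, pad)
    mu_y = F.avg_pool2d(a_rec, win, 1, pad)
    var_x = F.avg_pool2d(a * a, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(a_rec * a_rec, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(a * a_rec, win, 1, pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim.mean()
```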
The loss function for phase consistency is calculated as follows:

$L_{phase} = \frac{1}{N} \sum_{i=1}^{N} \left| \varphi_i - \hat{\varphi}_i \right|$

where $\varphi$ is the original input wrapped phase image and $\hat{\varphi}$ is the wrapped phase image reconstructed from the resolved three-dimensional coordinates.
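And the phase-consistency term, here written with an L1 penalty (an assumption matching the reconstruction above; a wrap-aware difference could be substituted):

```python
import torch

def phase_consistency_loss(phi_in, phi_rec):
    """Mean absolute difference over all N pixels between the input wrapped
    phase and the wrapped phase reconstructed from the resolved 3-D points."""
    return torch.mean(torch.abs(phi_in - phi_rec))
```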
S2.1: a convolutional neural network model is constructed for unwrapping the input wrapped phase image. The model comprises a plurality of convolutional layers, batch normalization (Batch-Norm) layers, ReLU layers, and dropout layers, and the convolution kernel sizes include 3×3, 3×1, and 1×3.
S2.2: as shown in FIG. 2, the input features are enhanced by adding one-dimensional non-bottleneck residual modules to the convolutional neural network model. The residual connection in each non-bottleneck residual module is established between its input and output, which effectively improves the learning capability of the network and alleviates the degradation problem of deep networks, as sketched below.
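The sketch shows one plausible form of such a module (ERFNet-style factorized 3×1/1×3 convolutions with batch normalization, ReLU, dropout, and an input-to-output skip connection; the exact channel counts and layer ordering in GCPUNet may differ):

```python
import torch
import torch.nn as nn

class NonBottleneck1D(nn.Module):
    """One-dimensional (factorized) non-bottleneck residual module."""
    def __init__(self, channels, dropout=0.1):
        super().__init__()
        self.conv3x1_1 = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3_1 = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv3x1_2 = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3_2 = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.bn2 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout2d(dropout)

    def forward(self, x):
        out = torch.relu(self.conv3x1_1(x))
        out = torch.relu(self.bn1(self.conv1x3_1(out)))
        out = torch.relu(self.conv3x1_2(out))
        out = self.bn2(self.conv1x3_2(out))
        out = self.drop(out)
        return torch.relu(out + x)  # residual connection between input and output
```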
S2.3: prediction and regression are performed, and the fringe order image is output.
S3: accurate three-dimensional information is obtained from the fringe order image of S2 through the phase-depth mapping and the corresponding system calibration parameters.
The three-dimensional information of the object is denoted 3D in FIG. 1.
As shown in FIG. 3, the phase-depth mapping in S3 is computed as follows:

$\Phi(u, v) = \varphi(u, v) + 2\pi k(u, v)$

where $\Phi$ is the unwrapped phase image, $\varphi$ is the wrapped phase image, $k$ is the fringe order image, and $f$ is the fringe frequency, with $k \in \{0, 1, \dots, f-1\}$.
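In code, the mapping is a single element-wise operation (illustrative sketch of the formula above):

```python
import numpy as np

def unwrap_phase(phi, k):
    """Absolute phase from the wrapped phase phi and the fringe order image k
    predicted by the network (k takes integer values in 0..f-1)."""
    return phi + 2.0 * np.pi * k
```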
The calibration-parameter computation in S3 is as follows:

$s \, [u, v, 1]^T = M \, [X_w, Y_w, Z_w, 1]^T$

where $(u^c, v^c)$ are the camera pixel coordinates and $(u^p, v^p)$ are the coordinates of the corresponding point on the projector; $M$ denotes the calibration parameters (the 3×4 projection matrices of the camera and the projector obtained in S1); and $X_w$, $Y_w$, $Z_w$ are the coordinates of the object in the world coordinate system. Writing the relation for both the camera pixel and its matched projector coordinate gives three linear equations from which $(X_w, Y_w, Z_w)$ is solved.
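A sketch of the resulting triangulation, assuming 3×4 projection matrices Mc (camera) and Mp (projector) and a matched projector coordinate u_p for each camera pixel (function and variable names are illustrative):

```python
import numpy as np

def triangulate(u_c, v_c, u_p, Mc, Mp):
    """Solve the three linear equations for (Xw, Yw, Zw) given a camera pixel
    (u_c, v_c), its matched projector coordinate u_p, and the 3x4 calibration
    matrices Mc and Mp."""
    A = np.array([
        [Mc[0, 0] - u_c * Mc[2, 0], Mc[0, 1] - u_c * Mc[2, 1], Mc[0, 2] - u_c * Mc[2, 2]],
        [Mc[1, 0] - v_c * Mc[2, 0], Mc[1, 1] - v_c * Mc[2, 1], Mc[1, 2] - v_c * Mc[2, 2]],
        [Mp[0, 0] - u_p * Mp[2, 0], Mp[0, 1] - u_p * Mp[2, 1], Mp[0, 2] - u_p * Mp[2, 2]],
    ])
    b = np.array([
        u_c * Mc[2, 3] - Mc[0, 3],
        v_c * Mc[2, 3] - Mc[1, 3],
        u_p * Mp[2, 3] - Mp[0, 3],
    ])
    return np.linalg.solve(A, b)   # world coordinates (Xw, Yw, Zw)
```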
For fringe projection profilometry (FPP), the invention introduces deep learning to design a phase unwrapping convolutional neural network (GCPUNet) that converts the wrapped phase image and the background light intensity image into the fringe order image used to compute the unwrapped phase; combined with the calibration parameters, accurate three-dimensional information is obtained.
The invention solves the difficulty that FPP techniques cannot obtain a high-precision unwrapped phase of the measured object at high speed and high efficiency in special measurement scenes such as animal hearts and moving aircraft wings, and effectively improves the precision and speed of three-dimensional measurement. Meanwhile, the self-supervised learning scheme overcomes the inherently weak generalization of deep learning models based on supervised learning, and effectively improves the generalization performance of the neural network.
In light of the foregoing description of the preferred embodiments of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification and must be determined according to the scope of the claims.
Claims (4)
1. A geometric constraint phase unwrapping method based on self-supervised deep learning, characterized by comprising the following steps:
S1: acquiring original fringe images of an object to be measured with a three-dimensional measuring system, computing the wrapped phase image and the background light intensity image, and obtaining the calibration parameters of the projector and the cameras in the system through calibration;
S2: converting the wrapped phase image and the background light intensity image from S1 into the fringe order image required for phase unwrapping, using a convolutional neural network;
wherein S2 specifically comprises the following steps:
S2.1: constructing a convolutional neural network model for unwrapping the input wrapped phase image, the model comprising batch normalization (Batch-Norm) layers, ReLU layers, dropout layers, and a plurality of convolutional layers whose kernel sizes include 3×3, 3×1, and 1×3;
S2.2: enhancing the features of the input wrapped phase image by adding one-dimensional non-bottleneck residual modules to the convolutional neural network model;
S2.3: performing prediction and regression on the input wrapped phase image and outputting the fringe order image;
wherein the loss functions required in the iterative optimization of the convolutional neural network in S2 are obtained based on three-dimensional consistency, structural consistency, and phase consistency, respectively;
letting $(u_2, v_2)$ be the coordinates on camera 2 obtained by transforming the reconstructed three-dimensional point through the camera imaging model, and $(\hat{u}_2, \hat{v}_2)$ be the coordinates on camera 2 obtained, under the same world coordinate system, by the three-dimensional measurement method based on phase matching,
the loss function for three-dimensional consistency is:

$L_{3D} = \frac{1}{N} \sum_{i=1}^{N} \left[ (u_{2,i} - \hat{u}_{2,i})^2 + (v_{2,i} - \hat{v}_{2,i})^2 \right]$

the loss function for structural consistency is:

$L_{SSIM} = 1 - \mathrm{SSIM}(A, \hat{A})$

where $A$ is the original input background light intensity image and $\hat{A}$ is the image reconstructed from the resolved three-dimensional coordinates;
the loss function for phase consistency is:

$L_{phase} = \frac{1}{N} \sum_{i=1}^{N} \left| \varphi_i - \hat{\varphi}_i \right|$

where $\varphi$ is the original input wrapped phase image, $\hat{\varphi}$ is the wrapped phase image reconstructed from the resolved three-dimensional coordinates, $N$ is the number of pixels, and the subscript $i$ denotes the $i$-th pixel;
S3: computing accurate three-dimensional information from the fringe order image of S2 through the phase-depth mapping and the corresponding system calibration parameters.
2. The geometric constraint phase unwrapping method based on self-supervised deep learning according to claim 1, characterized in that the convolutional neural network of S2 comprises an Encoder module for performing feature extraction on the input image information and a Decoder module for processing the extracted features to recover the fringe order information.
3. The geometric constraint phase unwrapping method based on self-supervised deep learning according to claim 1, characterized in that the phase-depth mapping in S3 is computed as follows:

$\Phi(u, v) = \varphi(u, v) + 2\pi k(u, v)$

where $\Phi$ is the unwrapped phase image, $\varphi$ is the wrapped phase image, $k$ is the fringe order image, and $f$ is the fringe frequency.
4. The geometric constraint phase unwrapping method based on self-supervised deep learning according to claim 1, characterized in that the calibration-parameter computation in S3 is as follows:

$s \, [u, v, 1]^T = M \, [X_w, Y_w, Z_w, 1]^T$

where $(u^c, v^c)$ are the camera pixel coordinates, $(u^p, v^p)$ are the coordinates of the corresponding point on the projector, $M$ denotes the calibration parameters of the camera and the projector, and $X_w$, $Y_w$, $Z_w$ are the coordinates of the object in the world coordinate system.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111458588.1A | 2021-12-02 | 2021-12-02 | Geometric constraint phase unwrapping method based on self-supervised deep learning |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN113884027A | 2022-01-04 |
| CN113884027B | 2022-03-18 |
Family
- ID=79016251

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111458588.1A (CN113884027B, Active) | Geometric constraint phase unwrapping method based on self-supervised deep learning | 2021-12-02 | 2021-12-02 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN113884027B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115914792A (en) * | 2022-12-22 | 2023-04-04 | 长春理工大学 | Real-time multidimensional imaging self-adaptive adjustment system and method based on deep learning |
CN117689705B (en) * | 2024-01-31 | 2024-05-28 | 南昌虚拟现实研究院股份有限公司 | Deep learning stripe structure light depth reconstruction method and device |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110487216A (en) * | 2019-09-20 | 2019-11-22 | 西安知象光电科技有限公司 | A kind of fringe projection 3-D scanning method based on convolutional neural networks |
CN111523618A (en) * | 2020-06-18 | 2020-08-11 | 南京理工大学智能计算成像研究院有限公司 | Phase unwrapping method based on deep learning |
CN111879258A (en) * | 2020-09-28 | 2020-11-03 | 南京理工大学 | Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet |
CN113505626A (en) * | 2021-03-15 | 2021-10-15 | 南京理工大学 | Rapid three-dimensional fingerprint acquisition method and system |
Non-Patent Citations (2)

| Title |
|---|
| Shijie Feng, "Fringe pattern analysis using deep learning", Advanced Photonics, 2019-02-28, pp. 025001-1 to 025001-6. * |
| Feng Shijie, "深度学习技术在条纹投影三维成像中的应用" (Applications of deep learning technology in fringe projection three-dimensional imaging), Infrared and Laser Engineering, vol. 49, no. 3, 2020-03-31, pp. 0303018-1 to 0303018-14. * |
Also Published As
Publication number | Publication date |
---|---|
CN113884027A (en) | 2022-01-04 |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |