CN113129294A - Multi-scale connection deep learning one-step phase unwrapping method - Google Patents

Multi-scale connection deep learning one-step phase unwrapping method

Info

Publication number
CN113129294A
Authority
CN
China
Prior art keywords
pixels
phase
feature
multiplied
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110470485.0A
Other languages
Chinese (zh)
Inventor
谢先明
田宪辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN202110470485.0A
Publication of CN113129294A
Legal status: Pending

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G01S 13/9023 — SAR image post-processing techniques combined with interferometric techniques
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural network learning methods
    • G06V 10/462 — Salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06T 2207/10032 — Satellite or aerial image; remote sensing
    • G06T 2207/10044 — Radar image
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20132 — Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-scale connection deep learning one-step phase unwrapping method, which comprises: creating an InSAR simulation data set; feeding the two kinds of data created in S1 into an improved DeepLabV3+ model for training; and feeding the phase image to be unwrapped into the trained DeepLabV3+ model to obtain the true unwrapped phase image. The method takes DeepLabV3+ as a framework and optimizes its design, building a network architecture suitable for unwrapping different types of interferograms and realizing a direct mapping from the wrapped phase to the true phase. Multi-scale skip connections organically combine the semantic information of feature maps at different scales in the encoding module with the high-level semantic information of feature maps in the decoding module. The number of network model parameters is greatly reduced, and the phase unwrapping accuracy and the training efficiency of the network are improved. Once trained, the network runs quickly and needs no post-processing; experimental results show that the method has good generalization ability and stability, high time efficiency, and important application value.

Description

Multi-scale connection deep learning one-step phase unwrapping method
Technical Field
The invention belongs to the field of image phase unwrapping, relates to image phase unwrapping in interferometric applications, and particularly relates to a multi-scale connection deep learning one-step phase unwrapping method.
Background
Existing phase unwrapping algorithms fall roughly into path-tracking algorithms, minimum-norm algorithms, network-planning unwrapping algorithms, noise-robust Kalman filtering algorithms, and the like. Path-tracking algorithms include: 1) quality-guided algorithms, 2) the branch-cut method, 3) the mask-cut method, and 4) the minimum-discontinuity algorithm; they prevent the global propagation of phase errors by choosing a suitable integration path that confines errors to a limited region. Minimum-norm algorithms have good unwrapping efficiency and convert the phase unwrapping problem into a global optimization problem under a minimum-norm framework, but they produce an over-smoothed phase. Network-planning methods convert the phase unwrapping problem into a network optimization problem of solving a cost flow, mainly including minimum cost flow and statistical cost flow, but noise propagates along the integration path, so the unwrapping result is not ideal. The Kalman filtering algorithm suppresses interference phase noise while unwrapping and is not affected by phase residue points, reducing the dependence of phase unwrapping on pre-filtering, but its time cost is high. Deep learning algorithms proposed in recent years realize phase unwrapping by constructing an encoding-decoding network: phase unwrapping is treated as a regression problem, and an end-to-end nonlinear mapping from the input wrapped phase to the output true phase is established, thus realizing one-step unwrapping.
Path-tracking, minimum-norm, and network-planning methods are easily affected by interference phase noise and sometimes cannot effectively unwrap noisy interferograms, and the path-tracking and network-planning algorithms struggle to balance unwrapping accuracy against efficiency. State estimation algorithms resist phase noise well and can effectively handle low signal-to-noise-ratio interferograms, but at a high time cost. More and more deep learning unwrapping methods succeed in fixed application scenarios, but there is as yet no deep convolutional neural network (DCNN) framework that achieves good results on measured data across multiple fields.
Disclosure of Invention
To solve these problems, the invention provides an efficient multi-scale connection deep learning one-step phase unwrapping method with relatively high unwrapping accuracy and strong noise resistance.
The technical scheme for realizing the purpose of the invention is as follows:
the multi-scale connection deep learning one-step phase unwrapping method comprises the following steps:
S1, creating an interferogram data set to obtain true phase maps and wrapped phase maps;
S2, feeding the two kinds of data created in S1 into a multi-scale-connected DeepLabV3+ model for training to obtain trained weights;
and S3, feeding the phase image to be unwrapped into the trained multi-scale-connected DeepLabV3+ model to obtain the unwrapped true phase image.
Further, S1 includes the following steps:
S1-1, obtaining a random initial matrix of size 2 × 2 to 20 × 20 through a random function;
S1-2, enlarging the initial matrix to 256 pixels × 256 pixels by bicubic interpolation to obtain a true-phase interferogram;
S1-3, obtaining an initial matrix of size 400 pixels × 400 pixels using the first 20 Zernike polynomial coefficients;
S1-4, cropping a 256 pixel × 256 pixel true phase matrix from the Zernike matrix to obtain a true phase map;
S1-5, cropping a 256 pixel × 256 pixel true phase matrix from the elevation-derived true phase to obtain a true phase map;
and S1-6, wrapping the phase of the generated true phase maps and adding noise with different signal-to-noise ratios to obtain the wrapped phase maps.
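The seed-and-enlarge procedure of S1-1 and S1-2 can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: plain bilinear interpolation stands in for the bicubic step, and the 0–60 radian scaling follows the label-image range given later in the description.

```python
import numpy as np

def upsample_bilinear(mat, out_size=256):
    """Enlarge a small random matrix to out_size x out_size.

    The patent uses bicubic interpolation; bilinear is used here as a
    simpler stand-in with the same input/output shapes.
    """
    h, w = mat.shape
    ys = np.linspace(0, h - 1, out_size)
    xs = np.linspace(0, w - 1, out_size)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = mat[np.ix_(y0, x0)] * (1 - wx) + mat[np.ix_(y0, x1)] * wx
    bot = mat[np.ix_(y1, x0)] * (1 - wx) + mat[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)
n = rng.integers(2, 21)                 # S1-1: random size in 2..20
seed = rng.random((n, n))               # S1-1: random initial matrix
phase = upsample_bilinear(seed) * 60.0  # S1-2: enlarge, scale to 0..60 rad
```

Because bilinear interpolation is a convex combination of the seed values, the enlarged surface stays smooth and within the scaled range, which is the property the true-phase labels rely on.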
Further, S2 includes the following steps:
S2-1, a single-channel wrapped phase map enters the input layer of the encoding module of the multi-scale-connected DeepLabV3+ model and passes through the Xception module, which outputs a 16 pixel × 16 pixel × 2048 feature map together with a set of three skip-connection feature maps skip1, skip2 and skip3 of 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels;
S2-2, the upper-layer feature map is passed through an Atrous Spatial Pyramid Pooling (ASPP) module, which runs four convolutional layers with different sampling rates and a global pooling layer in parallel to obtain a feature map fusing information at different scales;
S2-3, the upper-layer feature map is up-sampled and adjusted to output a 64 pixel × 64 pixel × 256 feature map;
S2-4, the upper-layer feature map is concatenated and fused with the resized skip1, skip2 and skip3, and up-sampled to obtain a 256 pixel × 256 pixel × 128 feature map;
and S2-5, two separable convolution operations are applied to the upper-layer feature map to obtain a 256 pixel × 256 pixel × 128 feature map, and a final convolution yields the single-channel output map.
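The parallel different-rate convolutions of the ASPP step (S2-2) rest on atrous (dilated) convolution, in which the kernel taps are spread apart to enlarge the receptive field without adding parameters. A minimal numpy sketch — the 3 × 3 mean kernel, the rates 1, 2 and 4, and the "same" padding are illustrative assumptions, not the patent's exact configuration:

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """'Same'-padded 2-D convolution with a dilated 3x3 kernel.

    Dilation inserts (rate - 1) gaps between kernel taps, enlarging the
    receptive field -- the idea behind the parallel ASPP branches.
    """
    k = kernel.shape[0]
    pad = rate * (k // 2)
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * rate, j * rate
            out += kernel[i, j] * padded[di:di + img.shape[0],
                                         dj:dj + img.shape[1]]
    return out

img = np.random.default_rng(1).random((16, 16))
kernel = np.ones((3, 3)) / 9.0
# ASPP-style parallel branches at several sampling rates, then fused
branches = [dilated_conv2d(img, kernel, r) for r in (1, 2, 4)]
fused = np.stack(branches, axis=-1)     # channel-wise concatenation
```

Each branch keeps the 16 × 16 resolution of the encoder output while seeing context at a different scale, which is why concatenating them fuses multi-scale information without any resampling.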
Further, in the improved DeepLabV3+ model of S2, the network takes DeepLabV3+ as its framework and optimizes the encoding and decoding links, building a network architecture suitable for unwrapping different types of interferograms and realizing a direct mapping from the wrapped phase to the true phase. The encoding module mainly comprises a deep convolutional neural network (DCNN) and an Atrous Spatial Pyramid Pooling (ASPP) module. The DCNN takes the Modified Aligned Xception network as its backbone; it comprises an input layer, a middle layer and an output layer, outputs a dense feature map with high semantic information, and exports low-level feature maps at three different scales with resolutions of 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels. The number of residual network units repeatedly executed in the DCNN middle layer is reduced to 4. The ASPP module performs multi-scale feature map information fusion and adjusts the number of feature map channels using a "1 × 1 convolution + Batch Normalization (BN) + Rectified Linear Unit (ReLU)" operation; the encoder outputs a feature map carrying high-level semantic information with 256 channels and a resolution of 16 pixels × 16 pixels.
In the decoding link, the channel numbers of the three low-level feature maps of different scales output by the DCNN are adjusted using the "1 × 1 convolution + BN + ReLU" operation to obtain feature maps with resolutions of 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels, whose resolution is then raised to 64 pixels × 64 pixels by up-sampling convolution.
Meanwhile, a 4 × 4 bilinear-interpolation up-sampling convolution is applied to the feature map output by the encoding link to obtain a 64-channel feature map with a resolution of 64 × 64 pixels, which is organically concatenated with the low-level feature maps output by the DCNN through skip connections; finally, a 4 × 4 bilinear-interpolation up-sampling convolution and the "1 × 1 convolution + BN + ReLU" operation yield a feature map with a resolution of 256 × 256 pixels, namely the unwrapped phase map of the interferogram. The input of the S2 network is the wrapped phase map and its output is the corresponding unwrapped phase map.
The invention has the advantages that:
according to the invention, three low-level feature maps with different scales, with resolutions of 64 × 64, 32 × 32 and 16 × 16 respectively, are led out from a DCNN input layer, so that abundant phase detail information can be provided for feature maps carrying higher semantic information output by a coding link; the number of residual network units repeatedly executed in the DCNN middle layer is reduced to 4 times, so that better balance between the feature extraction and detail maintenance of the interference pattern is achieved, the parameter quantity of a network model is greatly reduced, and the phase unwrapping precision and the training efficiency of the network are improved. After the network is trained, winding phases under different scenes can be unfolded, the operation speed is high, no post-processing is needed, the algorithm is compared with algorithms such as a classical quality guide algorithm, a least square method based on FFT (fast Fourier transform) and a phase unwrapping algorithm (UNETPU) based on U-NET (universal-network-based) so that the algorithm S2 shows better robustness on winding phases containing various noises under different types, and experimental results prove that the method has good generalization capability and stability, is high in time efficiency and has important application value.
Drawings
FIG. 1a is a schematic diagram of network training of the multi-scale connection deep learning one-step phase unwrapping model in an embodiment of the present invention;
FIG. 1b is a schematic diagram of the multi-scale connection deep learning one-step phase unwrapping model feeding a wrapped interferogram to be unwrapped into the trained network model to obtain the unwrapping result in an embodiment of the present invention;
FIG. 2I is a schematic diagram of a true-phase interferogram and a noisy interferogram obtained by enlarging the initial matrix to 256 pixels × 256 pixels by bicubic interpolation in an embodiment of the present invention;
FIG. 2II is a schematic diagram of a true phase matrix of size 256 pixels × 256 pixels cropped from the Zernike matrix and a noisy interferogram in an embodiment of the present invention;
FIG. 2III is a schematic diagram of a 256 pixel × 256 pixel phase matrix cropped from the elevation-derived true phase and a noisy interferogram in an embodiment of the invention;
FIG. 3 is a diagram of the Modified Aligned Xception network, i.e., the convolutional DCNN network, in an embodiment of the present invention;
FIG. 4 is a diagram of the multi-scale connection deep learning neural network structure based on DeepLabV3+ in an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples and the accompanying drawings, in which the described examples are intended to illustrate only some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example:
the basic process of the multi-scale connection deep learning one-step phase unwrapping method proposed by the present invention is described below with reference to the accompanying drawings.
The phase unwrapping model based on deep learning is shown in fig. 1a and 1b. Fig. 1a is the network training schematic: the trained network model is obtained by establishing a nonlinear mapping between the wrapped phase and the true phase on a training data set. The unwrapping result is then obtained by feeding the wrapped interferogram to be unwrapped into the trained network model, as shown in fig. 1b.
The multi-scale connection deep learning one-step phase unwrapping method comprises the following steps:
S1, creating an interferogram data set to obtain 27000 groups of true phase maps and wrapped phase maps;
S2, feeding the two kinds of data created in S1 into a multi-scale-connected DeepLabV3+ model for training to obtain trained weights;
and S3, feeding the phase image to be unwrapped into the trained multi-scale-connected DeepLabV3+ model to obtain the unwrapped true phase image.
Further, S1 includes the following steps:
S1-1, obtaining a random initial matrix of size 2 × 2 to 20 × 20 through a random function;
S1-2, enlarging the initial matrix to 256 pixels × 256 pixels by bicubic interpolation to obtain a true-phase interferogram;
S1-3, obtaining an initial matrix of size 400 pixels × 400 pixels using the first 20 Zernike polynomial coefficients;
S1-4, cropping a 256 pixel × 256 pixel true phase matrix from the Zernike matrix to obtain a true phase map;
S1-5, cropping a 256 pixel × 256 pixel true phase matrix from the elevation-derived true phase to obtain a true phase map;
and S1-6, wrapping the phase of the generated true phase maps and adding noise with different signal-to-noise ratios to obtain the wrapped phase maps.
In interferometric applications, the true interferometric phase φ and the actually observed phase ψ (i.e., the wrapped phase) are related by

ψ = angle{exp(jφ)} ∈ (−π, π]          (1)

where j is the imaginary unit and jφ the complex phase; ψ is commonly called the wrapped phase. When the data set required by the network model is constructed, a true interference phase map is generated first and the wrapped phase map is obtained with formula (1); different types of noise are then added to the wrapped interferogram to obtain noisy wrapped phase maps with different signal-to-noise ratios, namely Gaussian noise with a standard deviation of 0 to 0.2 and salt-and-pepper noise with a distribution density of 0.01. The noisy wrapped phase maps are produced in the following three ways, as shown in figs. 2I–2III:
(I) The initial matrix is enlarged to 256 pixels × 256 pixels by bicubic interpolation to obtain a true-phase interferogram; the phase range of the label image is 0 to 60 radians.
(II) According to the Zernike function, a matrix of 400 pixels × 400 pixels (adjustable to the specific situation) is generated from the first 20 coefficient polynomials and cropped to 256 pixels × 256 pixels; the phase range of the initial-matrix label image is 0 to 60 radians.
(III) A true phase matrix of 256 pixels × 256 pixels is cropped from the elevation-derived true phase: Digital Elevation Model (DEM) data are first converted into a true interference phase according to Synthetic Aperture Radar Interferometry (InSAR) theory, and wrapped phase maps with different added noise are then generated; the image size is 256 pixels × 256 pixels and the phase range of the label image is 0 to 60 radians.
The data set contains 20000 training groups and 7000 validation groups. Figs. 2I–2III show partial interferogram data generated by the three data set generation methods in S1: fig. 2I corresponds to method (I) of S1-2, fig. 2II to method (II) of S1-4, and fig. 2III to method (III) of S1-5, each comprising a true interferometric phase map and wrapped phase maps with different added noise.
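The wrapping and noise-injection steps described above can be sketched directly in numpy. The standard deviation and the salt-and-pepper density follow the text; realizing the wrapping operator of formula (1) via `np.angle(np.exp(1j*phi))` is a common convention rather than the patent's stated code.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.random((256, 256)) * 60.0           # true phase, 0..60 rad

# Formula (1): the wrapped phase is the principal value of exp(j*phi)
psi = np.angle(np.exp(1j * phi))              # in (-pi, pi]

# Gaussian noise with standard deviation up to 0.2
noisy = psi + rng.normal(0.0, 0.2, psi.shape)

# Salt-and-pepper noise with distribution density 0.01
mask = rng.random(psi.shape) < 0.01
noisy[mask] = rng.choice([-np.pi, np.pi], size=mask.sum())

# Re-wrap so the network input stays a principal-value phase map
noisy = np.angle(np.exp(1j * noisy))
```

Re-wrapping after noise injection keeps every training input inside (−π, π], matching what an interferometer would actually observe.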
Further, S2 includes the following steps:
S2-1, a single-channel wrapped phase map enters the input layer of the encoding module of the multi-scale-connected DeepLabV3+ model and passes through the Xception module, which outputs a 16 pixel × 16 pixel × 2048 feature map together with a set of three skip-connection feature maps skip1, skip2 and skip3 of 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels;
S2-2, the upper-layer feature map is passed through an Atrous Spatial Pyramid Pooling (ASPP) module, which runs four convolutional layers with different sampling rates and a global pooling layer in parallel to obtain a feature map fusing information at different scales;
S2-3, the upper-layer feature map is up-sampled and adjusted to output a 64 pixel × 64 pixel × 256 feature map;
S2-4, the upper-layer feature map is concatenated and fused with the resized skip1, skip2 and skip3, and up-sampled to obtain a 256 pixel × 256 pixel × 128 feature map;
and S2-5, two separable convolution operations are applied to the upper-layer feature map to obtain a 256 pixel × 256 pixel × 128 feature map, and a final convolution yields the single-channel output map.
The Xception (DCNN) module is introduced as follows:
As shown in FIG. 3, the Xception module is divided into an entry block, a middle block and an exit block. In the entry block, 32 3 × 3 convolution kernels first change the feature map to 128 pixels × 128 pixels × 32, and a further convolution adjusts the number of channels to 64; the feature map then passes through three residual modules (each containing three convolution operations), from which three skip connections of different sizes are exported, providing abundant phase detail information for the high-semantic feature map output by the encoding link. The number of residual network units repeatedly executed in the DCNN middle layer is reduced to 4 (with no size change), which balances interferogram feature extraction against detail preservation, greatly reduces the number of network model parameters, and improves the phase unwrapping accuracy and training efficiency of the network. The exit block consists of a residual block followed by three separable convolutions, and the final output is a 16 pixel × 16 pixel × 2048 feature map.
Further, as shown in fig. 4, the improved DeepLabV3+ model in S2 takes DeepLabV3+ as its framework, with the encoding and decoding links optimally designed to build a network architecture suitable for unwrapping different types of interferograms, thereby realizing a direct mapping from the wrapped phase to the true phase.
The encoding module mainly comprises the DCNN and the ASPP module. The DCNN takes the Modified Aligned Xception network as its backbone, comprises an input layer, a middle layer and an output layer, and outputs a dense feature map with high semantic information; low-level feature maps at three different scales of 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels are extracted at the DCNN input layer; the number of residual network units repeatedly executed in the DCNN middle layer is reduced to 4; the ASPP module performs multi-scale feature map information fusion and adjusts the number of feature map channels using the "1 × 1 convolution + BN + ReLU" operation, and the encoder outputs a feature map carrying high-level semantic information with 256 channels and a resolution of 16 pixels × 16 pixels. In the decoding link, the channel numbers of the three low-level feature maps of different scales output by the DCNN are adjusted using the "1 × 1 convolution + BN + ReLU" operation to obtain feature maps with resolutions of 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels, whose resolution is then raised to 64 pixels × 64 pixels by up-sampling convolution. Meanwhile, a 4 × 4 bilinear-interpolation up-sampling convolution is applied to the feature map output by the encoding link to obtain a 64-channel feature map with a resolution of 64 × 64 pixels, which is organically concatenated with the low-level feature maps output by the DCNN through skip connections; finally, a 4 × 4 bilinear-interpolation up-sampling convolution and the "1 × 1 convolution + BN + ReLU" operation yield a feature map with a resolution of 256 pixels × 256 pixels, namely the unwrapped phase map of the interferogram.
The input of the S2 network is the wrapped phase map and its output is the corresponding unwrapped phase map.
In S3, a single-channel wrapped phase map enters from the input layer; the DCNN produces a 16 pixel × 16 pixel × 2048 feature map, which the ASPP module then processes and outputs. The channel numbers of the three skip-connected low-level feature maps of different scales from the DCNN are adjusted using the "1 × 1 convolution + BN + ReLU" operation to obtain feature maps with resolutions of 64 × 64 pixels, 32 × 32 pixels and 16 × 16 pixels, whose resolution is then raised to 64 × 64 pixels by up-sampling convolution. Meanwhile, a 4 × 4 bilinear-interpolation up-sampling convolution is applied to the feature map output by the encoding link to obtain a 64-channel feature map with a resolution of 64 × 64 pixels, which is organically concatenated with the low-level feature maps output by the DCNN through skip connections; a 4 × 4 bilinear-interpolation up-sampling convolution yields a feature map with a resolution of 256 × 256 pixels, and finally two depthwise separable convolution layers and the "1 × 1 convolution + BN + ReLU" operation yield the unwrapped phase map of the interferogram.
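The up-sample-and-concatenate pattern of the decoding link can be traced shape-by-shape with plain numpy. This is a sketch under stated assumptions: nearest-neighbour repetition stands in for the 4 × 4 bilinear-interpolation up-sampling, a single 64-channel skip map stands in for the three fused skips, and the channel counts follow the figures in the text.

```python
import numpy as np

def upsample4(x):
    """Factor-4 upsampling of an HxWxC feature map.

    Nearest-neighbour repetition is used as a simple stand-in for the
    4x4 bilinear-interpolation upsampling described in the text.
    """
    return np.repeat(np.repeat(x, 4, axis=0), 4, axis=1)

rng = np.random.default_rng(5)
enc = rng.random((16, 16, 256))        # encoder output feature map
skip = rng.random((64, 64, 64))        # low-level map after 1x1 conv

up = upsample4(enc)                    # 16x16 -> 64x64
fused = np.concatenate([up, skip], axis=-1)   # skip-connection fusion
out = upsample4(fused)                 # 64x64 -> 256x256
```

The shape trace confirms the two ×4 stages recover the full 256 × 256 interferogram resolution from the 16 × 16 encoder output.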
The data set of this example was produced with MATLAB 2016a; the model was developed on Python 3.6 with the TensorFlow 1.13.0 framework and Keras 2.2.4. The main computer parameters for network model training and experimental testing are: NVIDIA GeForce RTX 2080Ti GPU, Intel Core i9-10900X CPU, 64 GB RAM. Unwrapping one image takes 0.016 s.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. The one-step phase unwrapping method for multi-scale connection deep learning, characterized by comprising the following steps:
S1, creating an InSAR data set to obtain real phase maps and wrapped phase maps;
S2, feeding the two kinds of data created in S1 into the multi-scale-connected DeepLab V3+ model for training to obtain trained weights;
and S3, feeding the phase map to be unwrapped into the trained multi-scale-connected DeepLab V3+ model to obtain the unwrapped real phase map.
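The relation underlying steps S1 to S3 is that the wrapped phase differs from the real phase by an integer number of 2π cycles, and one-step unwrapping learns this mapping directly. A minimal numerical check of that relation (the 1-D random ramp is purely illustrative):

```python
import numpy as np

# Real phase and its wrapped counterpart differ by 2*pi*k with integer k.
rng = np.random.default_rng(3)
true_phase = np.cumsum(rng.normal(scale=0.8, size=256))  # smooth phase ramp
wrapped = np.angle(np.exp(1j * true_phase))              # values in (-pi, pi]
k = (true_phase - wrapped) / (2 * np.pi)                 # integer cycle counts
```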
2. The multi-scale connection deep learning one-step phase unwrapping method according to claim 1, wherein S1 includes the steps of:
S1-1, obtaining a random initial matrix of size between 2 × 2 and 20 × 20 through a random function;
S1-2, enlarging the initial matrix to 256 pixels × 256 pixels by bicubic interpolation to obtain a real-phase interferogram;
S1-3, obtaining an initial matrix of size 400 pixels × 400 pixels using the first 20 Zernike polynomial coefficients;
S1-4, cropping a real phase matrix of size 256 pixels × 256 pixels from the Zernike matrix to obtain a real phase map;
S1-5, cropping a real phase matrix of size 256 pixels × 256 pixels from the enlarged real phase to obtain a real phase map;
and S1-6, wrapping the phase of the generated real phase maps and adding noise at different signal-to-noise ratios to obtain wrapped phase maps.
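Steps S1-1, S1-2 and S1-6 can be sketched with NumPy. This is a simplified sketch: bilinear interpolation stands in for the bicubic interpolation named in S1-2, and Gaussian phase noise is assumed as one possible noise model:

```python
import numpy as np

def bilinear_resize(a, out_h, out_w):
    """Enlarge a 2-D array by bilinear interpolation (a simplified
    stand-in for the bicubic interpolation of step S1-2)."""
    in_h, in_w = a.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = a[np.ix_(y0, x0)] * (1 - wx) + a[np.ix_(y0, x1)] * wx
    bot = a[np.ix_(y1, x0)] * (1 - wx) + a[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)

# S1-1: random initial matrix between 2x2 and 20x20, scaled so the
# enlarged surface spans several 2*pi cycles and wrapping occurs.
n = rng.integers(2, 21)
true_phase = bilinear_resize(rng.normal(size=(n, n)) * 10.0, 256, 256)

# S1-6: wrap into (-pi, pi] and add noise (Gaussian, as one example).
wrapped = np.angle(np.exp(1j * true_phase))
noisy_wrapped = np.angle(np.exp(1j * (wrapped + rng.normal(scale=0.3, size=wrapped.shape))))
```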
3. The multi-scale connection deep learning one-step phase unwrapping method according to claim 1, wherein S2 includes the steps of:
S2-1, the single-channel wrapped phase map enters through the input layer of the encoding module of the multi-scale-connected DeepLab V3+ model and passes through the Xception module, which outputs a feature map of 16 pixels × 16 pixels with 2024 channels together with three skip-connection feature map sets skip1, skip2 and skip3 at 64 pixels × 64 pixels, 32 pixels × 32 pixels and 16 pixels × 16 pixels;
S2-2, the upper-layer feature map is output after passing through the Atrous Spatial Pyramid Pooling (ASPP) module, which runs four convolutional layers with different sampling rates in parallel with a global pooling layer to obtain a feature map fusing information at different scales;
S2-3, an upsampling operation is performed on the upper-layer feature map, which is adjusted and output as a feature map of 64 pixels × 64 pixels with 256 channels;
S2-4, the upper-layer feature map is concatenated and fused with the resized skip1, skip2 and skip3, and an upsampling operation yields a feature map of 256 pixels × 256 pixels with 128 channels;
and S2-5, two separable convolution operations are applied to the upper-layer feature map to obtain a feature map of 256 pixels × 256 pixels with 128 channels, and finally a convolution operation yields the single-channel output feature map.
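The "separable convolution" of step S2-5 is, in DeepLab V3+, a depthwise separable convolution: a per-channel spatial filter followed by a 1 × 1 pointwise convolution. A minimal NumPy sketch; the shapes, random weights and ReLU placement are illustrative assumptions, not the trained network:

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_k):
    """Depthwise 'same'-padded spatial convolution followed by a 1x1
    pointwise convolution and ReLU.

    x:       (H, W, C)  input feature map
    depth_k: (kh, kw, C) one spatial filter per input channel
    point_k: (C, C_out)  1x1 convolution mixing channels
    """
    H, W, C = x.shape
    kh, kw, _ = depth_k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    dw = np.zeros_like(x)
    for i in range(kh):  # correlate each channel with its own kernel tap
        for j in range(kw):
            dw += xp[i:i + H, j:j + W, :] * depth_k[i, j, :]
    return np.maximum(dw @ point_k, 0.0)  # pointwise channel mix + ReLU

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 64, 8))
y = depthwise_separable_conv(x, rng.normal(size=(3, 3, 8)), rng.normal(size=(8, 16)))
```

Compared with a full 3 × 3 convolution, this factorization needs far fewer multiplications and parameters, which is why Xception and the DeepLab V3+ decoder rely on it.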
4. The multi-scale connection deep learning one-step phase unwrapping method according to claim 1, wherein the improved DeepLab V3+ model of S2 takes DeepLab V3+ as its framework, with the encoding and decoding stages of the network optimally designed, so that a network architecture suited to unwrapping different types of interferograms is built and a direct mapping from the wrapped phase to the real phase is realized.
5. The multi-scale connection deep learning one-step phase unwrapping method according to claim 4, wherein the encoding module mainly consists of a DCNN and an ASPP module. The DCNN takes the Modified Aligned Xception network as its framework, comprises an input layer, intermediate layers and an output layer, and outputs a dense feature map with rich semantic information; low-level feature maps at three scales of 64 × 64 pixels, 32 × 32 pixels and 16 × 16 pixels are extracted at the DCNN input layer, and the number of residual network units repeated in the DCNN intermediate layers is reduced to 4. The ASPP module performs multi-scale feature map information fusion and adjusts the number of feature map channels with a "1 × 1 convolution + BN + ReLU" operation; the encoder outputs a feature map carrying high-level semantic information with 256 channels at a resolution of 16 × 16 pixels. In the decoding stage, channel adjustment is performed on the three low-level feature maps of different scales output by the DCNN using the "1 × 1 convolution + BN + ReLU" operation, yielding feature maps at resolutions of 64 × 64 pixels, 32 × 32 pixels and 16 × 16 pixels, whose resolution is then raised to 64 × 64 pixels by upsampling convolution. Meanwhile, a 4 × 4 bilinear-interpolation upsampling convolution applied to the feature map output by the encoding stage yields a 64-channel feature map at a resolution of 64 × 64 pixels, which is concatenated through skip connections with the low-level feature maps output by the DCNN; finally, a 4 × 4 bilinear-interpolation upsampling convolution and a "1 × 1 convolution + BN + ReLU" operation yield a feature map at a resolution of 256 × 256 pixels, namely the unwrapped phase map of the interferogram.
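The parallel convolutions at different sampling rates in the ASPP module are atrous (dilated) convolutions. A minimal single-channel NumPy sketch with "same" padding; the 3 × 3 kernel, the rates (1, 2, 4) and the summation-style fusion are illustrative (DeepLab concatenates the branches and applies a 1 × 1 convolution):

```python
import numpy as np

def atrous_conv2d(x, k, rate):
    """'Same'-padded 2-D correlation with a dilated kernel: the taps are
    spaced `rate` pixels apart, enlarging the receptive field without
    adding parameters."""
    H, W = x.shape
    kh, kw = k.shape
    ph, pw = (kh - 1) * rate // 2, (kw - 1) * rate // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i * rate:i * rate + H, j * rate:j * rate + W]
    return out

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 16))
k = rng.normal(size=(3, 3))
# ASPP-style parallel branches at several rates, kept as a list here.
branches = [atrous_conv2d(x, k, r) for r in (1, 2, 4)]
```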
6. The multi-scale connection deep learning one-step phase unwrapping method according to claim 1, wherein in S3, the single-channel wrapped phase map enters through the input layer; the DCNN adjusts the image channels to yield a feature map of 16 × 16 pixels with 2024 channels, which the ASPP module then processes and outputs; a "1 × 1 convolution + BN + ReLU" operation adjusts the channel count of the three low-level feature maps of different scales skip-connected from the DCNN, yielding feature maps at resolutions of 64 × 64 pixels, 32 × 32 pixels and 16 × 16 pixels, whose resolution is then raised to 64 × 64 pixels by an upsampling convolution; meanwhile, a 4 × 4 bilinear-interpolation upsampling convolution applied to the feature map output by the encoding stage yields a 64-channel feature map at a resolution of 64 × 64 pixels, which is concatenated through skip connections with the low-level feature maps output by the DCNN; a further 4 × 4 bilinear-interpolation upsampling convolution yields a feature map at a resolution of 256 × 256 pixels, and finally two depthwise separable convolution layers and a "1 × 1 convolution + BN + ReLU" operation yield the unwrapped phase map of the interferogram.
CN202110470485.0A 2021-04-28 2021-04-28 Multi-scale connection deep learning one-step phase unwrapping method Pending CN113129294A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110470485.0A CN113129294A (en) 2021-04-28 2021-04-28 Multi-scale connection deep learning one-step phase unwrapping method

Publications (1)

Publication Number Publication Date
CN113129294A true CN113129294A (en) 2021-07-16

Family

ID=76780499

Country Status (1)

Country Link
CN (1) CN113129294A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886273A (en) * 2019-02-26 2019-06-14 四川大学华西医院 A kind of CMR classification of image segmentation system
CN111476249A (en) * 2020-03-20 2020-07-31 华东师范大学 Construction method of multi-scale large-receptive-field convolutional neural network
WO2020182169A1 (en) * 2019-03-11 2020-09-17 杭州海康威视数字技术股份有限公司 Decoding method and device
CN112381172A (en) * 2020-11-28 2021-02-19 桂林电子科技大学 InSAR interference image phase unwrapping method based on U-net
CN112464914A (en) * 2020-12-30 2021-03-09 南京积图网络科技有限公司 Guardrail segmentation method based on convolutional neural network
CN112560716A (en) * 2020-12-21 2021-03-26 浙江万里学院 High-resolution remote sensing image water body extraction method based on low-level feature fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHANG T et al.: "Rapid and robust two-dimensional phase unwrapping via deep learning", OPTICS EXPRESS *
ZIYAO LI et al.: "Multiscale features supported deeplabv3+ optimization scheme for accurate water semantic segmentation", IEEE ACCESS *
LIU Zhiying et al.: "Smoke region segmentation and recognition algorithm based on improved DeeplabV3+", Systems Engineering and Electronics *
SHEN Jianjun et al.: "Waterline detection algorithm combining an improved DeeplabV3+ network", Journal of Image and Graphics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117572420A (en) * 2023-11-14 2024-02-20 中国矿业大学 InSAR phase unwrapping optimization method based on deep learning
CN117572420B (en) * 2023-11-14 2024-04-26 中国矿业大学 InSAR phase unwrapping optimization method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210716