CN116664419A - InSAR phase unwrapping method of multi-scale feature fusion noise reduction CNN network - Google Patents


Info

Publication number: CN116664419A
Application number: CN202310478694.9A
Authority: CN (China)
Legal status: Pending
Prior art keywords: phase, InSAR, unwrapping, interference, noise
Other languages: Chinese (zh)
Inventors: 罗卿莉, 殷志媛, 李梦丽, 封皓, 曾周末
Current and original assignee: Tianjin University
Application filed by Tianjin University; priority to CN202310478694.9A

Classifications

    • G06T5/70: Denoising; Smoothing (under G06T5/00, image enhancement or restoration)
    • G06T5/30: Erosion or dilatation, e.g. thinning (under G06T5/20, local operators)
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/048: Activation functions
    • G06N3/08: Learning methods
    • G06V10/7715: Feature extraction, e.g. by transforming the feature space
    • G06V10/806: Fusion of extracted features
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06T2207/20024: Filtering details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]


Abstract

The application discloses an InSAR phase unwrapping method and system based on a multi-scale feature fusion noise reduction CNN network. A noisy real InSAR interferometric phase map is input into a trained InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network. The model uses the denoising network DnCNN as its framework, performs multi-scale feature extraction with dilated and deformable convolutions, fuses the extracted multi-scale feature information, performs phase unwrapping with residual modules, and recovers the feature information; the unwrapped phase map is then output. The method solves the problem that noise in InSAR data prevents conventional phase unwrapping from achieving a good unwrapping result.

Description

InSAR phase unwrapping method of multi-scale feature fusion noise reduction CNN network
Technical Field
The application relates to the technical field of phase unwrapping, in particular to InSAR single-baseline phase unwrapping, and more particularly to an InSAR phase unwrapping method based on a multi-scale feature fusion noise reduction CNN network.
Background
Conventional phase unwrapping methods generally fall into three categories: (1) path-following phase unwrapping algorithms, which integrate the phase gradients of adjacent pixels along a suitably chosen integration path; (2) minimum-norm phase unwrapping methods, which minimize the difference between the wrapped phase gradient and the true phase gradient; (3) network flow methods, which recast phase unwrapping as a minimum-cost network flow problem and limit the propagation of phase errors in low-quality regions by minimizing the difference between the discrete partial derivatives of the unwrapped and wrapped phases, yielding a globally optimal solution. All three conventional algorithms unwrap well when noise is weak and the phase is continuous. However, when the quality of the InSAR interferometric phase map is poor, path-following methods such as the branch-cut algorithm tend to produce phase islands, leaving unwrapping gaps in low-quality regions, and their computation time is long. Minimum-norm methods such as least squares let unwrapping errors in a local low-quality region spread globally; they unwrap quickly but with low quality. Network flow methods such as minimum-cost flow balance unwrapping accuracy and efficiency to some extent, but unwrapping errors still occur on phase edges in low-quality regions, and the phase information cannot be fully recovered.
Deep learning methods achieve phase unwrapping through supervised optimization of a neural network on a specific data set, and can be divided into two types: regression-based phase unwrapping and deep-learning-based wrap-count estimation. The regression approach treats unwrapping as a regression problem: the network directly learns the mapping from wrapped phase to absolute phase. The wrap-count estimation approach converts unwrapping into a semantic segmentation problem: the wrapped phase is input, a trained network outputs the wrap counts, and post-processing produces the final unwrapped result. Whether one-step regression networks or two-step segmentation networks, these methods perform well in their own application scenarios, but they still fall short on complex InSAR phase images with complicated noise distributions and interference factors such as atmospheric effects. Moreover, the network structure often determines the performance of a deep learning unwrapping method. Many unwrapping networks adopt frameworks whose frequent downsampling operations inevitably tend to lose fringe information. Segmentation-based unwrapping networks mainly learn the wrap count; they unwrap accurately when the fringes are clear, but the complicated fluctuations in InSAR phase maps easily confuse the wrap-count classification, forcing the network to re-classify globally, which sacrifices unwrapping efficiency without eliminating the errors.
Disclosure of Invention
Therefore, the application aims to provide an InSAR phase unwrapping method and system based on a multi-scale feature fusion noise reduction CNN network, solving the problem that noise in InSAR data prevents conventional phase unwrapping from achieving a good unwrapping result.
To this end, the InSAR phase unwrapping method of the multi-scale feature fusion noise reduction CNN network comprises the following steps:
S1, acquiring simulated InSAR interferometric phase maps and constructing an InSAR simulation data set;
S2, inputting the InSAR simulation data set into the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network for phase unwrapping training;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network uses the denoising network DnCNN as its framework, performs multi-scale feature extraction with dilated and deformable convolutions, fuses the extracted multi-scale feature information, performs phase unwrapping with residual modules, and recovers the feature information;
S3, inputting a real InSAR interferometric phase map into the trained InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network, and outputting the unwrapped phase map.
Further preferably, in S1, acquiring the simulated InSAR interferometric phase maps comprises:
S101, adjusting the parameters of a two-dimensional Gaussian function to generate Gaussian surfaces of different sizes and patterns; adding a random matrix to the generated Gaussian surface to distort it in different directions and by different amounts, forming an interferometric phase map that simulates the terrain phase and the deformation phase;
S102, superposing Perlin noise of different frequencies and amplitudes to obtain fractal Perlin noise that simulates the local atmospheric phase; superposing the interferometric phase map with the local atmospheric phase map, the result serving as the true phase map for training;
S103, wrapping the true phase map to form a noise-free wrapped interferometric phase map, used as the simulated interferogram;
S104, simulating decorrelation noise with Gaussian noise: generating a complex noise matrix of equal noise level for the real and imaginary parts of the interferometric phase map of S101, and multiplying it with the simulated interferogram obtained in S103 to obtain a simulated interferometric phase map containing decorrelation noise.
Further preferably, the method further comprises S105, filtering the simulated interferometric phase maps containing decorrelation noise generated in S104 with the Goldstein filtering algorithm, the filtered interferometric phase maps serving as the final InSAR simulation data set.
Further preferably, the phase unwrapping training of the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network comprises the following steps:
S201, inputting an InSAR phase map from the InSAR simulation data set into the input layer of the model;
S202, extracting features from the input InSAR phase map with 64 convolution kernels of size 3×3 to obtain 64 feature maps;
S203, applying dilated convolutions at two different sampling rates and a deformable convolution to the extracted primary feature maps for multi-level abstraction, extracting interferogram noise and fringe information at 192 different scales;
S204, batch-normalizing the extracted multi-scale noise and fringe information, applying the ReLU activation function for adaptive learning, and fusing the processed feature maps; recovering feature information from the fused feature maps with residual convolutions; repeating S203-S204 several times until the feature maps complete feature information recovery;
S205, passing the recovered feature maps through an output convolution layer for single-channel output, obtaining the expected unwrapped phase map.
Further preferably, in S203, the multi-level abstraction of the extracted primary feature maps with the two dilated convolutions of different sampling rates and a deformable convolution comprises:
applying, in parallel, a first dilated convolution with a 5×5 effective receptive field, a second dilated convolution with a 7×7 effective receptive field, and a deformable convolution layer to the extracted primary feature maps;
the first and second dilated convolutions use different dilation rates, enlarging the receptive field without changing the resolution of the output feature maps;
the deformable convolution introduces a learnable offset into the receptive field, so that the receptive field is no longer a regular square but an irregular shape fitting the features of the target object;
the first dilated convolution, the second dilated convolution, and the deformable convolution each yield a group of 64 feature maps at a different scale, three groups in total; in this way, interferogram noise and fringe information are extracted at 192 different scales.
Further preferably, in S204, the feature information recovery of the fused feature maps by residual convolution comprises the following steps:
the interferogram noise and fringe information extracted at 192 different scales are spliced and fused with 192 convolution kernels of size 3×3, and feature information recovery is completed with 3×3 residual-network convolution kernels, restoring the channel count to 64;
the recovered 64 feature maps pass through a final 3×3 convolution to become a single-channel output.
The application also provides an InSAR phase unwrapping system of the multi-scale feature fusion noise reduction CNN network, comprising: a data acquisition unit and the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network;
the data acquisition unit is used for acquiring InSAR interferometric phase maps;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network is used for unwrapping the input InSAR interferometric phase map and outputting the unwrapped phase map;
the model uses the denoising network DnCNN as its framework, performs multi-scale feature extraction with a multi-scale feature fusion module built from dilated and deformable convolutions, fuses the extracted multi-scale feature information, performs phase unwrapping with residual modules, and recovers the feature information.
Further preferably, the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network is trained with an InSAR simulation data set obtained as follows:
an interferometric phase map generating module adjusts the parameters of a two-dimensional Gaussian function to generate Gaussian surfaces of different sizes and patterns, and adds a random matrix to each generated Gaussian surface to distort it in different directions and by different amounts, forming an interferometric phase map that simulates the terrain phase and the deformation phase;
a true phase map generating module superposes Perlin noise of different frequencies and amplitudes to obtain fractal Perlin noise simulating the local atmospheric phase, and superposes the interferometric phase map with the local atmospheric phase map, the result serving as the true phase map for training;
a simulated interferometric phase map generating module wraps the true phase map to form a noise-free wrapped interferometric phase map, used as the simulated interferogram;
the simulated interferometric phase map generating module also includes a decorrelation noise component, which simulates decorrelation noise with Gaussian noise: a complex noise matrix of equal noise level is generated for the real and imaginary parts of the interferometric phase map and multiplied with the obtained simulated interferogram, giving a simulated interferometric phase map containing decorrelation noise.
Further preferably, the system further comprises a filtering module for filtering the generated simulated interferometric phase maps containing decorrelation noise with the Goldstein filtering algorithm, the filtered interferometric phase maps serving as the final InSAR simulation data set.
Further preferably, there are two dilated convolutions; the first has a 5×5 effective receptive field and the second a 7×7 effective receptive field.
In the InSAR phase unwrapping method and system of the multi-scale feature fusion noise reduction CNN network, the noise reduction CNN network serves as the framework. When training the multi-scale feature fusion InSAR denoising CNN unwrapping model, the simulation data fit the terrain phase, deformation phase, atmospheric phase, decorrelation-noise phase, and other components of real InSAR data to form the final interferometric phase maps. Two dilated convolutions with different sampling rates and a deformable convolution in the unwrapping model extract the input InSAR data at multiple scales. This solves the problem that, in conventional InSAR phase unwrapping, filtering out the decorrelation noise beforehand degrades the accuracy of the final unwrapping result.
In the application, the DnCNN network is the basic framework; to make the network better suited to InSAR phase unwrapping, a data set matching InSAR phase characteristics is constructed by simulating each component of the SAR interferometric phase.
To let the network suppress the noise of noisy interferograms while improving unwrapping accuracy as much as possible, a multi-scale feature extraction module is built into the network in parallel from dilated convolutions with different dilation rates and deformable convolutions, realizing the extraction and fusion of multi-scale feature information. In addition, residual modules speed up the network, avoid the network degradation problem, and ensure robustness.
Drawings
FIG. 1 is a flow chart of an InSAR phase unwrapping method of a multi-scale feature fusion noise reduction CNN network of the present application;
FIG. 2 is a flow chart of creating simulated SAR interferometric phase data in an example of the present application;
FIG. 3 (a) is a simulated original phase map;
fig. 3 (b) is the phase map after wrapping the original phase;
FIG. 3 (c) is a wrapped interferometric phase map containing noise;
fig. 3 (d) is the coherence coefficient map calculated from the phase map;
FIG. 4 is a schematic diagram of a phase unwrapping network in accordance with the present application;
FIG. 5 is a schematic diagram of a multi-scale feature extraction module of the application.
Detailed Description
The application is described in further detail below with reference to the drawings and the detailed description.
As shown in fig. 1, an InSAR phase unwrapping method of a multi-scale feature fusion noise reduction CNN network according to an embodiment of the application comprises the following steps:
S1, acquiring simulated InSAR interferometric phase maps and constructing an InSAR simulation data set;
S2, inputting the InSAR simulation data set into the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network for phase unwrapping training;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network uses the denoising network DnCNN as its framework, performs multi-scale feature extraction with dilated and deformable convolutions, fuses the extracted multi-scale feature information, performs phase unwrapping with residual modules, and recovers the feature information;
S3, inputting a real InSAR interferometric phase map into the trained model and outputting the unwrapped phase map.
In the embodiment shown in fig. 2, simulated InSAR interferometric phase maps are acquired and an InSAR simulation data set is constructed as follows:
S101: the parameters of a two-dimensional Gaussian function are adjusted to generate Gaussian surfaces of different sizes and patterns. A random matrix is added to control the distribution of points on the surface, distorting it in different directions and by different amounts, forming an interferometric phase map that simulates the terrain phase and the deformation phase;
S102: Perlin noise of different frequencies and amplitudes is superposed to obtain fractal Perlin noise simulating the local atmospheric phase. The interferometric phase map generated in S101 is superposed with the atmospheric phase map, the result serving as the true phase map for subsequent training.
S103: the true phase generated in S102 is wrapped to form a noise-free wrapped interferometric phase map; in this example the phase map size is 186 pix × 186 pix.
S104: decorrelation noise is simulated with Gaussian noise, its level set according to the deformation phase gradient obtained in S101; a complex noise matrix of equal noise level is generated for the real and imaginary parts of the interferometric phase map of S101 and multiplied with the simulated interferogram obtained in S103, giving a simulated interferometric phase map containing decorrelation noise.
S105: using Goldstein filtering algorithm to the interference phase generated in S104, setting a filtering window to be 32 multiplied by 32, setting a filtering coefficient to be 0.5, and mapping the filtered interference phase to form a final InSAR analog data set; the filtered interference phase map is used as an input value for network training.
Fig. 2 shows the InSAR simulation data generation process, and fig. 3 shows a correspondingly generated 186 pix × 186 pix phase map: fig. 3 (a) is the original phase containing surface features and the atmospheric phase, used as the ground truth for network training; fig. 3 (b) is the corresponding wrapped phase; fig. 3 (c) is the wrapped map after adding noise varying with the deformation gradient; fig. 3 (d) is the coherence coefficient map of the simulated interferogram.
Generation of the simulated InSAR data set is completed in Matlab R2020a; the training set contains 14000 pairs of 186 pix × 186 pix floating-point arrays, and the validation set contains 3000 pairs of the same size.
Further, S2 comprises the following steps:
S201: a single-channel InSAR wrapped phase map of size 186 pix × 186 pix is input into the first layer of the unwrapping model;
S202: the convolution layer extracts 64 feature maps with 64 convolution kernels of size 3×3, while modeling the nonlinearity of the data with the ReLU activation function.
S203: the multi-scale feature fusion module applies two dilated convolution layers with different sampling rates (5×5 and 7×7 effective receptive fields) and a deformable convolution layer in parallel to extract interferogram noise and fringe information at different scales (64 maps per group, 3×64 in total), realizing multi-level abstraction of the features, as shown in fig. 5.
S204: the extracted multi-scale information is spliced and fused with 192 convolution kernels of size 3×3 through cascaded batch normalization (BN) and ReLU activation, and feature information recovery is completed by one convolution kernel of size 3×3, restoring the channel count to 64.
S203 and S204 are repeated until the feature maps complete feature information recovery at the final residual module.
S205: finally, the recovered feature information passes through a 3×3 convolution kernel that converts the feature maps into a single-channel output, yielding the desired unwrapped phase map.
Fig. 4 is the overall structure of the unwrapping network of the application. The network takes DnCNN as its basic framework and comprises an input layer, batch normalization (BN), ReLU activation functions, multi-scale feature extraction modules, residual modules, and an output layer. The functions of the main components are as follows:
Batch normalization: by reshaping the parameter search space, batch normalization increases the robustness of the system, accelerating network convergence, stabilizing gradients, and alleviating overfitting.
ReLU activation function: the ReLU is simple to implement, fast to compute, and has strong nonlinear fitting capability, while largely avoiding the vanishing-gradient problem.
Dilated convolution: by setting different dilation rates, dilated convolution enlarges the receptive field without changing the resolution of the output feature maps, and during feature extraction it avoids, to a certain extent, the information loss caused by downsampling.
Deformable convolution: the deformable convolution introduces a learnable offset into the receptive field, so that the receptive field is not a regular square but an irregular shape fitting the features of the target object; the convolution region can therefore always cover the target's surroundings, and the learnable offset adapts no matter how the target fringes deform. Adding deformable convolution lets the network extract more complex edge information.
Residual network: adding a residual structure to a deep neural network lets it fall back to a shallow network, overcoming the degradation problem easily caused by increasing depth and improving network performance.
Further, the InSAR phase unwrapping of the multi-scale feature fusion noise reduction CNN network in S2 takes DnCNN as the main framework and combines dilated convolution, deformable convolution, batch normalization, ReLU activation, and residual networks to build the multi-scale feature extraction module and the residual module. In the multi-scale feature extraction module, the 64 feature maps extracted by the previous layer pass through the dilated and deformable convolutions in parallel to capture phase map information; with resolution unaffected, different dilation rates enlarge the receptive field of the dilated convolution. Compared with conventional convolution, the deformable convolution additionally learns offsets, so the receptive field better fits the complex shape of the interferometric phase and more detailed information is captured globally. The three resulting groups contain feature information at different scales, and 192 convolution kernels of size 3×3 fuse the extracted multi-scale information through cascaded batch normalization (BN) and ReLU activation. The multi-scale information then undergoes feature recovery through the residual module, increasing network depth while avoiding the degradation problem.
Further, S3 comprises the following steps:
in the trained unwrapping network, a single-channel InSAR wrapped-phase map is input; the first convolution layer outputs a 64-channel feature map; the multi-scale feature extraction modules of the intermediate layers output feature maps fusing different scales, and the residual modules recover the feature maps (as shown in fig. 4, there are 8 multi-scale feature extraction modules and 10 residual modules); finally, a 3×3 convolution kernel converts the fused 64-channel feature map into a single-channel output, yielding the final unwrapped phase map that conforms to expectations.
In the example, the network was developed with the deep learning framework TensorFlow 2.9.0 under Python 3.8, and the main parameters of the computer used for network training and experimental testing are: Tesla T4 GPU + Intel Xeon (Skylake, IBRS) vCPU + 64 GB RAM.
The application also provides an InSAR phase unwrapping system of the multi-scale feature fusion noise reduction CNN network, used to implement the above phase unwrapping method and comprising: a data acquisition unit and an InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network;
the data acquisition unit is used for acquiring an InSAR interference phase diagram;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network is used for unwrapping the input InSAR interference phase diagram to output an unwrapped phase diagram;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network adopts the noise reduction network DnCNN as its framework, performs multi-scale feature extraction through a multi-scale feature fusion module with dilated convolution and deformable convolution, fuses the extracted multi-scale feature information, performs phase unwrapping with a residual module, and recovers the feature information.
The InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network is trained by adopting an InSAR simulation data set, and the InSAR simulation data set is obtained through the following processes:
the interference phase diagram generating module is used for adjusting parameters of the two-dimensional Gaussian surface and generating Gaussian curved surfaces with different sizes and modes; adding a random matrix into the generated Gaussian curved surface to enable the Gaussian curved surface to generate distortion in different directions and sizes, so as to form an interference phase diagram simulating a topography phase and a deformation phase;
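A hedged NumPy sketch of the interference phase diagram generating module described above: randomly placed and scaled 2-D Gaussian bumps stand in for the terrain/deformation phase, and a small random matrix distorts the surface. The parameter ranges (bump count, widths, amplitudes, distortion level) are illustrative assumptions, not values disclosed by the patent.

```python
import numpy as np

def simulated_phase(size=128, n_bumps=3, rng=None):
    """Sum of randomly placed/scaled 2D Gaussian bumps plus a random
    distortion matrix, mimicking a terrain/deformation phase surface."""
    rng = np.random.default_rng(rng)
    y, x = np.mgrid[0:size, 0:size]
    phase = np.zeros((size, size))
    for _ in range(n_bumps):
        cy, cx = rng.uniform(0, size, 2)            # bump centre
        sy, sx = rng.uniform(size / 10, size / 3, 2)  # widths -> different "modes"
        amp = rng.uniform(5, 30)                     # amplitude in radians
        phase += amp * np.exp(-((y - cy) ** 2 / (2 * sy ** 2)
                                + (x - cx) ** 2 / (2 * sx ** 2)))
    phase += rng.normal(0, 0.1, (size, size))        # random distortion matrix
    return phase
```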
the real phase diagram generating module is used for superposing Perlin noises of different frequencies and amplitudes to obtain fractal Perlin noise simulating the local atmospheric phase, and for superposing the interference phase map with the local atmospheric phase map to serve as the real phase map for training;
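The octave-summation idea behind fractal Perlin noise can be sketched as below. For brevity this uses bilinearly upsampled random grids (value noise) rather than true gradient-based Perlin noise, so it is a stand-in under that stated simplification; frequency doubles and amplitude halves per octave exactly as the module describes.

```python
import numpy as np

def fractal_noise(size=128, octaves=4, persistence=0.5, rng=None):
    """Fractal noise by summing octaves: each octave is a coarse random grid
    bilinearly upsampled to full resolution; frequency doubles and amplitude
    decays by `persistence` per octave. Mimics a smooth atmospheric phase screen."""
    rng = np.random.default_rng(rng)
    noise = np.zeros((size, size))
    amp, freq = 1.0, 4
    for _ in range(octaves):
        coarse = rng.standard_normal((freq + 1, freq + 1))
        # Bilinear upsampling of the (freq+1)x(freq+1) grid to size x size.
        idx = np.linspace(0, freq, size)
        i0 = np.floor(idx).astype(int).clip(0, freq - 1)
        f = idx - i0
        rows = coarse[i0] * (1 - f)[:, None] + coarse[i0 + 1] * f[:, None]
        fine = rows[:, i0] * (1 - f)[None, :] + rows[:, i0 + 1] * f[None, :]
        noise += amp * fine
        amp *= persistence
        freq *= 2
    return noise
```

Low octaves contribute broad smooth undulation and high octaves fine texture, which is why the superposition resembles a local atmospheric phase.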
the simulated interference phase diagram generating module is used for carrying out winding processing on the real phase diagram to form a noiseless interference phase diagram after winding, and the noiseless interference phase diagram is used as a simulated interference phase diagram;
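The winding (wrapping) operation applied by this module is the standard one: the absolute phase is mapped into (−π, π] as the argument of its complex exponential. A one-line NumPy version:

```python
import numpy as np

def wrap_phase(phi):
    """Wrap an absolute phase into (-pi, pi]: the argument of e^{j*phi}.
    This is the forward model that the unwrapping network must invert."""
    return np.angle(np.exp(1j * phi))
```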
and a noise simulation module, which simulates uncorrelated noise with Gaussian noise: a complex noise matrix whose real part and imaginary part carry the same noise level is generated and combined with the obtained simulated interferogram, yielding a simulated interference phase map containing uncorrelated noise.
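A sketch of the noise step, under one stated assumption: the patent speaks of multiplying the interferogram by a complex noise matrix, while the common equivalent model shown here adds independent Gaussian noise of the same level to the real and imaginary parts of the unit-magnitude complex interferogram before taking the phase. The exact combination rule used by the patent is not fully specified, so this is an illustrative reading.

```python
import numpy as np

def add_interferogram_noise(wrapped, sigma, rng=None):
    """Corrupt a wrapped interferogram with uncorrelated complex Gaussian noise:
    the same noise level sigma on the real and imaginary parts, then re-take
    the phase of the noisy complex signal."""
    rng = np.random.default_rng(rng)
    signal = np.exp(1j * wrapped)
    noise = (rng.normal(0, sigma, wrapped.shape)
             + 1j * rng.normal(0, sigma, wrapped.shape))
    return np.angle(signal + noise)
```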
The system further comprises a filtering module for filtering the generated simulated interference phase map containing uncorrelated noise with the Goldstein filtering algorithm, the filtered interference phase map serving as the final InSAR simulation data set.
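The patent names the Goldstein filter without reciting its internals; the classical algorithm weights the local 2-D spectrum of the complex interferogram by its magnitude raised to a power alpha, boosting dominant fringe frequencies relative to broadband noise. The sketch below is a deliberately simplified version (non-overlapping patches, no spectral smoothing), so treat it as an illustration rather than a production filter.

```python
import numpy as np

def goldstein_filter(interferogram, alpha=0.5, patch=32):
    """Simplified Goldstein filtering of a complex interferogram: per patch,
    scale the 2-D FFT by its normalized magnitude raised to alpha, then
    invert. Real implementations overlap the patches and smooth |Z|;
    both refinements are omitted here for brevity."""
    out = np.zeros_like(interferogram)
    h, w = interferogram.shape
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            z = np.fft.fft2(interferogram[r:r + patch, c:c + patch])
            mag = np.abs(z)
            z *= (mag / (mag.max() + 1e-12)) ** alpha  # emphasize dominant fringes
            out[r:r + patch, c:c + patch] = np.fft.ifft2(z)
    return out
```

A pure single-frequency fringe occupies one spectral bin, receives weight 1, and passes through unchanged, while spread-out noise energy is attenuated.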
It is apparent that the above examples are given by way of illustration only and do not limit the embodiments. Other variations or modifications based on the above teachings will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to exhaust all embodiments here. Variations or modifications readily made by those skilled in the art remain within the scope of the application.

Claims (10)

1. An InSAR phase unwrapping method of a multi-scale feature fusion noise reduction CNN network is characterized by comprising the following steps:
s1, acquiring a simulated InSAR interference phase diagram, and constructing an InSAR simulation data set;
s2, inputting the InSAR analog data set into an InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network to perform phase unwrapping training;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network adopts the noise reduction network DnCNN as its framework, performs multi-scale feature extraction with dilated convolution and deformable convolution, fuses the extracted multi-scale feature information, performs phase unwrapping with a residual module, and recovers the feature information;
s3, inputting the real InSAR interference phase diagram containing noise into an InSAR phase unwrapping model of the trained multi-scale feature fusion noise reduction CNN network, and outputting an unwrapping phase diagram.
2. The method for phase unwrapping of InSAR by multi-scale feature fusion noise reduction CNN network according to claim 1, wherein in S1, the obtaining the simulated InSAR interference phase map includes:
s101, adjusting parameters of a two-dimensional Gaussian surface to generate Gaussian curved surfaces with different sizes and modes; adding a random matrix into the generated Gaussian curved surface to enable the Gaussian curved surface to generate distortion in different directions and sizes, so as to form an interference phase diagram simulating a topography phase and a deformation phase;
s102, superposing the Perlin noise with different frequencies and amplitudes together to obtain fractal Perlin noise, and simulating local atmospheric phase; superposing the interference phase map with a local atmospheric phase map; as a real phase map for training;
s103, winding the real phase diagram to form a noiseless interference phase diagram after winding, and using the noiseless interference phase diagram as a simulated interference phase diagram;
S104, simulating uncorrelated noise with Gaussian noise: generating a complex noise matrix whose real part and imaginary part carry the same noise level for the interference phase diagram of S101, and multiplying it with the simulated interferogram obtained in S103 to obtain a simulated interference phase map containing uncorrelated noise.
3. The method for phase unwrapping of InSAR by multi-scale feature fusion noise reduction CNN network according to claim 2, further comprising S105, filtering the simulated interference phase map containing uncorrelated noise generated in S104 using Goldstein filtering algorithm, and using the filtered interference phase map as a final InSAR simulated data set.
4. The InSAR phase unwrapping method of the multiscale feature fusion noise reduction CNN network according to claim 1, wherein the performing phase unwrapping training on the InSAR phase unwrapping model of the multiscale feature fusion noise reduction CNN network includes the steps of:
s201, inputting an InSAR phase diagram of an InSAR analog data set into an input layer of an InSAR phase unwrapping model of a multi-scale feature fusion noise reduction CNN network;
S202, performing feature extraction on the input InSAR phase map with 64 convolution kernels of size 3×3 to obtain 64 feature maps;
S203, performing multi-level abstraction on the extracted primary feature maps with dilated convolutions of two different sampling rates and a deformable convolution, extracting interferogram noise and fringe information at 192 different scales;
s204, carrying out batch normalization processing on the extracted interference pattern noise and stripe information under different scales, carrying out self-adaptive learning by utilizing a ReLU activation function, and fusing the processed feature images; carrying out characteristic information recovery on the fused characteristic graphs by utilizing residual convolution; repeating S203-S204 for a plurality of times until the feature map completes feature information recovery;
s205, outputting the feature map of the restored feature information through an output convolution layer to perform single-channel output, and obtaining an unwrapped phase map which meets the expectations.
5. The method for InSAR phase unwrapping of a multiscale feature fusion noise reduction CNN network according to claim 4, wherein in S203, the dilated convolutions of two different sampling rates and the deformable convolution perform multi-level abstraction on the extracted primary feature maps, comprising:
performing parallel multi-level abstraction on the extracted primary feature maps with a first dilated convolution of sampling rate 5×5, a second dilated convolution of sampling rate 7×7, and a deformable convolution layer;
the first dilated convolution and the second dilated convolution are given different dilation rates, increasing the receptive field without changing the resolution of the output feature map;
the deformable convolution introduces a learnable offset into the receptive field, so that the receptive field is no longer a regular square but an irregular shape conforming to the target object features;
the first dilated convolution, the second dilated convolution, and the deformable convolution each produce a group of 64 feature maps of a different scale, 3 groups in total; finally, interferogram noise and fringe information at 192 different scales are extracted.
6. The method for phase unwrapping of InSAR of a multiscale feature fusion noise reduction CNN network according to claim 4, wherein in S204, the feature information recovery is performed on the fused feature map by residual convolution, and the method comprises the following steps:
the interferogram noise and fringe information extracted at 192 different scales are concatenated and fused with 192 convolution kernels of size 3×3; a 3×3 residual-network convolution kernel completes the recovery of the feature information while restoring the channel count to 64;
the recovered 64 feature maps pass through a final 3×3 convolution to become a single-channel output.
7. An InSAR phase unwrapping system of a multiscale feature fusion noise reduction CNN network, comprising: the data acquisition unit and the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network;
the data acquisition unit is used for acquiring an InSAR interference phase diagram;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network is used for unwrapping the input InSAR interference phase diagram to output an unwrapped phase diagram;
the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network adopts the noise reduction network DnCNN as its framework, performs multi-scale feature extraction through a multi-scale feature fusion module with dilated convolution and deformable convolution, fuses the extracted multi-scale feature information, performs phase unwrapping with a residual module, and recovers the feature information.
8. The InSAR phase unwrapping system of the multi-scale feature fusion noise reduction CNN network of claim 7, wherein the InSAR phase unwrapping model of the multi-scale feature fusion noise reduction CNN network is trained using an InSAR simulation dataset obtained by:
the interference phase diagram generating module is used for adjusting parameters of the two-dimensional Gaussian surface and generating Gaussian curved surfaces with different sizes and modes; adding a random matrix into the generated Gaussian curved surface to enable the Gaussian curved surface to generate distortion in different directions and sizes, so as to form an interference phase diagram simulating a topography phase and a deformation phase;
the real phase diagram generating module is used for superposing Perlin noises of different frequencies and amplitudes to obtain fractal Perlin noise simulating the local atmospheric phase, and for superposing the interference phase map with the local atmospheric phase map to serve as the real phase map for training;
the simulated interference phase diagram generating module is used for carrying out winding processing on the real phase diagram to form a noiseless interference phase diagram after winding, and the noiseless interference phase diagram is used as a simulated interference phase diagram;
and a noise simulation module, which simulates uncorrelated noise with Gaussian noise: a complex noise matrix whose real part and imaginary part carry the same noise level is generated and combined with the obtained simulated interferogram, yielding a simulated interference phase map containing uncorrelated noise.
9. The InSAR phase unwrapping system of a multiscale feature fusion noise reduction CNN network of claim 8, further comprising a filtering module configured to filter the generated simulated interference phase map containing uncorrelated noise using a Goldstein filtering algorithm, the filtered interference phase map serving as the final InSAR simulation dataset.
10. The InSAR phase unwrapping system of a multi-scale feature fusion noise reduction CNN network of claim 7, wherein the number of dilated convolutions is two, the first dilated convolution having a sampling rate of 5×5 and the second dilated convolution having a sampling rate of 7×7.
CN202310478694.9A 2023-04-28 2023-04-28 InSAR phase unwrapping method of multi-scale feature fusion noise reduction CNN network Pending CN116664419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310478694.9A CN116664419A (en) 2023-04-28 2023-04-28 InSAR phase unwrapping method of multi-scale feature fusion noise reduction CNN network

Publications (1)

Publication Number Publication Date
CN116664419A true CN116664419A (en) 2023-08-29

Family

ID=87726983


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117572420A (en) * 2023-11-14 2024-02-20 中国矿业大学 InSAR phase unwrapping optimization method based on deep learning
CN117572420B (en) * 2023-11-14 2024-04-26 中国矿业大学 InSAR phase unwrapping optimization method based on deep learning
CN117975297A (en) * 2024-04-01 2024-05-03 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Urban ground surface deformation risk fine identification method assisted by combination of multi-source data
CN117975297B (en) * 2024-04-01 2024-06-11 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Urban ground surface deformation risk fine identification method assisted by combination of multi-source data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination