CN116148857A - Low signal-to-noise ratio interference phase unwrapping method based on space four-way memory network - Google Patents
- Publication number
- CN116148857A CN116148857A CN202310410231.9A CN202310410231A CN116148857A CN 116148857 A CN116148857 A CN 116148857A CN 202310410231 A CN202310410231 A CN 202310410231A CN 116148857 A CN116148857 A CN 116148857A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/9021—SAR image post-processing techniques
- G01S13/9023—SAR image post-processing techniques combined with interferometric techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses a low signal-to-noise ratio interference phase unwrapping method based on a spatial four-way memory network, relating to the technical field of synthetic aperture radar interferometry. The method combines interference phase data set creation, spatial four-way memory network structure design, construction of a new loss function, and training and verification of the spatial four-way memory network to achieve low signal-to-noise ratio interference phase unwrapping. Under low signal-to-noise ratio conditions it offers a high convergence rate, strong real-time performance, strong robustness and high accuracy, and can effectively improve the precision of synthetic aperture radar interferometry.
Description
Technical Field
The invention relates to the technical field of synthetic aperture radar interferometry, in particular to a low signal-to-noise ratio interference phase unwrapping method based on a spatial four-way memory network.
Background
InSAR (interferometric synthetic aperture radar) is one of the most promising space-based Earth observation technologies and an extension of SAR (synthetic aperture radar) technology. Its working principle is to acquire two SAR images of the same region and interfere them to obtain an interference fringe pattern, which contains the topographic information of that region.
Because of noise, phase discontinuities and rapid phase changes, the phase unwrapping problem is very challenging, which makes interference phase unwrapping one of the key steps in interferometric processing. It has broad application prospects in many fields, and is particularly important in geographic position detection, urban and rural planning, nuclear magnetic resonance imaging, disaster monitoring, disaster relief deployment, military monitoring and other applications, so research on interference phase unwrapping is highly significant.
Traditional phase unwrapping methods fall into two main classes: path-following methods and branch-cut methods. Path-following algorithms complete phase unwrapping by accumulating the phase along a selected path; they are computationally efficient but not robust to noise. Branch-cut methods show the opposite trade-off.
In recent years, with the rise of artificial intelligence, deep-learning algorithms such as U-Net (U-shaped convolutional neural network), QGPU (quality-guided phase unwrapping) and PhaseNet (a phase unwrapping neural network) have been applied to interference phase unwrapping and have achieved excellent performance in this scenario.
However, existing methods suffer from long network training times, the need for large training data sets, poor robustness and low accuracy at low signal-to-noise ratios. How to provide an interference phase unwrapping method with a high convergence rate, strong real-time performance, strong robustness and high unwrapping accuracy under low signal-to-noise ratio conditions is therefore a technical problem to be solved.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a low signal-to-noise ratio interference phase unwrapping method based on a spatial four-way memory network, comprising the following steps:
S1, preprocessing interference phase data: create irregular, arbitrarily shaped interference phase maps by adding and subtracting Gaussian functions of different shapes and positions, add slopes along the vertical and horizontal directions and Gaussian additive noise to the maps, and construct an interference phase map data set;
S2, designing the spatial four-way memory network structure;
S3, training the spatial four-way memory network: set the relevant parameters, randomly divide the interference phase data set into a training data set and a test data set, feed the training data set into the spatial four-way memory network, train until the network converges, and save the network training weights;
S4, verifying the spatial four-way memory network: load the network training weights and feed the verification data set into the trained spatial four-way memory network to obtain the interference phase unwrapping result;
S5, outputting the phase unwrapping result;
S6, evaluating the phase unwrapping result.
The technical scheme of the invention is as follows:
Further, in step S1, the interference phase data preprocessing includes the following substeps:
S1.1, create irregular, arbitrarily shaped interference phase maps by adding and subtracting several Gaussian functions of different shapes and positions;
S1.2, randomly select slopes and add them to the interference phase map along the vertical and horizontal directions to form an interference phase map with a ramp phase;
S1.3, perform pixel-wise phase wrapping of the interference phase map; the wrapped phase ψ(m,n) is calculated as:

ψ(m,n) = angle{exp[j·φ(m,n)]}

where exp denotes the exponential function with base e, j denotes the imaginary unit, φ(m,n) denotes the original true phase of the pixel in the interference phase map, (m,n) denotes the spatial coordinates of the pixel in the phase map, and angle{·} denotes the operator taking the argument (angle) of a complex number;
S1.4, create an interference phase map data set of 5000 samples through steps S1.1 to S1.3;
S1.5, randomly assign Gaussian additive noise at 0 dB, 3 dB and 5 dB to the interference phase maps in the data set to construct the final interference phase data set.
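The pixel-wise wrapping of step S1.3 maps any true phase into the principal interval (−π, π]. A minimal numpy sketch (the function name is illustrative; the identity itself is the standard angle-of-complex-exponential form used above):

```python
import numpy as np

def wrap_phase(phi):
    """Wrap a true phase map into (-pi, pi] via psi = angle(exp(j*phi))."""
    return np.angle(np.exp(1j * phi))

# A ramp exceeding 2*pi wraps back into the principal interval.
true_phase = np.linspace(0.0, 4.0 * np.pi, 5)
wrapped = wrap_phase(true_phase)
assert np.all(wrapped <= np.pi) and np.all(wrapped > -np.pi)
```

Because wrapping discards the integer number of 2π cycles, the network must learn to restore them; this is exactly the unwrapping task the method addresses.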
In the aforementioned low signal-to-noise ratio interference phase unwrapping method based on the spatial four-way memory network, in step S1.4, the pixel size of each interference phase map is 256×256 and the pixel values range from -55 to 55.
In the foregoing method for unwrapping low signal-to-noise ratio interference phase based on the spatial four-way memory network, in step S2, the spatial four-way memory network structure includes an Input layer, an Encoder network, a QD-LSTM module, four connection layers Cat, a Decoder network, and a linearly activated two-dimensional convolution layer Conv2DR;
the output of the Input layer is connected to the input of the Encoder network; the output of the Encoder network is connected to the input feature map X of the QD-LSTM module; the QD-LSTM module outputs a feature map U, whose layer output is connected to the input of the Decoder network; the output of the Decoder network is connected to the input of the linearly activated two-dimensional convolution layer Conv2DR, whose output serves as the output of the phase unwrapping network.
In the foregoing method, the Encoder network comprises four encoding subnetworks Encoder1, Encoder2, Encoder3 and Encoder4 cascaded in sequence. Each encoding subnetwork consists of one convolution block Conv2BLR (a two-dimensional convolution layer followed by a batch normalization layer and a leaky linear activation function layer) and one max pooling layer MaxPooling2D, with the convolution block's output connected to the pooling layer's input: Encoder1 contains Conv2BLR11 and MaxPooling2D11, Encoder2 contains Conv2BLR21 and MaxPooling2D21, Encoder3 contains Conv2BLR31 and MaxPooling2D31, and Encoder4 contains Conv2BLR41 and MaxPooling2D41;
the QD-LSTM module comprises four feature decomposition layers in different directions, three connection layers Cat1, Cat2 and Cat3, two convolution blocks Conv2BLR51 and Conv2BLR61 (each a two-dimensional convolution layer plus batch normalization layer plus leaky linear activation function layer), and an output feature map U layer. The outputs of two of the feature decomposition layers are connected to the input of Cat1, whose output is connected to the input of Conv2BLR51; the outputs of the other two feature decomposition layers are connected to the input of Cat2, whose output is connected to the input of Conv2BLR61; the outputs of Conv2BLR51 and Conv2BLR61 are connected to the input of Cat3, whose output is connected to the input of the output feature map U layer;
the Decoder network comprises four decoding subnetworks Decoder1, Decoder2, Decoder3 and Decoder4 cascaded in sequence. Each decoding subnetwork consists of one two-dimensional convolution layer Conv2D and one convolution block Conv2BLR (two-dimensional convolution layer plus batch normalization layer plus leaky linear activation function layer), with the convolution layer's output connected to the convolution block's input: Decoder1 contains Conv2D11 and Conv2BLR71, Decoder2 contains Conv2D21 and Conv2BLR81, Decoder3 contains Conv2D31 and Conv2BLR91, and Decoder4 contains Conv2D41 and Conv2BLR101;
the outputs of the convolution blocks Conv2BLR11, Conv2BLR21, Conv2BLR31 and Conv2BLR41 in the Encoder network are connected to the inputs of the connection layers Cat4, Cat5, Cat6 and Cat7, respectively; the outputs of Cat4, Cat5, Cat6 and Cat7 are connected to the inputs of the convolution blocks Conv2BLR101, Conv2BLR91, Conv2BLR81 and Conv2BLR71 in the Decoder network, respectively; and the outputs of the convolution layers Conv2D41, Conv2D31, Conv2D21 and Conv2D11 in the Decoder network are connected to the inputs of Cat4, Cat5, Cat6 and Cat7, respectively.
In the foregoing low snr interference phase unwrapping method based on the spatial four-way memory network, in step S3, the data amounts of the training data set and the test data set are 4000 and 1000, respectively.
In the aforementioned low signal-to-noise ratio interference phase unwrapping method based on the spatial four-way memory network, in step S3, training the spatial four-way memory network comprises the following substeps:
S3.1, set the initial training learning rate, maximum learning rate, training batch size and number of training epochs;
S3.2, use a K-means clustering algorithm for iterative solution during training, and prevent network overfitting through L2-norm regularization;
S3.3, optimize the network training with the Adam gradient optimization algorithm, taking the following constructed function as the loss function during the optimization, calculated as:

where Δ(m,n) denotes the difference between the training data set ground truth and the network training estimate, (m,n) denotes the phase pixel coordinates, and b is a positive coefficient set to 0.2;
S3.4, repeat steps S3.2 to S3.3 until the network converges, obtain the spatial four-way memory network model and weights finally used for interference phase unwrapping, and save the network training weights.
In the foregoing low signal-to-noise ratio interference phase unwrapping method based on the spatial four-way memory network, in step S3.1, the initial training learning rate is set to 0.0001, the maximum learning rate to 0.01, the training batch size to 4, and the number of training epochs to 98.
In the foregoing low signal-to-noise ratio interference phase unwrapping method based on the spatial four-way memory network, in step S6, the network phase unwrapping accuracy is evaluated with the normalized root mean square error (NRMSE): the smaller the NRMSE value, the higher the phase unwrapping accuracy and the better the network performance. The calculation formula is as follows:

where NRMSE denotes the normalized root mean square error, N is the number of training data set samples, Δk(m,n) is the difference between the ground truth of training data set sample k and its network estimate, and (m,n) denotes the phase pixel coordinates.
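The exact normalization used in the patent's NRMSE is not recoverable from the translation; the sketch below assumes one common convention, RMSE over all samples and pixels divided by the dynamic range of the ground truth, and the function name is illustrative:

```python
import numpy as np

def nrmse(true_maps, est_maps):
    """RMSE over all samples and pixels, normalized by the dynamic
    range of the ground truth (an assumed NRMSE convention; the
    patent's exact normalization is not given in the text)."""
    true_maps = np.asarray(true_maps, dtype=float)
    est_maps = np.asarray(est_maps, dtype=float)
    rmse = np.sqrt(np.mean((true_maps - est_maps) ** 2))
    return rmse / (true_maps.max() - true_maps.min())

truth = np.array([[0.0, 10.0], [20.0, 30.0]])
assert nrmse(truth, truth) == 0.0        # perfect unwrapping
assert nrmse(truth, truth + 3.0) == 0.1  # constant error of 3.0 over a range of 30.0
```

Lower values indicate more accurate unwrapping, matching the evaluation criterion stated above.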
The beneficial effects of the invention are as follows:
The invention realizes low signal-to-noise ratio interference phase unwrapping through interference phase data set creation, spatial four-way memory network structure design, construction of a new loss function, and training and verification of the spatial four-way memory network. The method has a high convergence rate, strong real-time performance, strong robustness and high accuracy under low signal-to-noise ratio conditions, and can effectively improve the interferometric measurement precision of synthetic aperture radar.
Drawings
FIG. 1 is a schematic diagram of the overall process flow of the present invention;
FIG. 2 is a diagram of a spatial four-way memory network according to the present invention;
FIG. 3 is a block diagram of a spatial four-way memory module according to the present invention;
FIG. 4 is a diagram of the network training loss and accuracy in an embodiment of the present invention;
FIG. 5 is a wrapped interference phase diagram at 0 dB noise in an embodiment of the invention;
FIG. 6 is the original true interference phase diagram in an embodiment of the present invention;
fig. 7 is the unwrapping result of the 0 dB noise interference phase in an embodiment of the present invention.
Detailed Description
The low signal-to-noise ratio interference phase unwrapping method based on the spatial four-way memory network provided in this embodiment, as shown in FIG. 1, includes the following steps:
S1, preprocessing interference phase data: create irregular, arbitrarily shaped interference phase maps by adding and subtracting Gaussian functions of several different shapes and positions, add slopes along the vertical and horizontal directions and Gaussian additive noise to the maps, and construct an interference phase map data set.
The preprocessing of the interference phase data includes the following substeps:
S1.1, create irregular, arbitrarily shaped interference phase maps by adding and subtracting several Gaussian functions of different shapes and positions;
S1.2, randomly select slopes and add them to the interference phase map along the vertical and horizontal directions to form an interference phase map with a ramp phase;
S1.3, perform pixel-wise phase wrapping of the interference phase map; the wrapped phase ψ(m,n) is calculated as:

ψ(m,n) = angle{exp[j·φ(m,n)]}

where exp denotes the exponential function with base e, j denotes the imaginary unit, φ(m,n) denotes the original true phase of the pixel in the interference phase map, (m,n) denotes the spatial coordinates of the pixel in the phase map, and angle{·} denotes the operator taking the argument (angle) of a complex number;
S1.4, create an interference phase map data set of 5000 samples through steps S1.1 to S1.3, where each interference phase map has a pixel size of 256×256 and pixel values from -55 to 55;
S1.5, randomly assign Gaussian additive noise at 0 dB, 3 dB and 5 dB to the interference phase maps in the data set to construct the final interference phase data set.
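Steps S1.1 to S1.5 can be sketched end to end in numpy. This is not the patent's generator: the grid size, bump count, amplitudes and ramp scales below are illustrative stand-ins (the patent uses 256×256 maps with values from -55 to 55, 5000 samples, and noise at 0/3/5 dB), and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_true_phase(size=64, n_bumps=6, max_ramp=0.3):
    """Sum and subtract randomly placed 2-D Gaussians (S1.1), then add
    random vertical/horizontal ramps (S1.2). Scales are illustrative."""
    y, x = np.mgrid[0:size, 0:size]
    phase = np.zeros((size, size))
    for _ in range(n_bumps):
        cx, cy = rng.uniform(0, size, 2)
        sx, sy = rng.uniform(size / 16, size / 4, 2)
        amp = rng.uniform(-20, 20)  # signed amplitude: adds or subtracts a bump
        phase += amp * np.exp(-((x - cx) ** 2 / (2 * sx ** 2)
                                + (y - cy) ** 2 / (2 * sy ** 2)))
    ky, kx = rng.uniform(-max_ramp, max_ramp, 2)
    phase += ky * y + kx * x    # vertical and horizontal slopes
    return phase

def add_noise(wrapped, snr_db):
    """Additive Gaussian noise at a target SNR in dB (S1.5)."""
    signal_power = np.mean(wrapped ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return wrapped + rng.normal(0.0, np.sqrt(noise_power), wrapped.shape)

true_phase = make_true_phase()
wrapped = np.angle(np.exp(1j * true_phase))  # S1.3 pixel-wise wrapping
noisy = add_noise(wrapped, snr_db=0)         # 0 dB is the hardest case used
assert wrapped.shape == noisy.shape == (64, 64)
assert np.all(np.abs(wrapped) <= np.pi)
```

Each (noisy wrapped map, true phase map) pair then forms one training sample for the network.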
S2, designing a space four-way memory (QD-LSTM) network structure;
the spatial four-way memory network structure is shown in fig. 2, and comprises an Input layer (m, m), an Encoder network Encoder, a QD-LSTM module, four connection layers Cat, a Decoder network Decoder and a linear activation two-dimensional convolution layer Conv2DR (c, k, s, p); the Encoder network Encoder comprises 4 encoding sub-networks which are cascaded in sequence, wherein the encoding sub-networks comprise 1 convolution block Conv2BLR (c, k, s, p) and 1 maximum pooling layer MaxPooling2D (c, k, s); as shown in FIG. 3, the QD-LSTM module includes feature decomposition layers in different directions、/>、/> and />The directions of the 3 connection layers Cat and the 2 convolution blocks Conv2BLR (c, k, s, p) and the 4 characteristic decomposition layers are respectively from left to right, from right to left, from top to bottom and from bottom to top; the Decoder network Decoder comprises 4 decoding sub-networks which are sequentially cascaded, wherein the decoding sub-networks comprise 1 two-dimensional convolution layer Conv2D (c, k, s, p) and 1 convolution block Conv2BLR (c, k, s, p);
where m'm represents the pixel size of the image data, c represents the channel number, k represents the kernel size, s represents the number of steps, and p represents the padding number.
As shown in FIG. 2, the spatial four-way memory network structure includes an Input layer Input(256, 256), an Encoder network, a QD-LSTM module, four connection layers Cat, a Decoder network, and a linearly activated two-dimensional convolution layer Conv2DR(16, 1, 1, 0);
the output of the Input layer Input(256, 256) is connected to the input of the Encoder network; the output of the Encoder network is connected to the input feature map X of the QD-LSTM module; the QD-LSTM module outputs a feature map U, whose layer output is connected to the input of the Decoder network; the output of the Decoder network is connected to the input of the linearly activated two-dimensional convolution layer Conv2DR(16, 1, 1, 0), whose output serves as the output of the phase unwrapping network.
The Encoder network comprises four encoding subnetworks Encoder1, Encoder2, Encoder3 and Encoder4 cascaded in sequence. Encoder1 consists of a convolution block Conv2BLR11(16, 3, 1, 0) (two-dimensional convolution layer plus batch normalization layer plus leaky linear activation function layer) and a max pooling layer MaxPooling2D11(16, 2, 2). Likewise, Encoder2 consists of Conv2BLR21(32, 3, 1, 0) and MaxPooling2D21(32, 2, 2); Encoder3 consists of Conv2BLR31(64, 3, 1, 0) and MaxPooling2D31(64, 2, 2); Encoder4 consists of Conv2BLR41(128, 3, 1, 0) and MaxPooling2D41(128, 2, 2). In each encoding subnetwork the convolution block's output is connected to the pooling layer's input;
as shown in FIG. 3, the QD-LSTM module comprises four feature decomposition layers in different directions, three connection layers Cat1, Cat2 and Cat3, two convolution blocks Conv2BLR51(128, 3, 1, 0) and Conv2BLR61(128, 3, 1, 0) (each a two-dimensional convolution layer plus batch normalization layer plus leaky linear activation function layer), and an output feature map U layer. The outputs of two of the feature decomposition layers are connected to the input of Cat1, whose output is connected to the input of Conv2BLR51(128, 3, 1, 0); the outputs of the other two feature decomposition layers are connected to the input of Cat2, whose output is connected to the input of Conv2BLR61(128, 3, 1, 0); the outputs of Conv2BLR51 and Conv2BLR61 are connected to the input of Cat3, whose output is connected to the input of the output feature map U layer;
the Decoder network comprises four decoding subnetworks Decoder1, Decoder2, Decoder3 and Decoder4 cascaded in sequence. Decoder1 consists of one two-dimensional convolution layer Conv2D11(128, 3, 2, 0) and one convolution block Conv2BLR71(128, 3, 1, 0) (two-dimensional convolution layer plus batch normalization layer plus leaky linear activation function layer). Likewise, Decoder2 consists of Conv2D21(64, 3, 2, 0) and Conv2BLR81(64, 3, 1, 0); Decoder3 consists of Conv2D31(32, 3, 2, 0) and Conv2BLR91(32, 3, 1, 0); Decoder4 consists of Conv2D41(16, 3, 2, 0) and Conv2BLR101(16, 3, 1, 0). In each decoding subnetwork the convolution layer's output is connected to the convolution block's input;
the outputs of the convolution blocks Conv2BLR11(16, 3, 1, 0), Conv2BLR21(32, 3, 1, 0), Conv2BLR31(64, 3, 1, 0) and Conv2BLR41(128, 3, 1, 0) in the Encoder network are connected to the inputs of the connection layers Cat4, Cat5, Cat6 and Cat7, respectively; the outputs of Cat4, Cat5, Cat6 and Cat7 are connected to the inputs of the convolution blocks Conv2BLR101(16, 3, 1, 0), Conv2BLR91(32, 3, 1, 0), Conv2BLR81(64, 3, 1, 0) and Conv2BLR71(128, 3, 1, 0) in the Decoder network, respectively; and the outputs of the convolution layers Conv2D41(16, 3, 2, 0), Conv2D31(32, 3, 2, 0), Conv2D21(64, 3, 2, 0) and Conv2D11(128, 3, 2, 0) in the Decoder network are connected to the inputs of Cat4, Cat5, Cat6 and Cat7, respectively.
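The core idea of the QD-LSTM module is to scan the feature map in four directions before fusing the results through the Cat layers and convolution blocks. The figures are not reproduced here, so the sketch below shows only the directional reordering of a feature map in numpy; the LSTM recurrence that the module would run over each reordered sequence is omitted, and the function name and pairing of directions are assumptions.

```python
import numpy as np

def four_direction_views(feature_map):
    """Return the four traversal orders of a quadri-directional scan:
    left-to-right, right-to-left, top-to-bottom, bottom-to-top.
    Each view is the map reordered so that a row-major scan visits
    pixels in that direction."""
    lr = feature_map            # left -> right (native row-major order)
    rl = feature_map[:, ::-1]   # right -> left (columns reversed)
    tb = feature_map.T          # top -> bottom (scan along columns)
    bt = feature_map.T[:, ::-1] # bottom -> top
    return lr, rl, tb, bt

fm = np.arange(12).reshape(3, 4)
lr, rl, tb, bt = four_direction_views(fm)
assert np.array_equal(rl[:, ::-1], fm)  # undoing the flip recovers the map
assert np.array_equal(tb.T, fm)
```

Combining the four directional results gives each output pixel context from the whole row and column it lies on, which is what lets the module propagate phase information across noisy regions.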
S3, training the space four-way memory network: setting related parameters and randomly dividing the interference phase data set into a training data set and a test data set, with data amounts of 4000 and 1000 respectively; sending the training data set into the space four-way memory network, training the network until it converges, and storing the network training weights.
Training the space four-way memory network comprises the following sub-steps:
S3.1, setting a training initial learning rate to be 0.0001, setting a maximum learning rate to be 0.01, setting the training batch number to be 4, and setting the training round number to be 98;
s3.2, adopting a K-means clustering algorithm to carry out iterative solution in the training process, and preventing network overfitting through L2 norm regularization;
s3.3, optimizing the network training by using an Adam gradient optimization algorithm, wherein the following constructor is used as a loss function in the optimization process, and the calculation formula is as follows:
wherein e(x, y) represents the difference between the training data set sample ground truth and the network training estimate, (x, y) represents the phase pixel coordinates, and b is a positive coefficient with value 0.2;
and S3.4, repeating the steps S3.2 to S3.3 until the network converges, obtaining a space four-way memory network model and a weight which are finally used for interference phase unwrapping, and storing the network training weight.
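The hyperparameters in S3.1 fix only the endpoints of the learning-rate schedule (initial 0.0001, maximum 0.01) over 98 rounds; the exact schedule shape is not stated. A one-cycle-style policy is one plausible reading (the 30% warm-up fraction here is an assumption):

```python
import math

# One plausible reading of S3.1: ramp from the initial rate (1e-4) to the
# maximum rate (1e-2), then decay back.  The patent fixes only the two
# endpoints, the batch size (4) and the epoch count (98).

def one_cycle_lr(epoch, total_epochs=98, lr_init=1e-4, lr_max=1e-2,
                 warmup_frac=0.3):
    warmup = max(1, int(total_epochs * warmup_frac))
    if epoch < warmup:                       # linear ramp lr_init -> lr_max
        return lr_init + (lr_max - lr_init) * epoch / (warmup - 1)
    # cosine decay lr_max -> lr_init over the remaining epochs
    t = (epoch - warmup + 1) / (total_epochs - warmup)
    return lr_init + (lr_max - lr_init) * 0.5 * (1 + math.cos(math.pi * t))

schedule = [one_cycle_lr(e) for e in range(98)]
print(schedule[0], max(schedule), schedule[-1])
```

Combined with Adam and L2 regularization (weight decay) as in S3.2 and S3.3, such a schedule reaches the stated maximum rate mid-training and returns to the initial rate by the final round.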
S4, verifying the space four-way memory network, loading the network training weight, and sending the verification data set into the trained space four-way memory network, and verifying the network to obtain an interference phase unwrapping result.
S5, outputting a phase unwrapping result;
evaluating the network phase unwrapping precision by using the normalized root mean square error NRMSE; the smaller the NRMSE value is, the higher the phase unwrapping precision and the better the network performance; the calculation formula is as follows:
where NRMSE represents the normalized root mean square error, N is the number of training data set samples, e_k(x, y) is the difference between the true value of training data set sample k and its network training estimate, and (x, y) represents the phase pixel coordinates.
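A common definition of NRMSE consistent with the quantities named above — N samples and a per-sample error e_k(x, y) — is the RMS error of each sample normalized by the RMS of its true phase, averaged over the data set; the exact normalization used in the patent's formula is an assumption here:

```python
import numpy as np

# Hedged sketch of NRMSE: per-sample RMS error divided by the RMS of the
# true phase, averaged over N samples and expressed as a percentage.

def nrmse_percent(true_phases, est_phases):
    vals = []
    for t, s in zip(true_phases, est_phases):
        e = s - t                                        # e_k(x, y)
        vals.append(np.sqrt((e**2).mean()) / np.sqrt((t**2).mean()))
    return 100.0 * float(np.mean(vals))

truth = [np.full((4, 4), 10.0)]
print(nrmse_percent(truth, [truth[0] * 1.01]))  # a uniform 1% error -> 1.0
```

Under this definition an NRMSE of 0.82% means the residual unwrapping error is well under one percent of the true phase magnitude.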
S6, evaluating a phase unwrapping result;
the training loss precision diagram of the space four-way memory network in this embodiment is shown in fig. 4; as can be seen from fig. 4, the network of this embodiment requires few training rounds, converges quickly and unwraps with high precision. The 0 dB noise interference wrapped phase diagram is shown in fig. 5, the original real interference phase diagram is shown in fig. 6, and the 0 dB noise interference phase unwrapping result is shown in fig. 7. The test results prove that the invention can accurately unwrap low signal-to-noise ratio interference wrapped phases: the NRMSE reaches 0.82%, while the U-net, QGPU and PhaseNet methods yield 2.58%, 4.89% and 16.97% respectively. The invention is therefore an interference phase unwrapping method with fast convergence, strong real-time performance, strong robustness and high accuracy under low signal-to-noise ratio conditions, and can effectively improve the interferometry precision of synthetic aperture radar.
In addition to the embodiments described above, other embodiments of the invention are possible. All technical schemes formed by equivalent substitution or equivalent transformation fall within the protection scope of the invention.
Claims (9)
1. A low signal-to-noise ratio interference phase unwrapping method based on a spatial four-way memory network, characterized by comprising the following steps:
S1, preprocessing interference phase data: creating irregular interference phase maps of arbitrary shape by adding and subtracting Gaussian functions with different shapes and positions, adding slopes along the vertical and horizontal directions and Gaussian additive noise to the interference phase maps, and constructing an interference phase map data set;
s2, designing a space four-way memory network structure;
s3, training a space four-way memory network, setting related parameters, randomly dividing an interference phase data set into a training data set and a test data set, sending the training data set into the space four-way memory network, training the network until the network converges, and storing a network training weight;
s4, verifying the space four-way memory network, loading a network training weight, and sending a verification data set into the trained space four-way memory network to verify the network to obtain an interference phase unwrapping result;
s5, outputting a phase unwrapping result;
s6, evaluating a phase unwrapping result.
2. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 1, wherein the method comprises the following steps: in the step S1, the preprocessing of the interference phase data comprises the following substeps
S1.1, creating an irregular and arbitrary-shaped interference phase diagram by adding and subtracting Gaussian functions of a plurality of different shapes and positions;
s1.2, randomly selecting a slope to be added into an interference phase diagram along the vertical and horizontal directions to form an interference phase diagram with a slope phase;
s1.3, carrying out pixel-by-pixel phase wrapping on the interference phase map; the wrapped phase ψ(x, y) is calculated as follows:
ψ(x, y) = ∠{exp[j·φ(x, y)]}
wherein exp represents the exponential function with the natural constant e as base, j is the imaginary unit, φ(x, y) represents the original true phase of the interference phase map pixel, (x, y) represents the spatial coordinates of the pixel in the phase map, and ∠ is the angle operator;
s1.4, creating an interference phase map data set with a data size of 5000 through the steps S1.1 to S1.3;
s1.5, randomly endowing the interference phase map data in the interference phase map data set with Gaussian additive noise of 0dB, 3dB and 5dB, and constructing the interference phase data set.
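The pixel-wise wrapping of step S1.3 — taking the angle of the complex exponential of the true phase — maps any phase value into the principal interval (-π, π] and can be sketched as:

```python
import numpy as np

# Wrapped phase via the angle of the complex exponential, mirroring the
# exp/angle formulation in S1.3: psi(x, y) = angle(exp(j * phi(x, y))).

def wrap_phase(phi):
    """Wrap an arbitrary phase map into the principal interval (-pi, pi]."""
    return np.angle(np.exp(1j * np.asarray(phi)))

phi = np.array([0.5, np.pi / 2 + 2 * np.pi, -7.0, 40.0])  # radians
psi = wrap_phase(phi)
print(psi)
```

Values already inside the principal interval are unchanged, while values outside it lose their integer multiple of 2π — which is exactly the ambiguity the unwrapping network is trained to recover.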
3. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 2, wherein: in the step S1.4, the pixel size of each interference phase map is 256×256, and the pixel values range from -55 to 55.
4. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 1, wherein the method comprises the following steps: in the step S2, the spatial four-way memory network structure includes an Input layer Input, an Encoder network Encoder, a QD-LSTM module, four connection layers Cat, a Decoder network Decoder, and a linear activation two-dimensional convolution layer Conv2DR;
the output end of the Input layer Input is connected with the input end of the Encoder network Encoder; the output end of the Encoder network Encoder is connected with the input end of the input feature map X layer of the QD-LSTM module; the QD-LSTM module outputs the feature map U, and the output end of the output feature map U layer is connected with the input end of the Decoder network Decoder; the output end of the Decoder network Decoder is connected with the input end of the linearly activated two-dimensional convolution layer Conv2DR, and the output end of the linearly activated two-dimensional convolution layer Conv2DR serves as the output end of the phase unwrapping network.
5. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 4, wherein: the Encoder network Encoder comprises 4 encoding sub-networks Encoder1, Encoder2, Encoder3 and Encoder4 which are cascaded in sequence. Encoding sub-network Encoder1 comprises a convolution block Conv2BLR11 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and a maximum pooling layer MaxPooling2D11, and the output end of the convolution block Conv2BLR11 is connected with the input end of the maximum pooling layer MaxPooling2D11; encoding sub-network Encoder2 comprises a convolution block Conv2BLR21 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and a maximum pooling layer MaxPooling2D21, and the output end of the convolution block Conv2BLR21 is connected with the input end of the maximum pooling layer MaxPooling2D21; encoding sub-network Encoder3 comprises a convolution block Conv2BLR31 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and a maximum pooling layer MaxPooling2D31, and the output end of the convolution block Conv2BLR31 is connected with the input end of the maximum pooling layer MaxPooling2D31; encoding sub-network Encoder4 comprises a convolution block Conv2BLR41 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and a maximum pooling layer MaxPooling2D41, and the output end of the convolution block Conv2BLR41 is connected with the input end of the maximum pooling layer MaxPooling2D41;
the QD-LSTM module comprises 4 feature decomposition layers in different directions (denoted here D1, D2, D3 and D4), 3 connection layers Cat1, Cat2 and Cat3, 2 convolution blocks Conv2BLR51 and Conv2BLR61 each consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and an output feature map U layer. The output ends of the feature decomposition layers D1 and D2 are connected with the input ends of the connection layer Cat1, and the output end of the connection layer Cat1 is connected with the input end of the convolution block Conv2BLR51; the output ends of the feature decomposition layers D3 and D4 are connected with the input ends of the connection layer Cat2, and the output end of the connection layer Cat2 is connected with the input end of the convolution block Conv2BLR61; the output ends of the convolution blocks Conv2BLR51 and Conv2BLR61 are connected with the input ends of the connection layer Cat3, and the output end of the connection layer Cat3 is connected with the input end of the output feature map U layer;
the Decoder network Decoder comprises 4 decoding sub-networks Decoder1, Decoder2, Decoder3 and Decoder4 which are cascaded in sequence. Decoding sub-network Decoder1 comprises 1 two-dimensional convolution layer Conv2D11 and 1 convolution block Conv2BLR71 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and the output end of the two-dimensional convolution layer Conv2D11 is connected with the input end of the convolution block Conv2BLR71; decoding sub-network Decoder2 comprises 1 two-dimensional convolution layer Conv2D21 and 1 convolution block Conv2BLR81 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and the output end of the two-dimensional convolution layer Conv2D21 is connected with the input end of the convolution block Conv2BLR81; decoding sub-network Decoder3 comprises 1 two-dimensional convolution layer Conv2D31 and 1 convolution block Conv2BLR91 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and the output end of the two-dimensional convolution layer Conv2D31 is connected with the input end of the convolution block Conv2BLR91; decoding sub-network Decoder4 comprises 1 two-dimensional convolution layer Conv2D41 and 1 convolution block Conv2BLR101 consisting of a two-dimensional convolution layer, a batch normalization layer and a leaky linear activation function layer, and the output end of the two-dimensional convolution layer Conv2D41 is connected with the input end of the convolution block Conv2BLR101;
the output ends of the convolution blocks Conv2BLR11, Conv2BLR21, Conv2BLR31 and Conv2BLR41 in the Encoder network Encoder are respectively connected with the input ends of the connection layers Cat4, Cat5, Cat6 and Cat7; the output ends of the connection layers Cat4, Cat5, Cat6 and Cat7 are respectively connected with the input ends of the convolution blocks Conv2BLR101, Conv2BLR91, Conv2BLR81 and Conv2BLR71 in the Decoder network Decoder; the output ends of the convolution layers Conv2D41, Conv2D31, Conv2D21 and Conv2D11 in the Decoder network Decoder are respectively connected with the input ends of the connection layers Cat4, Cat5, Cat6 and Cat7.
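The four directional feature decomposition layers of the QD-LSTM module read the feature map out along four scan directions before the Cat layers join them in pairs. A sketch of that re-ordering and concatenation wiring follows (the real module would also run an LSTM along each direction; the specific direction assignment here is an assumption):

```python
import numpy as np

# Four directional readings of an (H, W, C) feature map, paired into the
# Cat layers as the claim describes.  A square map is assumed so the row
# and column readings share a shape; the Conv2BLR blocks between Cat1/Cat2
# and Cat3 are omitted from this wiring sketch.

def quad_decompose(x):
    """Return the four directional readings of an (H, W, C) feature map."""
    lr = x                                   # left -> right (row order)
    rl = x[:, ::-1, :]                       # right -> left
    tb = x.transpose(1, 0, 2)                # top -> bottom (column order)
    bt = x[::-1, :, :].transpose(1, 0, 2)    # bottom -> top
    return lr, rl, tb, bt

x = np.random.rand(8, 8, 4)
lr, rl, tb, bt = quad_decompose(x)
cat1 = np.concatenate([lr, rl], axis=-1)     # Cat1: horizontal pair
cat2 = np.concatenate([tb, bt], axis=-1)     # Cat2: vertical pair
cat3 = np.concatenate([cat1, cat2], axis=-1) # Cat3 feeds the output map U
print(cat3.shape)  # (8, 8, 16)
```

Reading the same map in four directions is what lets the module propagate phase context across the whole image in a single pass, which is the motivation for the quad-directional design.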
6. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 1, wherein the method comprises the following steps: in the step S3, the data amounts of the training data set and the test data set are 4000 and 1000, respectively.
7. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 1, wherein: in the step S3, training the space four-way memory network comprises the following sub-steps:
S3.1, setting a training initial learning rate, a maximum learning rate, a training batch number and training wheel times;
s3.2, adopting a K-means clustering algorithm to carry out iterative solution in the training process, and preventing network overfitting through L2 norm regularization;
s3.3, optimizing the network training by using an Adam gradient optimization algorithm, wherein the following constructor is used as a loss function in the optimization process, and the calculation formula is as follows:
wherein e(x, y) represents the difference between the training data set sample ground truth and the network training estimate, (x, y) represents the phase pixel coordinates, and b is a positive coefficient with value 0.2;
and S3.4, repeating the steps S3.2 to S3.3 until the network converges, obtaining a space four-way memory network model and a weight which are finally used for interference phase unwrapping, and storing the network training weight.
8. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network of claim 7, wherein: in the step S3.1, the training start learning rate is set to 0.0001, the maximum learning rate is set to 0.01, the training batch number is set to 4, and the training round number is set to 98.
9. The method for unwrapping low signal-to-noise interference phases based on spatial four-way memory network according to claim 1, wherein: in the step S5, the network phase unwrapping precision is evaluated by using the normalized root mean square error NRMSE; the smaller the NRMSE value is, the higher the phase unwrapping precision and the better the network performance; the calculation formula is as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310410231.9A CN116148857A (en) | 2023-04-18 | 2023-04-18 | Low signal-to-noise ratio interference phase unwrapping method based on space four-way memory network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116148857A true CN116148857A (en) | 2023-05-23 |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20230523