CN114626987A - Electromagnetic backscattering imaging method of deep expansion network based on physics - Google Patents

Electromagnetic backscattering imaging method of deep expansion network based on physics

Publication number
CN114626987A
CN114626987A (application CN202210307192.5A)
Authority
CN
China
Prior art keywords
network
deep
image
induced current
dimension
Prior art date
Legal status
Granted
Application number
CN202210307192.5A
Other languages
Chinese (zh)
Other versions
CN114626987B (en)
Inventor
刘羽 (Liu Yu)
赵浩 (Zhao Hao)
宋仁成 (Song Rencheng)
成娟 (Cheng Juan)
李畅 (Li Chang)
陈勋 (Chen Xun)
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN202210307192.5A
Publication of application: CN114626987A
Application granted; publication of grant: CN114626987B
Legal status: Active

Classifications

    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06N3/04 Neural networks: architecture, e.g. interconnection topology
    • G06N3/084 Learning methods: backpropagation, e.g. using gradient descent
    • G06T3/4046 Scaling of whole images or parts thereof using neural networks
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses an electromagnetic backscattering imaging method based on a physics-guided deep unfolding network, which comprises the following steps: 1) constructing mixed input data, including data acquisition and preprocessing, to enrich the information fed to the network; 2) building the network structure, in which the deep unfolding technique is combined with the traditional subspace-based optimization (SOM) iterative algorithm to design the unfolded architecture; 3) designing a loss function, in which the objective function jointly optimizes the network with an image structural-similarity loss and a pixel-wise loss and, in particular, with an induced-current loss and a scattered-field loss; 4) training the deep unfolding network, after which the induced current, the scattered field and the contrast of the target scatterer can be reconstructed quickly and with high quality. The physics-based deep unfolding network proposed by the invention can effectively replace the traditional SOM iterative algorithm, enrich the physical knowledge embedded in the network, and realize fast, high-precision electromagnetic backscattering imaging.

Description

Electromagnetic backscattering imaging method of deep expansion network based on physics
Technical Field
The invention belongs to the technical field of electromagnetic backscatter imaging, and particularly relates to performing electromagnetic backscatter imaging by effectively replacing the traditional SOM iterative method with a deep learning method.
Background
Electromagnetic backscattering imaging determines the properties of unknown scatterers within a spatial region, such as their position, shape and physical parameters, from the distribution of the scattered field measured around that region. The electromagnetic inverse scattering problem (ISP) is, however, highly nonlinear and ill-posed: the nonlinearity arises from multiple scattering between the measured scattered field and the scatterer, and the ill-posedness mainly from the fact that small perturbations of the observed data cause large errors in the solution. The ISP can be solved by conventional objective-function methods, which mainly include the linearized back-propagation method (BP) and nonlinear iterative methods such as the distorted Born iterative method (DBIM), contrast source inversion (CSI) and the subspace-based optimization method (SOM). These traditional methods build wave physics into the model, but the linearized methods produce coarse images, while the iterative methods are computationally expensive and time-consuming.
In recent years, thanks to the strong mapping-learning capability and fast inference of deep networks, researchers have successfully applied deep learning to the electromagnetic inverse scattering problem. For example, the direct inversion scheme (DIS) proposed by Wei et al. is a typical algorithm that uses a neural network to map the scattered field directly to the target scatterer, but it can only reconstruct simple scatterers similar to those in the training set. Li et al. proposed the 'DeepNIS' algorithm based on a convolutional neural network by drawing an analogy between the traditional nonlinear iterative methods and a CNN. Some studies shift the target domain from the contrast domain to the current domain: the 'ICLM' algorithm proposed by Wei et al. uses a cascaded network dedicated to learning the ambiguous (non-deterministic) part of the induced current. Huang et al. simplify the inverse scattering problem to an image translation problem, first obtaining a coarse image with the back-propagation method and then using a neural network to reconstruct a high-resolution image. The test results reported in these papers show that current deep inverse scattering methods outperform the traditional nonlinear optimization methods in both imaging quality and speed.
However, the above approaches are limited by the quality of the network input and by priors on the scatterer boundary type, and their generalization capability is limited, especially when the network lacks guidance from physical knowledge. A deep inverse scattering method needs to respect both physical-model consistency and data consistency. Bridging the gap between the traditional objective-function methods and purely data-driven deep learning methods, and effectively embedding physical knowledge into a deep neural network to achieve high-quality imaging, is therefore the key technical problem.
Disclosure of Invention
The invention aims to solve the above key technical problem and provides an electromagnetic backscattering imaging method based on a physics-guided deep unfolding network, so that the deep unfolding technique, the SOM iterative framework and existing physical knowledge are combined effectively. The network thereby learns physical knowledge, the generalization capability of the model is enhanced, fast and high-precision electromagnetic backscattering imaging is realized, and the induced current, the scattered field and the contrast of the scatterer are reconstructed with high quality.
The invention adopts the following technical scheme for solving the technical problems:
the invention discloses an electromagnetic backscattering imaging method based on a physical deep unfolding network, which is characterized by comprising the following steps of:
step one, constructing mixed input data, including data acquisition and preprocessing;
Step 1.1: the electromagnetic scattering system uses T transmitting antennas and R receiving antennas, and the target scatterer is placed in a square region of interest D; the transmitting antennas illuminate the region of interest D with plane-wave signals in turn, and the R receiving antennas measure the scattered field simultaneously;
Step 1.2: in the forward simulation, the simulated induced current and the simulated scattered field of the target scatterer are computed with the method of moments;
Step 1.3, during inversion, dispersing the region of interest D into a grid with the size of M multiplied by M, and forming M by the same2A sub-grid;
Step 1.4: the simulated scattered field is processed by singular value decomposition to obtain the deterministic part of the induced current, of dimension [M², T], where T denotes the T transmitting-antenna channels;
Step 1.5: the deterministic partial current is reshaped into a three-dimensional current image matrix of dimension [T, M, M];
Step 1.6, three-dimensional current image matrix
Figure BDA0003565973930000029
Adding a dimension for storing the three-dimensional current image matrix
Figure BDA00035659739300000210
Real and imaginary parts of (a), thereby obtaining a dimension of [ T, N ]1,M,M]Deterministic current matrix
Figure BDA00035659739300000211
Wherein, N1Representing said deterministic current
Figure BDA00035659739300000212
The number of real and imaginary channels of (1);
Step 1.7: the simulated scattered field is processed with the back-propagation method to generate a low-resolution scatterer image χ_BP of dimension [M, M];
Step 1.8: one dimension is added to store the imaginary part of the low-resolution scatterer image χ_BP, giving a three-dimensional image matrix of dimension [N₂, M, M];
Step 1.9, the three-dimensional image matrix is processed
Figure BDA00035659739300000215
Adding one dimension for storing T three-dimensional image matrixes
Figure BDA00035659739300000216
To obtain a dimension of [ T, N2,M,M]Low resolution contrast image of
Figure BDA00035659739300000217
Wherein N is2Representing the low resolution contrast image
Figure BDA00035659739300000218
The number of imaginary channels of (a);
Step 1.10: the deterministic current matrix and the low-resolution contrast image are concatenated along the second dimension to obtain the mixed input data x₁ of dimension [T, N, M, M], where N = N₁ + N₂ denotes the number of real- and imaginary-part channels of the mixed input data x₁.
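The preprocessing of steps 1.4 to 1.10 can be sketched as follows in NumPy. The variable names, the operator gs (mapping induced currents in D to the receivers) and the number L of retained singular values are illustrative assumptions; the back-propagation image chi_bp of steps 1.7 and 1.8 is taken here as a single real-valued channel.

```python
import numpy as np

def build_mixed_input(e_sca, gs, chi_bp, L=16):
    """Assemble the mixed input x1 from the scattered field (steps 1.4-1.10).

    e_sca : [R, T] complex simulated scattered field (R receivers, T transmitters)
    gs    : [R, M*M] complex operator mapping induced currents in D to the receivers
    chi_bp: [M, M] real low-resolution back-propagation image
    L     : number of leading singular values kept for the deterministic current
    """
    M = chi_bp.shape[0]
    T = e_sca.shape[1]

    # Step 1.4: deterministic part of the induced current from the signal
    # subspace of gs (standard SOM decomposition, assumed here).
    u, s, vh = np.linalg.svd(gs, full_matrices=False)
    coeff = (u[:, :L].conj().T @ e_sca) / s[:L, None]        # [L, T]
    j_det = vh[:L].conj().T @ coeff                          # [M*M, T]

    # Steps 1.5-1.6: reshape to [T, M, M] and stack real/imaginary channels.
    j_img = j_det.T.reshape(T, M, M)
    j_chan = np.stack([j_img.real, j_img.imag], axis=1)      # [T, 2, M, M]

    # Steps 1.8-1.9: one-channel BP image, repeated for every transmitter.
    chi_chan = np.broadcast_to(chi_bp[None, None], (T, 1, M, M)).copy()

    # Step 1.10: concatenate along the channel dimension -> [T, 3, M, M].
    return np.concatenate([j_chan, chi_chan], axis=1)
```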
Step two: build a deep unfolding network P_θ, take the mixed input data x₁ as the input of P_θ, and let P_θ output the approximately true complete induced current of the target scatterer;
Step 2.1: K sub-networks {P_θ,k | k ∈ [1, K]} are cascaded to form the deep unfolding network P_θ, where P_θ,k denotes the k-th cascaded sub-network, and each cascaded sub-network P_θ,k adopts a U-net structure consisting of a contraction path and an expansion path;
the contraction path consists of two convolution blocks followed by a max-pooling layer, where each convolution block consists of a convolution layer with kernel size a × a, a BN layer and a ReLU activation function;
the expansion path consists of one deconvolution operation followed by two convolution blocks, the deconvolution operation being a deconvolution layer with kernel size b × b and the convolution blocks having the same structure as in the contraction path;
Step 2.2: when k = 1, the mixed input data x₁ is fed into the deep unfolding network P_θ; the contraction path of the k-th sub-network P_θ,k produces a feature map f_k of size c × c, and after processing by the expansion path the k-th sub-network P_θ,k outputs the predicted induced current. From this induced current, the state equation, data equation and contrast-update formula of the SOM are used to obtain the k-th predicted total field, the k-th predicted scattered field and the k-th predicted contrast image of the target scatterer;
when k = 2, 3, ..., K, the induced current matrix output by the (k−1)-th sub-network P_θ,k−1 and the (k−1)-th scattered-field image matrix are concatenated along the second dimension to obtain the k-th mixed input data x_k, and the k-th sub-network P_θ,k outputs the k-th predicted induced current matrix. The K-th sub-network P_θ,K thus outputs the K-th predicted induced current matrix, which serves as the approximately true complete induced current output by the deep unfolding network P_θ. From this approximately true complete induced current, the state equation, data equation and contrast-update formula of the SOM are then used to obtain the predicted total field, predicted scattered field and predicted contrast image of the target scatterer;
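For reference, the SOM state equation, data equation and contrast-update formula invoked in step 2.2 usually take the following standard forms; the operator notation (G_S mapping currents in D to the receivers, G_D acting within D) is an assumption here, since the patent cites these relations only by name.

```latex
% Data equation (scattered field radiated by the induced current of incidence j),
% total-field relation inside D, and least-squares contrast update at pixel p:
\[
\bar{E}^{\mathrm{sca}}_{j} = \bar{\bar{G}}_{S}\,\bar{J}_{j},
\qquad
\bar{E}^{\mathrm{tot}}_{j} = \bar{E}^{\mathrm{inc}}_{j} + \bar{\bar{G}}_{D}\,\bar{J}_{j},
\qquad
\chi(p) = \frac{\sum_{j=1}^{T} \bar{J}_{j}(p)\,\overline{\bar{E}^{\mathrm{tot}}_{j}(p)}}
               {\sum_{j=1}^{T} \bigl|\bar{E}^{\mathrm{tot}}_{j}(p)\bigr|^{2}}
\]
```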
Step three, designing a loss function and establishing a deep expansion network PθThe optimization objective of (2);
step 3.1, constructing a deep expansion network P by using the formula (1)θTarget loss function L ofP
LP=LJ+LE1LSSIM2LMSE (1)
In the formula (1), LJRepresents an induced current loss and is obtained by the formula (2); l isERepresents the scattered field loss and is obtained by the formula (3); l isSSIMRepresents a loss of contrast image quality and is obtained by equation (3); l isMSERepresents the pixel-by-pixel loss and is obtained by the formula (4); lambda1,λ2Is a hyper-parameter used to balance the effects of image quality loss and pixel-by-pixel loss;
formula (2), reproduced in the original only as an image, defines the induced-current loss L_J; in formula (2), the compared quantities are the approximately true induced current matrix predicted for the target scatterer under the j-th transmitting antenna and the simulated induced current of the target scatterer under the j-th transmitting antenna;
formula (3), reproduced in the original only as an image, defines the scattered-field loss L_E; in formula (3), the compared quantities are the scattered field predicted by the deep unfolding network at the l-th receiving antenna and the true scattered field of the target scatterer at the l-th receiving antenna;
formula (4), reproduced in the original only as an image, defines L_SSIM; in formula (4), SSIM denotes the image structural-similarity loss;
formula (5), reproduced in the original only as an image, defines L_MSE; in formula (5), N denotes the number of pixels of one contrast image;
the remaining terms in formula (5) are the contrast value of the q-th pixel of the predicted contrast image and the contrast value of the q-th pixel of the target scatterer;
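Because formulas (2) to (5) survive only as images in this text, the following are plausible reconstructions consistent with the surrounding definitions; the exact normalizations are assumptions.

```latex
% Assumed forms of the four loss terms (hatted quantities are network predictions):
\[
L_{J} = \frac{1}{T}\sum_{j=1}^{T}\bigl\|\hat{J}_{j} - J^{\mathrm{sim}}_{j}\bigr\|_{2}^{2},
\qquad
L_{E} = \frac{1}{R}\sum_{l=1}^{R}\bigl\|\hat{E}^{\mathrm{sca}}_{l} - E^{\mathrm{sca}}_{l}\bigr\|_{2}^{2},
\]
\[
L_{\mathrm{SSIM}} = 1 - \mathrm{SSIM}\bigl(\hat{\chi}, \chi\bigr),
\qquad
L_{\mathrm{MSE}} = \frac{1}{N}\sum_{q=1}^{N}\bigl(\hat{\chi}(q) - \chi(q)\bigr)^{2}.
\]
```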
Step four: reconstruct the induced current, scattered field and contrast of the scatterer by training the deep unfolding network;
the deep unfolding network P_θ is trained on the mixed input data x₁; during the computation of the loss function L_P the network parameters θ are continuously optimized, so that the induced current, scattered field and contrast image reconstructed by the network gradually fit the physical quantities of the real scatterer, and the optimal network model is obtained for high-quality reconstruction of the induced current, the scattered field and the contrast image.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention provides an electromagnetic backscattering imaging method based on a physics-guided deep unfolding network, which feeds the network with a mixed input of two kinds of feature information, the induced current and the contrast. This enriches the input feature information, reduces the nonlinearity the network has to learn, and improves both the reconstruction quality and the reconstruction efficiency of the network.
2. The method uses the deep unfolding technique to combine the traditional SOM iterative method with existing physical knowledge and thereby establishes a physics-based network mapping; it applies comprehensive constraints with feature information such as the induced current, the scattered field and the contrast, effectively replaces the traditional SOM iterative method, embeds rich physical knowledge, and improves the prediction accuracy and generalization capability of the network.
3. In addition to the contrast loss terms, a current loss function and a scattered-field loss function are introduced into the objective function, which guarantees physical-model consistency and data consistency and further strengthens the generalization capability of the physics-based network.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of a deep expanded subnetwork structure of the present invention;
FIG. 3 is a diagram of the reconstruction results on the MNIST handwritten digit data set according to the present invention;
FIG. 4 is a graph showing the reconstruction results of Test #1 induced current and scattered field with 10% noise added according to the present invention;
FIG. 5 is a diagram showing the reconstruction result of "Austria" data set with different proportions of noise added according to the present invention;
FIG. 6 is a graph showing the reconstruction results of induced current and scattered field of "Austria" with 25% noise added in accordance with the present invention;
FIG. 7 is a diagram showing the result of experimental data reconstruction with a frequency of 3GHz according to the present invention;
FIG. 8 is a graph showing the reconstruction results of induced current and scattered field of the experimental data with the frequency of 3GHz according to the present invention.
Detailed Description
In this embodiment, the electromagnetic backscatter imaging method based on a physics-guided deep unfolding network takes the mixed input data x₁ as the input of the deep unfolding network P_θ and the approximately true induced current matrix of the scatterer as the output of P_θ. The deep unfolding technique is combined with the traditional SOM iterative process so as to effectively replace the traditional SOM method: one SOM iteration is mapped to one module of the deep unfolding network P_θ, and each module updates the induced current and the contrast simultaneously. Besides the high-quality reconstruction of the complete induced current realized by P_θ, the scattered field and the contrast can be further obtained from the state equation, data equation and contrast-update formula of the SOM. Specifically, as shown in FIG. 1, the method comprises the following steps:
step one, constructing mixed input data, including data acquisition and preprocessing;
Step 1.1: the electromagnetic scattering system uses T = 16 transmitting antennas and R = 32 receiving antennas, and the target scatterer is placed in a square region of interest D. The transmitting antennas illuminate the region of interest D with plane-wave signals in turn, and the 32 receiving antennas measure the scattered field simultaneously. The operating frequency is 400 MHz and the region of interest D is 2.0 m × 2.0 m.
Step 1.2: in the forward simulation, the target scatterer is placed in the region of interest D; to avoid the 'inverse crime' phenomenon of the inverse problem, the simulated induced current and the simulated scattered field of the target scatterer are computed for each incidence with the method of moments on a 100 × 100 grid.
Step 1.3, in inversion, dispersing the region of interest D into a grid with a size of M × M-64 × 64, and forming M together24096 sub-grids;
step 1.4, simulating a scattering field by using singular value decomposition
Figure BDA0003565973930000066
Processing to obtain dimension [ M2,T]=[4096,16]IndeedQualitative partial current
Figure BDA0003565973930000067
Step 1.5, changing deterministic partial Current
Figure BDA0003565973930000068
And obtaining the dimension of [ T, M]=[16,64,64]Three-dimensional current image matrix of
Figure BDA0003565973930000069
Step 1.6, three-dimensional current image matrix
Figure BDA00035659739300000610
Adding a dimension for storing a three-dimensional current image matrix
Figure BDA00035659739300000611
To obtain the real and imaginary components of dimension [16,2, 64%]Deterministic current matrix
Figure BDA00035659739300000612
Step 1.7, simulating the scattered field by using a back propagation method
Figure BDA00035659739300000613
Processing to generate dimension [ M, M]Low resolution scatterer image χBP
Step 1.8, aiming at low-resolution scatterer image chiBPAdding one dimension for storing low-resolution scatterer images xBPTo obtain an imaginary part of dimension N2,M,M]=[1,64,64]Three-dimensional image matrix of
Figure BDA00035659739300000614
Step 1.9, three-dimensional image matrix
Figure BDA00035659739300000615
Adding one dimension for storing T-16 three-dimensional image matrixes
Figure BDA00035659739300000616
To obtain a dimension of [ T, N2,M,M]=[16,1,64,64]Low resolution contrast image of
Figure BDA00035659739300000617
Step 1.10, deterministic Current
Figure BDA00035659739300000618
And low resolution contrast images
Figure BDA00035659739300000619
Splicing in the second dimension to obtain the dimension [16,3,64]Mixed input data x of1
The invention uses MNIST handwritten digits with an added random circle as the training data. Each scatterer is assumed to be homogeneous and lossless, its relative permittivity is drawn uniformly from 1.5 to 2.5, and the background is free space. The training-set scattered fields contain no noise, while 10% Gaussian noise is added to the test-set scattered fields. 5000 samples of the MNIST training set with an added random circle are randomly selected as the training set and 2500 as the validation set, and 1500 samples of the MNIST test set with an added random circle are randomly selected as the test set. To verify the effectiveness of the model under different shapes and different noise levels, and its generalization to real data, four test data sets with different shapes, different proportions of added noise and a dielectric constant of 2, as well as experimental data configured at a frequency of 3 GHz, are prepared;
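A small sketch of the assumed data-generation conventions follows; in particular, "x% Gaussian noise" is interpreted here as additive complex white Gaussian noise whose norm is x% of the norm of the clean scattered field, which is an assumption rather than a definition given in the text.

```python
import numpy as np

def add_scattered_field_noise(e_sca, level, rng=np.random.default_rng(0)):
    """Add complex white Gaussian noise whose Frobenius norm equals `level`
    times that of the clean field (assumed meaning of 'x% noise')."""
    noise = rng.standard_normal(e_sca.shape) + 1j * rng.standard_normal(e_sca.shape)
    noise *= level * np.linalg.norm(e_sca) / np.linalg.norm(noise)
    return e_sca + noise

def sample_relative_permittivity(rng=np.random.default_rng(0)):
    """Lossless scatterer with relative permittivity drawn uniformly from [1.5, 2.5]."""
    return rng.uniform(1.5, 2.5)
```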
Step two: build the deep unfolding network P_θ, take the mixed input data x₁ as the input of P_θ, and let P_θ output the approximately true complete induced current of the target scatterer;
Step 2.1: K sub-networks {P_θ,k | k ∈ [1, K]} are cascaded to form the deep unfolding network P_θ, where P_θ,k denotes the k-th cascaded sub-network; in this embodiment K = 4. Each cascaded sub-network P_θ,k adopts a U-net structure consisting of a contraction path and an expansion path; the structure of one sub-network is shown in FIG. 2. In this embodiment, the numbers of input and output channels of the deep unfolding network are 3 and 2, respectively.
The contraction path consists of two convolution blocks followed by a max-pooling layer, where each convolution block consists of a convolution layer with kernel size 3 × 3, a BN layer and a ReLU activation function; the max-pooling layer down-samples the feature map output by the convolution blocks, and after each down-sampling the feature-map size is halved while the number of channels is doubled.
The expansion path consists of one deconvolution operation followed by two convolution blocks, the deconvolution operation being a deconvolution layer with kernel size 2 × 2 and the convolution blocks having the same structure as in the contraction path; after each deconvolution the feature-map size is doubled and the number of channels is halved, and a skip connection fuses feature information of different depths with the feature map of the same size from the corresponding contraction path. The last layer of the network is a 1 × 1 convolution layer, which converts the 64-channel features into the required number of output channels.
step 2.2, when k is 1, mixing the input data x1Input deep unfolding network PθAnd passes through the kth sub-network Pθ,kThe feature map f having a dimension of c × c 16 × 16 is obtained by the narrowing-down path processing of (1)kThen the k-th sub-network P is output after the processing of the expanded pathθ,kPredicted induced current
From this induced current, the state equation, data equation and contrast-update formula of the SOM are used to obtain the k-th predicted total field, the k-th predicted scattered field and the k-th predicted contrast image of the target scatterer;
when k = 2, 3, ..., K, the induced current matrix output by the (k−1)-th sub-network P_θ,k−1 and the (k−1)-th scattered-field image matrix are concatenated along the second dimension to obtain the k-th mixed input data x_k, and the k-th sub-network P_θ,k outputs the k-th predicted induced current matrix. The K-th sub-network P_θ,K thus outputs the K-th predicted induced current matrix, which serves as the approximately true complete induced current output by the deep unfolding network P_θ. From this approximately true complete induced current, the state equation, data equation and contrast-update formula of the SOM are then used to obtain the predicted total field, predicted scattered field and predicted contrast image of the target scatterer;
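Under the same assumptions, the cascade of K = 4 sub-networks with the SOM physics update between stages could be organized as follows; SubNet refers to the sketch above, som_update stands in for the state-equation/data-equation/contrast-update step, and the single-channel image concatenated into the next stage's input (the (k−1)-th scattered-field image matrix in the text) is represented generically as aux_img.

```python
import torch
import torch.nn as nn

class UnrolledSOMNet(nn.Module):
    """Deep unfolding network P_theta: K cascaded sub-networks, each followed
    by an SOM physics update. `som_update` is a placeholder callable; its exact
    return values (total field, scattered field, contrast, and the one-channel
    image fed into the next stage) are assumptions here."""
    def __init__(self, som_update, K=4):
        super().__init__()
        self.subnets = nn.ModuleList([SubNet(in_ch=3, out_ch=2) for _ in range(K)])
        self.som_update = som_update

    def forward(self, x1):
        x = x1                                   # [T, 3, M, M] mixed input
        outputs = []
        for subnet in self.subnets:
            j_pred = subnet(x)                   # [T, 2, M, M] predicted current
            e_tot, e_sca, chi, aux_img = self.som_update(j_pred)
            outputs.append((j_pred, e_tot, e_sca, chi))
            x = torch.cat([j_pred, aux_img], dim=1)   # next stage's mixed input
        return outputs
```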
Step three: design the loss function and establish the optimization objective of the deep unfolding network P_θ;
The deep unfolding network P_θ continuously obtains optimized parameters θ through back-propagation of the loss function, which guides the network to better learn the reconstruction of the induced current, the scattered field and the contrast of the target scatterer.
Step 3.1: construct the target loss function L_P of the deep unfolding network P_θ by formula (1):
L_P = L_J + L_E + λ₁·L_SSIM + λ₂·L_MSE    (1)
In formula (1), L_J denotes the induced-current loss and is obtained by formula (2); L_E denotes the scattered-field loss and is obtained by formula (3); L_SSIM denotes the contrast-image quality loss and is obtained by formula (4); L_MSE denotes the pixel-wise loss and is obtained by formula (5); λ₁ and λ₂ are hyper-parameters that balance the influence of the image-quality loss and the pixel-wise loss;
formula (2), reproduced in the original only as an image, defines the induced-current loss L_J; in formula (2), the compared quantities are the approximately true induced current matrix predicted for the target scatterer under the j-th transmitting antenna and the simulated induced current of the target scatterer under the j-th transmitting antenna;
formula (3), reproduced in the original only as an image, defines the scattered-field loss L_E; in formula (3), the compared quantities are the scattered field predicted by the deep unfolding network at the l-th receiving antenna and the true scattered field of the target scatterer at the l-th receiving antenna;
formula (4), reproduced in the original only as an image, defines L_SSIM; in formula (4), SSIM denotes the image structural-similarity loss;
formula (5), reproduced in the original only as an image, defines L_MSE; in formula (5), the number of pixels of one contrast image is N = 4096 in this embodiment;
the remaining terms in formula (5) are the contrast value of the q-th pixel of the predicted contrast image and the contrast value of the q-th pixel of the target scatterer;
Step four: reconstruct the induced current, scattered field and contrast of the scatterer by training the deep unfolding network;
the deep unfolding network P_θ is trained on the mixed input data x₁, which lowers the nonlinearity the network has to learn. The predicted induced current, scattered field and contrast output by the deep unfolding network and the physical quantities of the real scatterer are used as constraints; the loss function L_P is computed and the network parameters θ are optimized, and the optimal network model is selected for high-quality reconstruction of the induced current, scattered field and contrast image, which guarantees physical-model consistency and data consistency.
The deep unfolding network uses the Adam optimizer with β₁ = 0.9 and β₂ = 0.999; the batch size is set to 1 and the number of training epochs to 40. The first 20 epochs keep the initial learning rate of 0.0002, after which the learning rate decays linearly from the 21st epoch until it drops to 0 at the last epoch. After each training pass the network parameters are adjusted according to the validation-set error, and during the experiments the hyper-parameters λ₁ and λ₂ are tuned to balance the weights between the losses until the network loss converges; the optimal model is then selected to achieve high-quality reconstruction of unknown scatterers.
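A minimal PyTorch sketch of this training configuration is given below; model, train_loader and compute_loss are assumed to exist and are named here only for illustration.

```python
import torch

# Adam optimizer and learning-rate schedule described above.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

def lr_lambda(epoch):
    # Constant learning rate for the first 20 epochs, then linear decay
    # so that the factor reaches 0 at the final (40th) epoch.
    return 1.0 if epoch < 20 else max(0.0, (39 - epoch) / 19)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(40):
    for x1, targets in train_loader:              # batch size 1
        optimizer.zero_grad()
        # L_P = L_J + L_E + lambda1 * L_SSIM + lambda2 * L_MSE
        loss = compute_loss(model(x1), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```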
The present invention uses the structural similarity (SSIM) and the root mean square error (RMSE) as evaluation indicators for both the intra-data-set and cross-data-set tests. The proposed method is compared with a method that reconstructs the relative permittivity with a plain U-net network, whose objective function is L_U = L_SSIM; for ease of comparison, this method of reconstructing images with a U-net network is referred to as U-net in the description of the results.
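A small sketch of these two evaluation indicators, computed on real-valued contrast images, might look as follows; the choice of data_range for SSIM is an assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(chi_pred, chi_true):
    """SSIM and RMSE between predicted and ground-truth contrast images
    (both real-valued [M, M] arrays)."""
    ssim = structural_similarity(chi_true, chi_pred,
                                 data_range=chi_true.max() - chi_true.min())
    rmse = np.sqrt(np.mean((chi_pred - chi_true) ** 2))
    return ssim, rmse
```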
The method of the present invention is trained and tested on the handwritten digit data set and tested directly on the "Austria" profiles and on the experimental data. In the following figures, GT denotes the contrast image of the real target scatterer, SOM denotes the image reconstructed with the traditional SOM iterative algorithm, and U-net denotes the image reconstructed with the U-net network. The reconstruction results on the MNIST test set are shown in FIG. 3. The first panel of FIG. 4 shows the current distribution of Test #1 under the first transmitting antenna, from left to right the input induced current, the predicted complete induced current and the simulated induced current, with the real-part current in the first row and the imaginary-part current in the second row; the second panel shows the fit of the scattered field from each transmitting antenna as received by all receiving antennas, with the real part in the first row and the imaginary part in the second row.
The invention also performs cross-data-set tests; the reconstruction results on the "Austria" data set and on the experimental data are shown in FIG. 5, FIG. 6, FIG. 7 and FIG. 8, respectively. FIG. 5 shows the reconstruction of the "Austria" profile with a dielectric constant of 2 under noise of different levels, where Test #5 to Test #8 have 10%, 20%, 25% and 30% additive white Gaussian noise, respectively. FIG. 6 shows the current distribution of Test #7 under the first transmitting antenna, again from left to right the input induced current, the predicted complete induced current and the simulated induced current, with the real part in the first row and the imaginary part in the second row; the second panel shows the fit of the scattered field from each transmitting antenna as received by all receiving antennas, with the real part in the first row and the imaginary part in the second row. The reconstruction results on the experimental data at 3 GHz are shown in FIG. 7, where the dotted line marks the position of the real object. FIG. 8 shows the current distribution of the experimental data under the first transmitting antenna, from left to right the input induced current, the predicted complete induced current and the simulated induced current, with the real part in the first row and the imaginary part in the second row; the second panel shows the fit of the scattered field from each transmitting antenna as received by all receiving antennas, with the real part in the first row and the imaginary part in the second row.
The reconstruction results show that the method can quickly and accurately reconstruct physical quantities such as the induced current, the scattered field and the contrast of the scatterer. The deep unfolding network trained by the method effectively replaces the traditional SOM iterative method, guarantees physical-model consistency and data consistency, and shows clear advantages on the "Austria" data sets with different noise levels and on the experimental data, which also proves that the trained physics-based model has learned physical knowledge and has better generalization capability.

Claims (1)

1. An electromagnetic backscattering imaging method based on a physics-guided deep unfolding network, characterized by comprising the following steps:
Step one: construct mixed input data, including data acquisition and preprocessing;
Step 1.1: the electromagnetic scattering system uses T transmitting antennas and R receiving antennas, and the target scatterer is placed in a square region of interest D; the transmitting antennas illuminate the region of interest D with plane-wave signals in turn, and the R receiving antennas measure the scattered field simultaneously;
Step 1.2: in the forward simulation, the simulated induced current and the simulated scattered field of the target scatterer are computed with the method of moments;
Step 1.3: in the inversion, the region of interest D is discretized into an M × M grid, giving M² sub-grids in total;
Step 1.4: the simulated scattered field is processed by singular value decomposition to obtain the deterministic part of the induced current, of dimension [M², T], where T denotes the T transmitting-antenna channels;
Step 1.5: the deterministic partial current is reshaped into a three-dimensional current image matrix of dimension [T, M, M];
Step 1.6: one dimension is added to the three-dimensional current image matrix to store its real and imaginary parts, giving a deterministic current matrix of dimension [T, N₁, M, M], where N₁ denotes the number of real- and imaginary-part channels of the deterministic current;
Step 1.7: the simulated scattered field is processed with the back-propagation method to generate a low-resolution scatterer image χ_BP of dimension [M, M];
Step 1.8: one dimension is added to store the imaginary part of the low-resolution scatterer image χ_BP, giving a three-dimensional image matrix of dimension [N₂, M, M];
Step 1.9: one dimension is added to the three-dimensional image matrix to store T copies of it, giving a low-resolution contrast image of dimension [T, N₂, M, M], where N₂ denotes the number of imaginary-part channels of the low-resolution contrast image;
Step 1.10: the deterministic current matrix and the low-resolution contrast image are concatenated along the second dimension to obtain the mixed input data x₁ of dimension [T, N, M, M], where N = N₁ + N₂ denotes the number of real- and imaginary-part channels of the mixed input data x₁;
Step two: build a deep unfolding network P_θ, take the mixed input data x₁ as the input of P_θ, and let P_θ output the approximately true complete induced current of the target scatterer;
Step 2.1: K sub-networks {P_θ,k | k ∈ [1, K]} are cascaded to form the deep unfolding network P_θ, where P_θ,k denotes the k-th cascaded sub-network, and each cascaded sub-network P_θ,k adopts a U-net structure consisting of a contraction path and an expansion path;
the contraction path consists of two convolution blocks followed by a max-pooling layer, where each convolution block consists of a convolution layer with kernel size a × a, a BN layer and a ReLU activation function;
the expansion path consists of one deconvolution operation followed by two convolution blocks, the deconvolution operation being a deconvolution layer with kernel size b × b and the convolution blocks having the same structure as in the contraction path;
Step 2.2: when k = 1, the mixed input data x₁ is fed into the deep unfolding network P_θ; the contraction path of the k-th sub-network P_θ,k produces a feature map f_k of size c × c, and after processing by the expansion path the k-th sub-network P_θ,k outputs the predicted induced current; from this induced current, the state equation, data equation and contrast-update formula of the SOM are used to obtain the k-th predicted total field, the k-th predicted scattered field and the k-th predicted contrast image of the target scatterer;
when k = 2, 3, ..., K, the induced current matrix output by the (k−1)-th sub-network P_θ,k−1 and the (k−1)-th scattered-field image matrix are concatenated along the second dimension to obtain the k-th mixed input data x_k, and the k-th sub-network P_θ,k outputs the k-th predicted induced current matrix; the K-th sub-network P_θ,K thus outputs the K-th predicted induced current matrix, which serves as the approximately true complete induced current output by the deep unfolding network P_θ; from this approximately true complete induced current, the state equation, data equation and contrast-update formula of the SOM are then used to obtain the predicted total field, predicted scattered field and predicted contrast image of the target scatterer;
Step three: design the loss function and establish the optimization objective of the deep unfolding network P_θ;
Step 3.1: construct the target loss function L_P of the deep unfolding network P_θ by formula (1):
L_P = L_J + L_E + λ₁·L_SSIM + λ₂·L_MSE    (1)
in formula (1), L_J denotes the induced-current loss and is obtained by formula (2); L_E denotes the scattered-field loss and is obtained by formula (3); L_SSIM denotes the contrast-image quality loss and is obtained by formula (4); L_MSE denotes the pixel-wise loss and is obtained by formula (5); λ₁ and λ₂ are hyper-parameters that balance the influence of the image-quality loss and the pixel-wise loss;
formula (2), reproduced in the original only as an image, defines the induced-current loss; in formula (2), the compared quantities are the approximately true induced current matrix predicted for the target scatterer under the j-th transmitting antenna and the simulated induced current of the target scatterer under the j-th transmitting antenna;
formula (3), reproduced in the original only as an image, defines the scattered-field loss; in formula (3), the compared quantities are the scattered field predicted by the deep unfolding network at the l-th receiving antenna and the true scattered field of the target scatterer at the l-th receiving antenna;
formula (4), reproduced in the original only as an image, defines L_SSIM; in formula (4), SSIM denotes the image structural-similarity loss;
formula (5), reproduced in the original only as an image, defines L_MSE; in formula (5), N denotes the number of pixels of one contrast image, and the remaining terms are the contrast value of the q-th pixel of the predicted contrast image and the contrast value of the q-th pixel of the target scatterer;
Step four: reconstruct the induced current, scattered field and contrast of the scatterer by training the deep unfolding network;
the deep unfolding network P_θ is trained on the mixed input data x₁; during the computation of the loss function L_P the network parameters θ are continuously optimized, so that the induced current, scattered field and contrast image reconstructed by the network gradually fit the physical quantities of the real scatterer, and the optimal network model is obtained for high-quality reconstruction of the induced current, the scattered field and the contrast image.


Publications (2)

Publication Number Publication Date
CN114626987A (application publication) 2022-06-14
CN114626987B (granted patent) 2024-02-20


Patent Citations (4)

CN111428407A (Hangzhou Dianzi University), published 2020-07-17: Electromagnetic scattering calculation method based on deep learning
CN111610374A (Hangzhou Dianzi University), published 2020-09-01: Scattered field phase recovery method based on convolutional neural network
CN111999731A (Hefei University of Technology), published 2020-11-27: Electromagnetic backscattering imaging method based on perception generation countermeasure network
CN113378472A (Hefei University of Technology), published 2021-09-10: Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network

Non-Patent Citations (1)

Wang Longgang, Zhong Wei, Ruan Hengxin, He Kai, Li Lianlin: "Deep learning methods for large-scale electromagnetic scattering and inverse scattering problems", Chinese Journal of Radio Science, no. 05



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant