CN111999731A - Electromagnetic backscattering imaging method based on perception generation countermeasure network - Google Patents

Electromagnetic backscattering imaging method based on perception generation countermeasure network

Info

Publication number
CN111999731A
Authority
CN
China
Prior art keywords
image
discriminator
network
countermeasure
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010870574.XA
Other languages
Chinese (zh)
Other versions
CN111999731B (en)
Inventor
宋仁成
黄优优
刘羽
李畅
成娟
陈勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202010870574.XA priority Critical patent/CN111999731B/en
Publication of CN111999731A publication Critical patent/CN111999731A/en
Application granted granted Critical
Publication of CN111999731B publication Critical patent/CN111999731B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/418 Theoretical aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an electromagnetic backscattering imaging method based on a perceptual generative adversarial network, which comprises the following steps: 1, according to the measured scattered field, quickly generating a low-resolution scatterer image with the back-propagation method; 2, in the network construction stage, adopting a generative adversarial architecture and designing the generator and discriminator structures; 3, in the loss-function design stage, adding a perceptual adversarial loss, extracted from the hidden layers of the discriminator, to the objective function, so that the reconstructed image matches the target image in both pixels and features; and 4, training the perceptual generative adversarial network and reconstructing the relative dielectric constant of the scatterer. The invention enables the generator network to learn target feature information more effectively and thereby improves imaging quality.

Description

Electromagnetic backscattering imaging method based on perception generation countermeasure network
Technical Field
The invention belongs to the technical field of electromagnetic backscatter imaging, and particularly relates to electromagnetic backscatter imaging by using a deep learning method.
Background
Electromagnetic backscattering imaging determines the position, shape and physical parameters of a scatterer by combining the measured scattered field with an inversion algorithm. The problem is generally highly nonlinear and ill-posed. After many years of development, researchers have proposed a variety of electromagnetic backscattering reconstruction algorithms; among them, quantitative methods are the current mainstream direction of research because they can recover all the information of the scatterer.
Quantitative methods generally define a nonlinear objective function containing regularization terms and then iteratively optimize the target parameters using global or local linearization approximations. Typical quantitative methods include the Distorted Born Iterative Method (DBIM), Contrast Source Inversion (CSI), the Subspace-based Optimization Method (SOM), and so on. Traditional nonlinear quantitative electromagnetic backscattering methods generally suffer from bottlenecks such as high computational complexity and unstable imaging quality.
In recent years, deep neural networks have been widely applied to artificial-intelligence problems such as pattern recognition, classification and regression because of their strong mapping capability and fast computation. Inspired by this, researchers have recently applied convolutional neural network (CNN) techniques to the electromagnetic backscattering problem. For example, Li et al. proposed the DeepNIS algorithm, built on a convolutional neural network by analogy between conventional nonlinear iterative methods and CNNs. Wei et al. proposed the DCS scheme, which rapidly constructs an approximate image of the target parameters and then maps it to the accurate image with a simplified U-net. The test results in these papers show that the imaging quality and speed of such deep backscattering methods exceed those of traditional nonlinear iterative methods, but there is still considerable room for improvement. Many backscattering problems carry significant prior information; in medical imaging, for example, the object to be detected and the background image usually have distinct structural information, so the reconstructed image should contain the corresponding features. If the features of the target image are not explicitly constrained, the reconstructed image is prone to artifacts.
Disclosure of Invention
The invention provides an electromagnetic backscattering imaging method based on a perceptual generative adversarial network to overcome the defects of the prior art, so that the generator network can learn target feature information more effectively and the reconstructed image has better quality.
The invention adopts the following technical scheme to solve the technical problem:
The invention relates to an electromagnetic backscattering imaging method based on a perceptual generative adversarial network, which is characterized by comprising the following steps:
Step one, data generation:
Step 1.1, calculating the scattered field on an M×M grid using the method of moments;
Step 1.2, generating a low-resolution scatterer image x from the scattered field using the back-propagation method;
Step two, building the structure of the perceptual generative adversarial network:
Step 2.1, constructing the perceptual generative adversarial network from a generator Gθ and a discriminator Dφ;
Step 2.2, the generator Gθ uses a U-net network whose input is the scatterer image x; the output of the generator Gθ is an approximately true reconstructed image Gθ(x);
Step 2.3, the discriminator Dφ uses a convolutional neural network; the discriminator Dφ has two input channels: one takes the scatterer image x as a condition and the other takes either the reconstructed image Gθ(x) or the real target image y; the output of the discriminator Dφ is a feature discrimination matrix;
the discriminator Dφ also serves as a feature extractor that extracts the features of different hidden layers and judges the feature level output after each hidden layer: for the real target image y the discriminator Dφ is expected to judge the feature level as '1', and for the reconstructed image Gθ(x) it is expected to judge the feature level as '0';
Step three, designing the loss functions of the perceptual generative adversarial network:
Step 3.1, design the target loss function LG of the generator Gθ using formula (1):
[formula (1) appears as an image in the original document]
In formula (1), L1 denotes the 1-norm loss and is obtained with formula (2); LPerp,j denotes the perceptual adversarial loss computed from the j-th hidden layer and is obtained with formula (3); α is a hyper-parameter, M is the number of hidden layers used in the feature extractor, and λj is the weight of the perceptual adversarial loss computed from the j-th hidden layer;
[formula (2) appears as an image in the original document]
In formula (2), N denotes the number of images in one batch, xi denotes the i-th low-resolution input image in the batch, and yi denotes the real target image corresponding to the i-th input image xi;
[formula (3) appears as an image in the original document]
In formula (3), dj denotes the features of the j-th hidden layer of the discriminator Dφ;
Step 3.2, design the target loss function LD of the discriminator Dφ using formula (4):
[formula (4) appears as an image in the original document]
In formula (4), [·]+ = max(0, ·), and m is another hyper-parameter;
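Formulas (1)-(4) are reproduced as images in the published text, so their exact forms are not recoverable here. Based on the symbol definitions above, a plausible LaTeX reconstruction is given below as an assumption, not as the published formulas; in particular, the published LD may additionally contain the least-squares discrimination term mentioned in the detailed description.

% Hedged reconstruction of formulas (1)-(4); symbols follow the definitions in the text.
\begin{align}
L_G &= L_1 + \alpha \sum_{j=1}^{M} \lambda_j \, L_{Perp,j} \tag{1} \\
L_1 &= \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert G_\theta(x_i) - y_i \bigr\rVert_1 \tag{2} \\
L_{Perp,j} &= \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert d_j\bigl(x_i, G_\theta(x_i)\bigr) - d_j\bigl(x_i, y_i\bigr) \bigr\rVert_1 \tag{3} \\
L_D &= \Bigl[\, m - \sum_{j=1}^{M} \lambda_j \, L_{Perp,j} \Bigr]_+ , \qquad [\,\cdot\,]_+ = \max(0, \cdot) \tag{4}
\end{align}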
Step four, training the perceptual generative adversarial network and reconstructing the relative dielectric constant of the scatterer:
inputting the low-resolution image x into the perceptual generative adversarial network, training the discriminator Dφ and the generator Gθ alternately during the back-propagation of the loss functions, and continuously adjusting the network parameters so that the difference between the reconstructed image output by the network and the real image keeps decreasing until the error no longer decreases, thereby obtaining an optimal model and realizing the reconstruction of the relative dielectric constant of the scatterer.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides a novel electromagnetic backscattering imaging method based on a perceptual generative adversarial network, in which the discriminator guides the training of the generator, thereby achieving high-quality imaging reconstruction.
2. The invention introduces a perceptual adversarial loss into the objective function, so that the reconstructed image matches the target image in both pixels and features, which further improves imaging quality.
3. The electromagnetic backscattering imaging method based on the perceptual generative adversarial network takes the structural information of the reconstructed image into account and provides the network with more feature information, so that the features of the reconstructed image are constrained and the model prediction is more accurate.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a structure diagram of the generator of the perceptual generative adversarial network of the present invention;
FIG. 3 is a structure diagram of the discriminator of the perceptual generative adversarial network of the present invention;
FIG. 4 shows reconstruction results on the MNIST handwritten digit data set according to the present invention;
FIG. 5 shows reconstruction results on the "Austria" data set with different proportions of added noise according to the present invention;
FIG. 6 shows reconstruction results on experimental data at a frequency of 3 GHz according to the present invention.
Detailed Description
In this embodiment, the electromagnetic backscattering imaging method based on a perceptual generative adversarial network first generates a low-resolution scatterer image x from the measured scattered field using the back-propagation method, and then uses the generator Gθ to map the low-resolution image to the target image, producing an approximately true reconstructed image Gθ(x). The low-resolution image x generated by BP serves as the condition of the discriminator, and either the reconstructed image Gθ(x) or the real target image y is paired with it and fed into the discriminator, which also serves as a feature extractor that extracts the features of different hidden layers, so that the reconstructed image matches the target image in both pixels and features. Specifically, as shown in FIG. 1, the method comprises the following steps:
Step one, data generation:
Step 1.1, in the two-dimensional transverse-magnetic case, the frequency is assumed to be 400 MHz and the region of interest is 2.0 × 2.0 meters, discretized into a 64 × 64 grid. There are 16 plane-wave incidences, and 32 receiving antennas are evenly distributed on a circle with a radius of 3.0 meters. In order to avoid the "inverse crime" in the inverse problem, for each incidence the scattered field is calculated on a 100 × 100 grid using the method of moments;
Step 1.2, generating a low-resolution scatterer image x from the scattered field using the back-propagation method. MNIST handwritten digits with added random circles are adopted as the training set; each scatterer is assumed to be homogeneous and lossless, with a relative dielectric constant randomly distributed between 1.5 and 2.5, and the background is free space;
The training-set scattered fields contain no noise, while 10% additive white Gaussian noise is added to the test-set scattered fields; most real measured scattered fields contain noise, and the noise added to the test set is meant to simulate the real scattering scenario. 5000 MNIST images with added random circles are randomly selected as the training set, another 2500 as the validation set, and finally 1500 MNIST test images with added random circles are randomly selected for testing. Meanwhile, in order to verify the effectiveness of the model under different noise levels and its generalization capability to real data, the "Austria" data with different proportions of noise and a dielectric constant of 1.5, and the experimental data at a frequency of 3 GHz in the FoamDielExt configuration, are generated respectively;
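The patent does not print the back-propagation (BP) formulas it uses to build the low-resolution image x. The NumPy sketch below follows the BP initialization commonly used in the inverse-scattering literature and is an assumption, not the patent's code; the receiver and domain Green's matrices Gs and Gd, and the field matrices Esca and Einc, are assumed to be precomputed, one column per incidence.

import numpy as np

def backpropagation_image(Gs, Gd, Esca, Einc):
    """Hypothetical back-propagation (BP) initialization of the contrast.

    Gs   : (n_rx, n_cell)   receiver Green's matrix      (assumed precomputed)
    Gd   : (n_cell, n_cell) domain Green's matrix        (assumed precomputed)
    Esca : (n_rx, n_inc)    measured scattered field, one column per incidence
    Einc : (n_cell, n_inc)  incident field on the imaging grid
    Returns an estimate of the contrast chi on the n_cell grid cells.
    """
    n_cell = Gd.shape[0]
    num = np.zeros(n_cell, dtype=complex)
    den = np.zeros(n_cell)
    for p in range(Esca.shape[1]):
        e = Esca[:, p]
        # Back-projected induced current with an analytically chosen step size gamma.
        GsH_e = Gs.conj().T @ e
        Gs_GsH_e = Gs @ GsH_e
        gamma = np.vdot(Gs_GsH_e, e) / np.vdot(Gs_GsH_e, Gs_GsH_e)
        J = gamma * GsH_e
        # Total field implied by this current estimate.
        Etot = Einc[:, p] + Gd @ J
        num += J * np.conj(Etot)
        den += np.abs(Etot) ** 2
    return num / den  # relative permittivity is roughly 1 + chi for a lossless TM setup

The low-resolution image x fed to the generator could then be, for example, the real part of 1 + chi reshaped to the 64 × 64 grid.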
Step two, building the structure of the perceptual generative adversarial network:
Step 2.1, constructing the perceptual generative adversarial network from a generator Gθ and a discriminator Dφ;
Step 2.2, the generator Gθ uses a U-net network whose input is the scatterer image x, and its output is an approximately true reconstructed image Gθ(x). The network structure of the generator is shown in FIG. 2; it mainly consists of a contraction path and an expansion path and comprises an input layer, convolutional layers, pooling layers and an output layer. In the contraction path, every two 3 × 3 convolutional layers are followed by a 2 × 2 max-pooling layer, a ReLU activation function is used after each convolutional layer, and the number of channels increases with each down-sampling step applied to the original picture. In each step of the up-sampling in the expansion path there are one 2 × 2 convolutional layer and two 3 × 3 convolutional layers. In FIG. 2, the arrows between the contraction path and the expansion path indicate skip connections, so that the feature map from the corresponding level of the contraction path is added at each up-sampling step, ensuring that more low-level features are fused into the output image. The numbers indicate the numbers of channels; the generator network has 1 input channel and 1 output channel.
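No source code accompanies the patent; the PyTorch sketch below illustrates a U-net generator of the kind described (one input channel for the BP image, one output channel, 3 × 3 convolutions with ReLU, 2 × 2 max pooling, skip connections). The depth, channel counts and the 1 × 1 output convolution are assumptions.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions, each followed by ReLU, as described for the contraction path.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Minimal U-net generator sketch: 1-channel BP image in, 1-channel reconstruction out."""
    def __init__(self, base=64):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)                                      # 2x2 max pooling
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)   # 2x2 up-convolution
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)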
Step 2.3, the discriminator Dφ uses a convolutional neural network. The discriminator Dφ has two input channels: one takes the scatterer image x as a condition and the other takes either the reconstructed image Gθ(x) or the real target image y. The output of the discriminator Dφ is a feature discrimination matrix rather than a scalar value, so that the trained model pays more attention to image details. The discriminator Dφ also serves as a feature extractor that extracts the features of different hidden layers and judges the feature level output after each hidden layer: for the real target image y the discriminator Dφ is expected to judge the feature level as '1', and for the reconstructed image Gθ(x) it is expected to judge the feature level as '0'. By extracting the features of different hidden layers of the discriminator Dφ, the reconstructed image is made to match the target image in both pixels and features, which improves imaging quality. The network structure of the discriminator is shown in FIG. 3; it is a convolutional neural network composed mainly of a series of convolution-pooling-LeakyReLU layers, in which the LeakyReLU activation follows each convolutional layer and the pooling down-samples the input picture, and the output layer is a 4 × 4 convolutional layer. In FIG. 3, dj denotes the features of the corresponding hidden layer of the discriminator; only one hidden layer is used in this embodiment. The numbers indicate the numbers of channels; the discriminator network has 2 input channels and 1 output channel.
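In the same hedged spirit, a minimal sketch of the two-channel conditional discriminator described above, with convolution-pooling-LeakyReLU blocks, a 4 × 4 output convolution producing the feature discrimination matrix, and an option to return hidden-layer features dj for the perceptual adversarial loss; the specific layer sizes are assumptions.

import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    """Sketch of the two-channel conditional discriminator; layer sizes are assumptions."""
    def __init__(self, base=64):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(2, base, 3, padding=1), nn.MaxPool2d(2), nn.LeakyReLU(0.2, inplace=True))
        self.block2 = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, padding=1), nn.MaxPool2d(2), nn.LeakyReLU(0.2, inplace=True))
        self.out = nn.Conv2d(base * 2, 1, 4, padding=1)  # 4x4 output conv -> feature discrimination matrix

    def forward(self, condition, image, return_hidden=False):
        h1 = self.block1(torch.cat([condition, image], dim=1))
        h2 = self.block2(h1)
        score_map = self.out(h2)
        if return_hidden:
            return score_map, [h1]  # hidden-layer features d_j for the perceptual loss (M = 1 here)
        return score_map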
Step three, designing a loss function for sensing, generating and judging the network;
the perceptually-generated confrontation network includes a generator GθAnd arbiter network DφThe two networks need to be respectively established with loss functions, and the two networks are alternately trained and continuously optimized in the process of back propagation of the loss functions.
Step 3.1, design generator G using formula (1)θTarget loss function L ofGAt the generator GθAdding a perceptual countermeasure loss to the loss function of (D), using a discriminator network DφComputing a perceptual countermeasure loss between the reconstructed image and the true target image as a feature extractor:
[formula (1) appears as an image in the original document]
In formula (1), L1 denotes the 1-norm loss and is obtained with formula (2); LPerp,j denotes the perceptual adversarial loss computed from the j-th hidden layer and is obtained with formula (3); α is a hyper-parameter that balances the influence of the 1-norm loss and the perceptual adversarial loss; M is the number of hidden layers used in the feature extractor, and λj is the weight of the perceptual adversarial loss computed from the j-th hidden layer, used to balance the influence of the different hidden layers;
[formula (2) appears as an image in the original document]
In formula (2), N denotes the number of images in one batch, xi denotes the i-th low-resolution input image in the batch, and yi denotes the real target image corresponding to the i-th input image xi;
[formula (3) appears as an image in the original document]
In formula (3), dj denotes the features of the j-th hidden layer of the discriminator Dφ;
Step 3.2, design the target loss function LD of the discriminator Dφ using formula (4):
[formula (4) appears as an image in the original document]
In formula (4), [·]+ = max(0, ·); m is another hyper-parameter used to set an upper bound on the perceptual adversarial loss in the discriminator. In the experiments, the hyper-parameter m is tuned first, and the weight of each hidden layer is then adjusted according to the value of m;
In step three, a loss function in least-squares form is used instead of the logarithmic loss of the original GAN, which avoids the gradient-vanishing phenomenon that often occurs when training generative adversarial networks and allows the model to converge better. In the design of the generator objective of step 3.1, the invention adopts the perceptual adversarial loss instead of the generation adversarial loss of the original GAN, in order to ease the learning of the generative adversarial network so that the model converges better. The invention uses the discriminator network Dφ as the feature extractor for the perceptual adversarial loss, so that the reconstructed image matches the target image in both pixels and features; no pre-trained network is used, which ensures that the features employed are the true features of the target.
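A sketch of how the generator and discriminator objectives of step three could be computed, reusing the hypothetical generator and discriminator sketches given earlier (the discriminator returns its hidden-layer features when return_hidden=True). The least-squares term, the hinge on m and all weight values are stated here as assumptions, not as the patent's exact formulas.

import torch
import torch.nn.functional as F

def generator_loss(G, D, x, y, alpha=1.0, lambdas=(1.0,)):
    """Sketch of L_G = L1 + alpha * sum_j lambda_j * L_Perp,j (weights are placeholders)."""
    fake = G(x)
    l1 = F.l1_loss(fake, y)
    _, feats_fake = D(x, fake, return_hidden=True)
    _, feats_real = D(x, y, return_hidden=True)
    l_perp = sum(lam * F.l1_loss(ff, fr) for lam, ff, fr in zip(lambdas, feats_fake, feats_real))
    return l1 + alpha * l_perp

def discriminator_loss(G, D, x, y, m=1.0, lambdas=(1.0,)):
    """Sketch of L_D: least-squares discrimination term plus the hinge [m - sum_j lambda_j L_Perp,j]_+."""
    fake = G(x).detach()
    score_fake, feats_fake = D(x, fake, return_hidden=True)
    score_real, feats_real = D(x, y, return_hidden=True)
    # Least-squares form replacing the logarithmic GAN loss (real -> 1, fake -> 0).
    ls = 0.5 * ((score_real - 1.0) ** 2).mean() + 0.5 * (score_fake ** 2).mean()
    l_perp = sum(lam * F.l1_loss(ff, fr) for lam, ff, fr in zip(lambdas, feats_fake, feats_real))
    return ls + torch.clamp(m - l_perp, min=0.0)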
Step four, training the perceptual generative adversarial network and reconstructing the relative dielectric constant of the scatterer:
Step 4.1, inputting the low-resolution image x into the perceptual generative adversarial network, and training the discriminator Dφ and the generator Gθ alternately during the back-propagation of the loss functions, so that the perceptual generative adversarial network model after the s-th training round is obtained and the reconstructed image after the s-th round is output.
Step 4.2, obtaining the s-th training error from the reconstructed image after the s-th training round and the corresponding real target image.
The optimizer used for the network is the Adam optimizer with β1 = 0.9 and β2 = 0.999; the batch size is set to 1 and the initial learning rate is 0.0002. The learning rate starts to decrease once the number of iterations reaches half of the total number of epochs, until it reaches 0 at the last epoch. The number of hidden layers M of the discriminator is 1; since the shapes in the data set generated in the present invention are relatively simple and do not have much texture detail, only one hidden layer of the discriminator is selected to extract the perceptual adversarial loss.
Step 4.3, continuously adjusting the network parameters according to the s-th training error, performing the (s + 1)-th training round, and repeating steps 4.1 and 4.2 so that the difference between the reconstructed image output by the network and the real image keeps decreasing, until the error no longer decreases, thereby obtaining the optimal model and realizing the reconstruction of the relative dielectric constant of the scatterer.
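A training-loop sketch consistent with the settings above (Adam with β1 = 0.9, β2 = 0.999, batch size 1, initial learning rate 0.0002, linear decay over the second half of the epochs). It reuses the hypothetical generator_loss and discriminator_loss helpers sketched earlier and is not the patent's actual training code.

import torch

def train(G, D, loader, epochs=100, lr=2e-4):
    """Alternating training sketch; optimizer settings follow the description."""
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.9, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.9, 0.999))
    # Linear learning-rate decay toward zero over the second half of training.
    def decay(e):
        half = epochs // 2
        return 1.0 if e < half else max(0.0, 1.0 - (e - half) / (epochs - half))
    sched_g = torch.optim.lr_scheduler.LambdaLR(opt_g, decay)
    sched_d = torch.optim.lr_scheduler.LambdaLR(opt_d, decay)
    for epoch in range(epochs):
        for x, y in loader:                       # x: BP image, y: true permittivity map
            opt_d.zero_grad()
            discriminator_loss(G, D, x, y).backward()
            opt_d.step()
            opt_g.zero_grad()
            generator_loss(G, D, x, y).backward()
            opt_g.step()
        sched_g.step()
        sched_d.step()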
The present invention uses the Structural Similarity (SSIM) and the Root Mean Square Error (RMSE) as evaluation indicators for both the intra-data-set and the cross-data-set tests. The method provided by the invention is compared with a method that reconstructs the relative dielectric constant using a U-net network, where the objective function of the U-net network is LG = L1; for ease of comparison, the method that reconstructs images with a U-net network is referred to as U-net when describing the results. The method of the present invention is trained and tested on the handwritten-digit data set and tested directly on the "Austria" data set and the experimental data. The reconstruction results within the MNIST handwritten digit data set are shown in FIG. 4, where GT denotes the true target image, BP denotes the low-resolution image rapidly reconstructed with the back-propagation algorithm, and U-net denotes images reconstructed with the U-net network. The invention is also tested across data sets; the reconstruction results on the "Austria" data set and on the experimental data are shown in FIG. 5 and FIG. 6, respectively. FIG. 5 shows the reconstruction of the "Austria" data set with different proportions of added noise, where Test5-Test8 denote, in order, images with 10%, 20%, 25% and 30% added noise; GT denotes the true target image; BP denotes the low-resolution image rapidly reconstructed with the back-propagation algorithm with 10% added noise; U-net denotes images reconstructed with the U-net network. FIG. 6 shows the reconstruction of the experimental data at a frequency of 3 GHz, where GT denotes the true target image, BP denotes the low-resolution image rapidly reconstructed with the back-propagation algorithm, U-net denotes images reconstructed with the U-net network, and the dotted line indicates the position of the real image.
The reconstruction results show that the method provided by the invention can significantly improve the quality of the reconstructed image and reduce image artifacts, with particularly clear advantages on the more challenging "Austria" data set and on the experimental data. The results also demonstrate that the trained model has good generalization capability and that the proposed method remains effective under different proportions of noise interference.
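For reference, the two evaluation indicators can be computed as follows; this is a sketch using scikit-image for SSIM, and the data_range choice is an assumption rather than the setting used in the patent's experiments.

import numpy as np
from skimage.metrics import structural_similarity

def evaluate(recon, target):
    """SSIM and RMSE between a reconstructed and a true relative-permittivity map."""
    ssim = structural_similarity(recon, target, data_range=target.max() - target.min())
    rmse = float(np.sqrt(np.mean((recon - target) ** 2)))
    return ssim, rmse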

Claims (1)

1. An electromagnetic backscattering imaging method based on a perceptual generative adversarial network, characterized by comprising the following steps:
Step one, data generation:
Step 1.1, calculating the scattered field on an M×M grid using the method of moments;
Step 1.2, generating a low-resolution scatterer image x from the scattered field using the back-propagation method;
Step two, building the structure of the perceptual generative adversarial network:
Step 2.1, constructing the perceptual generative adversarial network from a generator Gθ and a discriminator Dφ;
Step 2.2, the generator Gθ uses a U-net network whose input is the scatterer image x; the output of the generator Gθ is an approximately true reconstructed image Gθ(x);
Step 2.3, the discriminator Dφ uses a convolutional neural network; the discriminator Dφ has two input channels: one takes the scatterer image x as a condition and the other takes either the reconstructed image Gθ(x) or the real target image y; the output of the discriminator Dφ is a feature discrimination matrix;
the discriminator Dφ also serves as a feature extractor that extracts the features of different hidden layers and judges the feature level output after each hidden layer: for the real target image y the discriminator Dφ is expected to judge the feature level as '1', and for the reconstructed image Gθ(x) it is expected to judge the feature level as '0';
Step three, designing the loss functions of the perceptual generative adversarial network:
Step 3.1, design the target loss function LG of the generator Gθ using formula (1):
[formula (1) appears as an image in the original document]
In formula (1), L1 denotes the 1-norm loss and is obtained with formula (2); LPerp,j denotes the perceptual adversarial loss computed from the j-th hidden layer and is obtained with formula (3); α is a hyper-parameter, M is the number of hidden layers used in the feature extractor, and λj is the weight of the perceptual adversarial loss computed from the j-th hidden layer;
[formula (2) appears as an image in the original document]
In formula (2), N denotes the number of images in one batch, xi denotes the i-th low-resolution input image in the batch, and yi denotes the real target image corresponding to the i-th input image xi;
[formula (3) appears as an image in the original document]
In formula (3), dj denotes the features of the j-th hidden layer of the discriminator Dφ;
Step 3.2, design the target loss function LD of the discriminator Dφ using formula (4):
[formula (4) appears as an image in the original document]
In formula (4), [·]+ = max(0, ·), and m is another hyper-parameter;
Step four, training the perceptual generative adversarial network and reconstructing the relative dielectric constant of the scatterer:
inputting the low-resolution image x into the perceptual generative adversarial network, training the discriminator Dφ and the generator Gθ alternately during the back-propagation of the loss functions, and continuously adjusting the network parameters so that the difference between the reconstructed image output by the network and the real image keeps decreasing until the error no longer decreases, thereby obtaining an optimal model and realizing the reconstruction of the relative dielectric constant of the scatterer.
CN202010870574.XA 2020-08-26 2020-08-26 Electromagnetic backscattering imaging method based on perception generation countermeasure network Active CN111999731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010870574.XA CN111999731B (en) 2020-08-26 2020-08-26 Electromagnetic backscattering imaging method based on perception generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010870574.XA CN111999731B (en) 2020-08-26 2020-08-26 Electromagnetic backscattering imaging method based on perception generation countermeasure network

Publications (2)

Publication Number Publication Date
CN111999731A true CN111999731A (en) 2020-11-27
CN111999731B CN111999731B (en) 2022-03-22

Family

ID=73471067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010870574.XA Active CN111999731B (en) 2020-08-26 2020-08-26 Electromagnetic backscattering imaging method based on perception generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111999731B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378472A (en) * 2021-06-23 2021-09-10 合肥工业大学 Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network
CN114609631A (en) * 2022-03-08 2022-06-10 电子科技大学 Synthetic aperture radar undersampling imaging method based on generation countermeasure network
CN114626987A (en) * 2022-03-25 2022-06-14 合肥工业大学 Electromagnetic backscattering imaging method of deep expansion network based on physics
CN115731125A (en) * 2022-11-11 2023-03-03 贵州大学 Big data technology-based method for eliminating main beam effect of radio interference array
CN117973456A (en) * 2024-03-29 2024-05-03 安徽大学 Electromagnetic backscatter imaging method based on deep learning network model
CN117973456B (en) * 2024-03-29 2024-07-02 安徽大学 Electromagnetic backscatter imaging method based on deep learning network model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170068948A (en) * 2015-12-10 2017-06-20 김영욱 Radar and method for idendifying of target using the same
CN108492258A (en) * 2018-01-17 2018-09-04 天津大学 A kind of radar image denoising method based on generation confrontation network
CN108960159A (en) * 2018-07-10 2018-12-07 深圳市唯特视科技有限公司 A kind of thermal imaging face identification method based on generation confrontation network
CN110097524A (en) * 2019-04-22 2019-08-06 西安电子科技大学 SAR image object detection method based on fusion convolutional neural networks
CN110163809A (en) * 2019-03-31 2019-08-23 东南大学 Confrontation network DSA imaging method and device are generated based on U-net
US20190373293A1 (en) * 2019-08-19 2019-12-05 Intel Corporation Visual quality optimized video compression
CN110988818A (en) * 2019-12-09 2020-04-10 西安电子科技大学 Cheating interference template generation method for countermeasure network based on condition generation formula
CN111077523A (en) * 2019-12-13 2020-04-28 南京航空航天大学 Inverse synthetic aperture radar imaging method based on generation countermeasure network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170068948A (en) * 2015-12-10 2017-06-20 김영욱 Radar and method for idendifying of target using the same
CN108492258A (en) * 2018-01-17 2018-09-04 天津大学 A kind of radar image denoising method based on generation confrontation network
CN108960159A (en) * 2018-07-10 2018-12-07 深圳市唯特视科技有限公司 A kind of thermal imaging face identification method based on generation confrontation network
CN110163809A (en) * 2019-03-31 2019-08-23 东南大学 Confrontation network DSA imaging method and device are generated based on U-net
CN110097524A (en) * 2019-04-22 2019-08-06 西安电子科技大学 SAR image object detection method based on fusion convolutional neural networks
US20190373293A1 (en) * 2019-08-19 2019-12-05 Intel Corporation Visual quality optimized video compression
CN110988818A (en) * 2019-12-09 2020-04-10 西安电子科技大学 Cheating interference template generation method for countermeasure network based on condition generation formula
CN111077523A (en) * 2019-12-13 2020-04-28 南京航空航天大学 Inverse synthetic aperture radar imaging method based on generation countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANWEI PANG ET AL.: "Visual Haze Removal by a Unified Generative Adversarial Network", IEEE Transactions on Circuits and Systems for Video Technology *
JIAN Xianzhong et al.: "Compressed sensing image reconstruction method based on generative adversarial networks", Packaging Engineering *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378472A (en) * 2021-06-23 2021-09-10 合肥工业大学 Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network
CN113378472B (en) * 2021-06-23 2022-09-13 合肥工业大学 Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network
CN114609631A (en) * 2022-03-08 2022-06-10 电子科技大学 Synthetic aperture radar undersampling imaging method based on generation countermeasure network
CN114609631B (en) * 2022-03-08 2023-12-22 电子科技大学 Synthetic aperture radar undersampling imaging method based on generation countermeasure network
CN114626987A (en) * 2022-03-25 2022-06-14 合肥工业大学 Electromagnetic backscattering imaging method of deep expansion network based on physics
CN114626987B (en) * 2022-03-25 2024-02-20 合肥工业大学 Electromagnetic backscatter imaging method based on physical depth expansion network
CN115731125A (en) * 2022-11-11 2023-03-03 贵州大学 Big data technology-based method for eliminating main beam effect of radio interference array
CN117973456A (en) * 2024-03-29 2024-05-03 安徽大学 Electromagnetic backscatter imaging method based on deep learning network model
CN117973456B (en) * 2024-03-29 2024-07-02 安徽大学 Electromagnetic backscatter imaging method based on deep learning network model

Also Published As

Publication number Publication date
CN111999731B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN111999731B (en) Electromagnetic backscattering imaging method based on perception generation countermeasure network
RU2709437C1 (en) Image processing method, an image processing device and a data medium
Wei et al. Deep-learning schemes for full-wave nonlinear inverse scattering problems
US11449759B2 (en) Medical imaging diffeomorphic registration based on machine learning
CN107610193B (en) Image correction using depth-generated machine learning models
KR20200032651A (en) Apparatus for three dimension image reconstruction and method thereof
CN113436290A (en) Method and system for selectively removing streak artifacts and noise from images using a deep neural network
CN112912758A (en) Method and system for adaptive beamforming of ultrasound signals
US20220130084A1 (en) Systems and methods for medical image processing using deep neural network
CN116097302A (en) Connected machine learning model with joint training for lesion detection
US20220361848A1 (en) Method and system for generating a synthetic elastrography image
Zhang et al. CNN and multi-feature extraction based denoising of CT images
CN114581550B (en) Magnetic resonance imaging down-sampling and reconstruction method based on cross-domain network
CN113538616A (en) Magnetic resonance image reconstruction method combining PUGAN and improved U-net
CN113378472B (en) Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network
CN111507047A (en) Inverse scattering imaging method based on SP-CUnet
WO2021165053A1 (en) Out-of-distribution detection of input instances to a model
CN115601268A (en) LDCT image denoising method based on multi-scale self-attention generation countermeasure network
CN115880523A (en) Image classification model, model training method and application thereof
Cherian et al. A Novel AlphaSRGAN for Underwater Image Super Resolution.
CN114283058A (en) Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization
El-Shafai et al. Hybrid Single Image Super-Resolution Algorithm for Medical Images.
CN116843544A (en) Method, system and equipment for super-resolution reconstruction by introducing hypersonic flow field into convolutional neural network
CN114972332B (en) Bamboo laminated wood crack detection method based on image super-resolution reconstruction network
CN115482434A (en) Small sample high-quality generation method based on multi-scale generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant