CN117706514B - Clutter elimination method, system and device based on a generative adversarial network - Google Patents

Clutter elimination method, system and device based on a generative adversarial network

Info

Publication number
CN117706514B
CN117706514B (application number CN202410155231.3A)
Authority
CN
China
Prior art keywords
image
free
clutter
simulated
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410155231.3A
Other languages
Chinese (zh)
Other versions
CN117706514A (en)
Inventor
侯斐斐
王浩冉
王一军
樊欣宇
刘彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202410155231.3A
Publication of CN117706514A
Application granted
Publication of CN117706514B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The application belongs to the field of image processing, and in particular relates to a clutter elimination method, system and device based on a generative adversarial network. The method comprises the following steps: obtaining a simulated clutter-free image, a measured background image and a simulated noisy B-scan image; obtaining a synthetic noise-free B-scan image based on the simulated clutter-free image and a third-generation style-based generative adversarial network (StyleGAN3); obtaining a synthetic noisy B-scan image based on the synthetic noise-free B-scan image and the measured background image; obtaining a paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image; obtaining a denoising model; obtaining an image loss based on the paired training data set and the denoising model; adjusting the denoising model based on the image loss to obtain an optimized training model; and obtaining a clutter-free image based on the optimized training model and an input image. The application has the effects of expanding the training data set and improving image processing quality.

Description

Clutter elimination method, system and device based on a generative adversarial network
Technical Field
The invention belongs to the field of image processing, and in particular relates to a clutter elimination method, system and device based on a generative adversarial network.
Background
Ground-penetrating radar (GPR) is a geophysical remote-sensing method with high resolution and fast detection capability. It is widely used as a nondestructive testing technique for evaluating the position and condition of underground objects such as concrete reinforcement, buried utilities and underground water pipes. In practice, however, GPR is susceptible to interference and clutter, including direct waves from the transmitting antenna, surface reflections and background scattering, all of which are detrimental to the visualization and detection of subsurface targets. Existing GPR clutter removal methods mainly separate clutter from target components by exploiting the pattern difference between linear clutter and hyperbolic target signatures; the techniques adopted include subspace-based methods, low-rank sparse matrix decomposition, sparse-representation-based methods and deep learning methods. Among these, deep-learning-based methods, by virtue of their powerful feature representation and learning capabilities, are used to tackle challenging GPR tasks such as object detection, feature description and inverse imaging.
In the related art, the dominant deep learning approach is GAN-based. A GAN is a tool for unsupervised learning that requires no paired labeled data; its generator network and discriminator network are trained against each other, which gives the model strong clutter removal capability.
In the above related art, most deep-learning GAN models suffer from a shortage of real GPR data, and the hyperbolic signatures of subsurface targets in GPR B-scans are easily confused with other background interference, so the clutter removal effect is not ideal.
Disclosure of Invention
The invention aims to provide a clutter elimination method, system, device and storage medium based on a generative adversarial network, which can provide a sufficient amount of realistic data for training a GAN model while enabling the model to distinguish hyperbolic features from the background, thereby improving the clutter removal effect.
A clutter elimination method based on a generative adversarial network, comprising:
acquiring a simulated noisy B-scan image, a simulated clutter-free image and a measured background image;
obtaining a synthetic noise-free B-scan image based on the simulated clutter-free image and a third-generation style-based generative adversarial network (StyleGAN3);
obtaining a synthetic noisy B-scan image based on the synthetic noise-free B-scan image and the measured background image;
obtaining a paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image;
obtaining a denoising model;
obtaining an image loss based on the paired training data set and the denoising model;
adjusting the denoising model based on the image loss to obtain an optimized training model;
and inputting the image to be de-cluttered into the optimized training model to obtain a clutter-removed image.
Specifically, the third-generation style-based generative adversarial network (StyleGAN3) is a derivative of the style-based generative adversarial network that mainly improves the network structure of the generator, splitting the generator into a mapping network and a synthesis network. The mapping network maps a latent code z to an intermediate vector w; after an affine transformation, w is injected into the synthesis network to control the style of the generator, enabling images of different styles to be generated. The discriminator learns to recognize the generator's fake images, driving the generator to produce more realistic ones.
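As a hedged illustration of the mapping-network idea described above (not the patent's actual implementation — the layer count, widths and activation are assumptions), the mapping from a latent code z to an intermediate vector w can be sketched as a small multilayer perceptron:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

class MappingNetwork:
    """Toy sketch: maps a latent code z to an intermediate style vector w,
    in the spirit of the style-based generator family. Dimensions are assumed."""
    def __init__(self, z_dim=64, w_dim=64, n_layers=4, seed=0):
        rng = np.random.default_rng(seed)
        dims = [z_dim] + [w_dim] * n_layers
        # A stack of fully connected layers (weights and biases)
        self.weights = [rng.normal(0, 0.02, (dims[i], dims[i + 1])) for i in range(n_layers)]
        self.biases = [np.zeros(dims[i + 1]) for i in range(n_layers)]

    def __call__(self, z):
        # Normalize z before mapping, then apply the fully connected stack
        h = z / (np.linalg.norm(z) + 1e-8)
        for W, b in zip(self.weights, self.biases):
            h = leaky_relu(h @ W + b)
        return h  # intermediate vector w

mapper = MappingNetwork()
z = np.random.default_rng(1).normal(size=64)
w = mapper(z)
```

In the full network, w would then be affine-transformed per layer and injected into the synthesis network to control the style of the generated image.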
The mapping network maps the input low-dimensional noise into a high-dimensional feature space, and the synthesis network combines the features produced by the mapping network with conditioning inputs, such as category labels and attribute vectors, to generate an image.
The denoising model comprises an encoder and a generator. The encoder compresses an input image into a low-dimensional representation, a vector or a feature map, used to extract the image's features; the generator decodes this low-dimensional representation into a denoised image, reconstructing the image.
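A minimal sketch of the encoder/generator pair just described, with linear layers standing in for the patent's (unspecified) convolutional architecture — the image size and code dimension are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: linear encoder and generator in place of the
# patent's denoising network. H, W and CODE are assumed dimensions.
H, W, CODE = 16, 16, 32
enc_W = rng.normal(0, 0.02, (H * W, CODE))   # encoder: image -> low-dim code
gen_W = rng.normal(0, 0.02, (CODE, H * W))   # generator: code -> reconstructed image

def encode(img):
    """Compress an input image into a low-dimensional representation."""
    return img.reshape(-1) @ enc_W

def generate(code):
    """Decode the low-dimensional representation back into an image."""
    return (code @ gen_W).reshape(H, W)

noisy = rng.normal(size=(H, W))
code = encode(noisy)       # low-dimensional feature vector
denoised = generate(code)  # reconstructed (denoised) image
```

In training, the two weight matrices would be optimized jointly so that `generate(encode(noisy))` approaches the clutter-free target.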
Optionally, the obtaining the synthetic noise-free B-scan image based on the simulated clutter-free image and the third-generation style-based generative adversarial network includes:
obtaining a random latent code based on the third-generation style-based generative adversarial network and the simulated clutter-free image;
obtaining a style code based on the random latent code and a mapping network;
obtaining a synthetic fake image based on the style code and a synthesis network;
and training the third-generation style-based generative adversarial network according to the synthetic fake image and the simulated clutter-free image to obtain the synthetic noise-free B-scan image.
Optionally, the training the third-generation style-based generative adversarial network according to the synthetic fake image and the simulated clutter-free image to obtain the synthetic noise-free B-scan image includes:
comparing the synthetic fake image with the simulated clutter-free image through a discriminator to obtain a discrimination result;
adjusting the network weights of the mapping network and the synthesis network based on the discrimination result to obtain a trained third-generation style-based generative adversarial network;
and obtaining the synthetic noise-free B-scan image based on the trained third-generation style-based generative adversarial network and the simulated clutter-free image.
Optionally, the obtaining the paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image includes:
taking the simulated clutter-free image and the synthetic noise-free B-scan image as a first data set;
taking the simulated noisy B-scan image and the synthetic noisy B-scan image as a second data set;
and obtaining the paired training data set based on the first data set and the second data set.
Optionally, the obtaining the image loss based on the paired training data set and the denoising model includes:
obtaining a denoised B-scan image based on the second data set and the denoising model;
and obtaining the image loss based on the denoised B-scan image and the first data set.
Optionally, the adjusting the denoising model based on the image loss to obtain an optimized training model includes:
updating the weights of the denoising model based on the image loss to obtain the optimized training model.
A clutter elimination system based on a generative adversarial network, comprising:
a first acquisition module, configured to acquire a simulated noisy B-scan image, a simulated clutter-free image and a measured background image;
a first synthesis module, configured to obtain a synthetic noise-free B-scan image based on the simulated clutter-free image and a third-generation style-based generative adversarial network;
a second synthesis module, configured to obtain a synthetic noisy B-scan image based on the synthetic noise-free B-scan image and the measured background image;
a data set generation module, configured to obtain a paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image;
a training module, configured to obtain an image loss based on the paired training data set and a denoising model;
an optimization module, configured to adjust the denoising model based on the image loss to obtain an optimized training model;
and an output module, configured to obtain a clutter-removed image based on the optimized training model and an input image.
Optionally, the data set generation module includes:
a first classification unit, configured to take the simulated clutter-free image and the synthetic noise-free B-scan image as a first data set;
a second classification unit, configured to take the simulated noisy B-scan image and the synthetic noisy B-scan image as a second data set;
and an association unit, configured to obtain the paired training data set based on the first data set and the second data set.
The beneficial effects of the invention are as follows:
The synthetic noise-free B-scan images produced by the third-generation style-based generative adversarial network enlarge and enrich the data set, so the denoising model can be trained on more samples, improving the model's image processing capability.
Training the denoising model on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image improves the model's denoising capability and image processing quality.
Drawings
FIG. 1 is a flow chart of one embodiment of a clutter elimination method based on a generative adversarial network according to the present invention;
FIG. 2 is a process flow diagram of a clutter elimination method based on a generative adversarial network according to the present invention;
FIG. 3 is a schematic view of the processing effect of the denoising model according to the present invention.
Detailed Description
As shown in fig. 1, the present invention includes:
S100, acquiring a simulated noisy B-scan image, a simulated clutter-free image and a measured background image.
Specifically, the simulated clutter-free image is generated with gprMax software using the finite-difference time-domain (FDTD) method. For example, the simulation domain is 250 × 50 × 0.25 cm in the x, y and z directions, with spatial discretization dx = dy = 0.5 cm and dz = 0.25 cm. An antenna with an operating frequency of 1.5 GHz serves as both transmitter and receiver; it is placed 5 cm above the ground, measures 17 × 10.8 × 4.5 cm, and is moved along the x direction in 2 cm steps. 115 A-scans are collected along the scan trajectory to form one B-scan image, and 1000 B-scan images are obtained with gprMax in this way. Clutter-only background B-scan images are obtained under the same conditions but without targets. The background B-scan is then subtracted to obtain the corresponding clutter-free images, yielding 1000 simulated clutter-free images and 1000 simulated noisy B-scan images.
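The background-subtraction step described above can be sketched in a few lines; the arrays here are synthetic stand-ins for gprMax output, and the sample count and toy target region are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 115, 512          # 115 A-scans per B-scan, as in the simulation setup

# Stand-ins for gprMax output: a with-target B-scan and a target-free background B-scan
background = rng.normal(0, 0.1, (n_samples, n_traces))   # clutter only (no target)
target_response = np.zeros((n_samples, n_traces))
target_response[200:210, 50:65] = 1.0                    # toy stand-in for a hyperbolic target
noisy_bscan = background + target_response               # simulated noisy B-scan

# Subtracting the background B-scan leaves the clutter-free (target-only) image
clutter_free = noisy_bscan - background
```

Repeating this over all 1000 simulated B-scans yields the paired simulated noisy / clutter-free images.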
In other words, the simulated noisy B-scan image is the raw with-target simulation obtained under the same conditions as the simulated clutter-free image, while the measured background image is obtained by capturing a target-free background in a real environment.
S110, obtaining a synthetic noise-free B-scan image based on the simulated clutter-free image and the third-generation style-based generative adversarial network.
The synthetic noise-free B-scan images are generated by learning hyperbolic features from the 1000 noise-free B-scans of the simulated data set. At present, few GPR data sets are publicly available, and training a deep-learning denoising model requires a large data set to avoid the poor training results caused by too few samples. Simulating GPR data with the FDTD method in gprMax is very time-consuming, whereas generative adversarial networks (GANs) are widely used for data augmentation; accordingly, the third-generation style-based GAN is employed to generate synthetic noise-free B-scan images. A certain number of simulated B-scan images are input for training, and similar synthetic B-scan images are output. In this embodiment, 4000 synthetic noise-free B-scan images are generated.
Specifically, the third-generation style-based generative adversarial network includes a generator and a discriminator. The simulated clutter-free image is input into the generator, which outputs a synthetic noise-free B-scan image; the synthetic image is fed into the discriminator together with the input image, and the discriminator attempts to identify the fake image and provide feedback. These steps are repeated many times as the generator and discriminator compete and improve jointly, training the network. After training is complete, inputting a simulated clutter-free image into the network yields a more realistic synthetic noise-free B-scan image. The synthetic noise-free B-scan image is a noise-free image.
The third-generation style-based generative adversarial network is a derivative of the style-based GAN that mainly improves the generator's network structure, splitting the generator into a mapping network (which converts a latent code z into an intermediate vector w) and a synthesis network (which generates the image). Its core idea is to map the latent code z to an intermediate vector w that, after an affine transformation, is injected into the synthesis network to control the generator's style. In detail: the mapping network maps the input low-dimensional noise into a high-dimensional feature space so that high-quality images can be generated; it typically consists of several fully connected or convolutional layers that learn a mapping from the input low-dimensional space. The synthesis network combines the features produced by the mapping network with conditioning information (e.g., class labels or attribute vectors) to generate a realistic image; its structure typically includes an encoder and a decoder. The encoder encodes the input conditioning information into feature vectors, and the decoder combines these with the features from the mapping network, generating the final image through operations such as deconvolution layers.
S120, obtaining a synthetic noisy B-scan image based on the synthetic noise-free B-scan image and the measured background image.
Specifically, the synthetic noisy B-scan image is obtained by fusing the synthetic noise-free B-scan image with the measured background image.
The denoising model requires real field data to avoid performance degradation. Because a B-scan image and its corresponding clutter-free counterpart cannot both be obtained in a field environment, a compromise is adopted: a target-free background is first collected in a real experimental environment and then fused with the 4000 synthetic noise-free B-scan images produced by the third-generation style-based GAN, yielding 4000 synthetic noisy B-scan images.
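The fusion step can be sketched as follows; the patent only states that the images are "fused", so the additive combination (and the array shapes) used here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
synthetic_clean = rng.normal(0, 1.0, (512, 115))       # stand-in for a GAN-generated noise-free B-scan
measured_background = rng.normal(0, 0.2, (512, 115))   # stand-in for a target-free field background

# Assumption: fusion is modeled as additive superposition of the measured
# background clutter onto the synthetic clean B-scan.
synthetic_noisy = synthetic_clean + measured_background
```

Applying this to each of the 4000 synthetic clean images with measured backgrounds produces the 4000 synthetic noisy B-scans.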
S130, obtaining a paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image.
Specifically, a paired training data set is needed because the denoising model must learn the mapping from noisy to noise-free during training, so the input data must comprise both a noisy image and a noise-free image; together they form the paired training data set. The simulated clutter-free images and the synthetic noise-free B-scan images serve as the noise-free side, and together with the simulated noisy and synthetic noisy B-scan images they constitute the paired training data set.
S140, obtaining an image loss based on the paired training data set and the denoising model.
Specifically, the denoising model processes a noisy B-scan image into a noise-free B-scan image. It includes an encoder and a generator: the encoder receives the input image and the generator produces the noise-free B-scan image. The encoder compresses the input image into a low-dimensional representation, generally used to extract image features, and the generator then decodes this representation into a denoised image, reconstructing a cleaner image. In this process the encoder and generator networks are trained jointly on the paired training data set, optimizing the network parameters by minimizing the loss function. During training, a noisy image is fed into the encoder to obtain a low-dimensional representation, the generator produces a denoised image, and the loss is computed by comparing the denoised image with the target image, thereby determining the optimal network parameters. The generator and encoder can be built with a GAN model (generative adversarial network).
The image loss is the pixel loss between the real image and the reconstructed image. A noisy B-scan image from the training data is input into the denoising model, the model outputs a noise-free B-scan image, and this output is compared with the corresponding target image to obtain the image loss.
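A small sketch of the pixel loss just described; the patent does not fix the norm, so the L1 (mean absolute difference) form used here is an assumption:

```python
import numpy as np

def pixel_loss(denoised, target):
    """Mean absolute pixel difference (L1) between the denoised output and
    the target image. L1 is an assumption; the patent only says 'pixel loss'."""
    return float(np.mean(np.abs(denoised - target)))

target = np.zeros((8, 8))             # stand-in for the clutter-free target
denoised = np.full((8, 8), 0.5)       # stand-in for the model's output
loss = pixel_loss(denoised, target)
```

During training, this scalar would be minimized by backpropagation to update the encoder and generator weights.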
S150, adjusting the denoising model based on the image loss to obtain an optimized training model.
Specifically, the encoder weights are updated according to the image loss so that the generator is inverted more accurately, yielding an optimized denoising model.
S160, obtaining a clutter-removed image based on the optimized training model and the input image.
Specifically, once trained, the denoising model has learned how to turn a noisy image into a denoised, i.e., clutter-removed, image; at this point the image to be processed, i.e., the input image, is fed into the optimized training model to obtain the processed clutter-free image.
In one implementation of this embodiment, step S110, i.e., obtaining the synthetic noise-free B-scan image based on the simulated clutter-free image and the third-generation style-based generative adversarial network, includes:
S200, obtaining a random latent code based on the third-generation style-based generative adversarial network and the simulated clutter-free image.
Specifically, the random latent code is the input signal of the third-generation style-based GAN and is typically obtained by randomly sampling from the latent code space.
S210, obtaining a style code based on the random latent code and the mapping network.
Specifically, the style code controls the style attributes of the generated image and is obtained by passing the latent code through the mapping network.
S220, obtaining a synthetic fake image based on the style code and the synthesis network.
S230, training the third-generation style-based generative adversarial network according to the synthetic fake image and the simulated clutter-free image to obtain the synthetic noise-free B-scan image.
Specifically, the style code is passed to the synthesis network, which generates the corresponding synthetic fake image from the style code and the simulated clutter-free image. The synthetic fake image and the input simulated clutter-free image are then evaluated by the discriminator, training the third-generation style-based GAN; finally, the trained network is used to sample random latent codes again, thereby obtaining the synthetic noise-free B-scan images.
In one implementation of this embodiment, step S230, i.e., training the third-generation style-based generative adversarial network according to the synthetic fake image and the simulated clutter-free image to obtain the synthetic noise-free B-scan image, includes:
S300, comparing the synthetic fake image with the simulated clutter-free image through the discriminator to obtain a discrimination result.
S310, adjusting the network weights of the mapping network and the synthesis network based on the discrimination result to obtain the trained third-generation style-based generative adversarial network.
S320, obtaining the synthetic noise-free B-scan image based on the trained third-generation style-based generative adversarial network and the simulated clutter-free image.
Specifically, the discrimination result is obtained by the discriminator comparing the synthetic fake image with the real image; it represents the difference between the real image and the image produced by the generator, the discriminator judging whether an input image is real or fake.
A random latent code is sampled and fed into the mapping network, a small neural network that maps the input latent code into an intermediate latent space to obtain the style code; in this way, the style attributes of the generated image can be controlled. The style code is then passed to the synthesis network to generate a fake image; the synthesis network uses adaptive instance normalization to dynamically adjust activations based on the latent code and contains convolutional layers arranged in progressively growing blocks. The fake image is fed into the discriminator together with a real image from the data set, and the discriminator attempts to identify the fake image and provide feedback. Based on this feedback, the weights of the mapping network and the synthesis network are updated by backpropagation, training the third-generation style-based GAN so that the generator produces more realistic images. These steps are repeated many times as the generator and discriminator compete and improve jointly. After training, the generator can produce new high-quality images by sampling new latent codes.
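The adaptive instance normalization mentioned above can be sketched as follows: normalize a feature map to zero mean and unit variance, then rescale and shift it with style-derived parameters. Treating the style parameters as given scalars is a simplification; in the real network they are produced by an affine transform of the style code:

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-8):
    """Adaptive instance normalization: whiten the feature map, then
    re-style it with scale/bias derived from the style code."""
    mean, std = x.mean(), x.std()
    return style_scale * (x - mean) / (std + eps) + style_bias

feat = np.random.default_rng(0).normal(2.0, 3.0, (4, 4))  # toy feature map
out = adain(feat, style_scale=1.5, style_bias=0.2)
```

After AdaIN, the feature map's statistics match the style parameters (standard deviation ≈ 1.5, mean ≈ 0.2), which is how the style code steers the generated image.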
The synthesis network is part of the generator; it is responsible for transcoding the input style code into feature activations suitable for generating the image. It controls the style attributes of the generated image by using the style code to adjust the intermediate-layer activations of the convolutional neural network in the generator.
In one implementation of this embodiment, step S130, i.e., obtaining the paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image, includes:
S400, taking the simulated clutter-free image and the synthetic noise-free B-scan image as a first data set.
S410, taking the simulated noisy B-scan image and the synthetic noisy B-scan image as a second data set.
S420, obtaining the paired training data set based on the first data set and the second data set.
Specifically, the first data set contains the clutter-free images and the second data set contains the noisy images; each noisy image and its clutter-free counterpart are input into the denoising model together as a pair, so that the model learns the mapping from cluttered to clutter-free.
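The pairing step can be sketched as a simple zip of the two data sets; the file names here are hypothetical placeholders, not from the patent:

```python
# Pairing the data sets: each noisy image is matched with its clutter-free
# counterpart so the denoising model can learn a noisy -> clean mapping.
# File names are hypothetical placeholders.
first_dataset = ["sim_clean_0001.png", "syn_clean_0001.png"]    # clutter-free images
second_dataset = ["sim_noisy_0001.png", "syn_noisy_0001.png"]   # noisy images

# (noisy, clean) training pairs, aligned by index
paired_training_set = list(zip(second_dataset, first_dataset))
```

This assumes the two lists are index-aligned, i.e., entry i of the second data set is the noisy version of entry i of the first.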
In one implementation of this embodiment, step S140, i.e., obtaining the image loss based on the paired training data set and the denoising model, includes:
S500, obtaining a denoised B-scan image based on the second data set and the denoising model.
S510, obtaining the image loss based on the denoised B-scan image and the first data set.
The adjusting the denoising model based on the image loss to obtain the optimized training model includes:
updating the weights of the denoising model based on the image loss to obtain the optimized training model.
Specifically, the second data set and the first data set are input into the denoising model together, but only the second data set is processed by the model to produce a denoised B-scan image; the denoised B-scan image is then compared with the real image to obtain the image loss, i.e., the pixel loss, and the encoder weights are updated according to this loss so that the generator is inverted more accurately. After training is complete, image conversion proceeds as follows: first, a source image is acquired and a latent code is obtained through the encoder; the mapping network then converts the latent code into a style code that controls the style attributes; the style code is then provided to the generator to synthesize an image in the target domain. In this way the source image can be converted into a new domain using the generator's knowledge: the encoder projects the image into the latent space, and the generator uses that code to determine the style attributes of the output domain.
As shown in fig. 2, the clutter elimination process is as follows: first, the simulated noisy B-scan image and the simulated clutter-free image are generated with simulation software such as gprMax, and the measured background image is collected; the data set is then enlarged with the third-generation style-based generative adversarial network, whose training makes the generated images more realistic; the measured background image is then fused in to obtain the synthetic noisy B-scan images; finally, the paired data set is assembled, the denoising model learns the noisy-to-noise-free mapping from the paired data set, and the model weights are updated via the loss function to achieve the best denoising effect. Fig. 3 shows the effect of the trained denoising model on an image, where (a) in fig. 3 is the input image and (b) is the output image.
A clutter elimination system based on a generative adversarial network, comprising:
a first acquisition module, configured to acquire a simulated noisy B-scan image, a simulated clutter-free image and a measured background image;
a first synthesis module, configured to obtain a synthetic noise-free B-scan image based on the simulated clutter-free image and a third-generation style-based generative adversarial network;
a second synthesis module, configured to obtain a synthetic noisy B-scan image based on the synthetic noise-free B-scan image and the measured background image;
a data set generation module, configured to obtain a paired training data set based on the simulated clutter-free image, the synthetic noisy B-scan image, the simulated noisy B-scan image and the synthetic noise-free B-scan image;
a training module, configured to obtain an image loss based on the paired training data set and a denoising model;
an optimization module, configured to adjust the denoising model based on the image loss to obtain an optimized training model;
and an output module, configured to obtain a clutter-removed image based on the optimized training model and an input image.
The data set generation module includes:
a first classification unit, configured to take the simulated clutter-free image and the synthetic noise-free B-scan image as a first data set;
a second classification unit, configured to take the simulated noisy B-scan image and the synthetic noisy B-scan image as a second data set;
and an association unit, configured to obtain the paired training data set based on the first data set and the second data set.
The embodiment of the application also discloses a terminal device comprising a memory and a processor, the memory storing a computer program executable on the processor; when the processor loads and executes the computer program, the above clutter elimination method based on a generative adversarial network is adopted.
The terminal device may be a computing device such as a desktop computer, a notebook computer or a cloud server. The terminal device includes, but is not limited to, a processor and a memory; for example, it may further include input/output devices, network access devices, a bus and the like.
The processor may be a Central Processing Unit (CPU), or of course, according to actual use, other general purpose processors, digital Signal Processors (DSP), application Specific Integrated Circuits (ASIC), ready-made programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc., and the general purpose processor may be a microprocessor or any conventional processor, etc., which is not limited in this respect.
The memory may be an internal storage unit of the terminal device, for example, a hard disk or a memory of the terminal device, or an external storage device of the terminal device, for example, a plug-in hard disk, a Smart Memory Card (SMC), a secure digital card (SD), or a flash memory card (FC) provided on the terminal device, or the like, and may be a combination of the internal storage unit of the terminal device and the external storage device, where the memory is used to store a computer program and other programs and data required by the terminal device, and the memory may be used to temporarily store data that has been output or is to be output, which is not limited by the present application.
The clutter elimination method based on the generation of the countermeasure network in the embodiment is stored in a memory of the terminal device and is loaded and executed on a processor of the terminal device, so that the clutter elimination method is convenient to use.
The embodiment of the application also discloses a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the clutter elimination method based on a generative adversarial network in the above embodiment is employed.
The computer program may be stored in a computer-readable medium. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, etc. The computer-readable medium includes any entity or device capable of carrying the computer program code: a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like; the computer-readable medium includes, but is not limited to, the above components.
Through the present computer-readable storage medium, the clutter elimination method based on a generative adversarial network in the above embodiments is stored in the medium and is loaded and executed by a processor, which facilitates the storage and application of the method.
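For reference, the generator split described in the first synthesis module — a mapping network that turns a random latent code into a style code, and a synthesis network that consumes that style code to produce a synthetic false image — can be sketched as below. The layer widths, activation, and modulation rule are illustrative assumptions, not the claimed architecture; style-based generators in practice use deeper MLPs and convolutional synthesis stages.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def mapping_network(z, layers):
    """Maps a random latent code z to a style code w through a small
    fully connected stack (layer count and widths are illustrative)."""
    h = z
    for weight in layers:
        h = np.maximum(h @ weight, 0.0)  # ReLU stands in for leaky ReLU
    return h

def synthesis_network(style_code, base_features):
    """Produces an image by modulating a base feature map with the
    style code; a real synthesis network is convolutional and applies
    the style at every resolution."""
    scale = 1.0 + np.tanh(style_code).mean()
    return scale * base_features

latent = rng.standard_normal(16)                   # random latent code
mlp = [0.1 * rng.standard_normal((16, 16)) for _ in range(2)]
style = mapping_network(latent, mlp)               # style code
fake = synthesis_network(style, rng.standard_normal((8, 8)))  # synthetic false image
```

In training, a discriminator would compare `fake` against simulated clutter-free images and the resulting discrimination result would drive weight updates in both networks, as described in claim 2.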
Those of ordinary skill in the art will appreciate that the discussion of any embodiment above is merely exemplary and is not intended to suggest that the scope of protection of the application is limited to these examples. Within the idea of the application, the technical features of the above embodiments, or of different embodiments, may also be combined, and the steps may be implemented in any order; many other variations of the different aspects of one or more embodiments of the application exist, and for brevity they are not provided in detail.
One or more embodiments of the present application are intended to embrace all such alternatives, modifications and variations as fall within their broad scope. Accordingly, any omissions, modifications, equivalents, improvements and the like made within the spirit and principles of one or more embodiments of the application are intended to be included within the scope of protection of the application.

Claims (9)

1. A clutter elimination method based on a generative adversarial network, comprising:
acquiring a simulated noisy B-scan image, a simulated clutter-free image, and a measured background image;
obtaining a synthesized noise-free B-scan image based on the simulated clutter-free image and a third-generation style-based generative adversarial network, wherein the third-generation style-based generative adversarial network is a derivative of the style-based generative adversarial network in which the network structure of the generator is improved so that the generator consists of a mapping network and a synthesis network;
obtaining a synthesized noisy B-scan image based on the synthesized noise-free B-scan image and the measured background image;
obtaining paired training data sets based on the simulated clutter-free image, the synthesized noisy B-scan image, the simulated noisy B-scan image, and the synthesized noise-free B-scan image;
obtaining an image loss based on the paired training data sets and a denoising model;
adjusting the denoising model based on the image loss to obtain an optimized training model;
inputting an image to be clutter-removed into the optimized training model to obtain a clutter-removed image;
wherein the obtaining a synthesized noise-free B-scan image based on the simulated clutter-free image and the third-generation style-based generative adversarial network comprises:
obtaining a random latent code based on the third-generation style-based generative adversarial network and the simulated clutter-free image;
obtaining a style code based on the random latent code and the mapping network;
obtaining a synthetic false image based on the style code and the synthesis network;
and training the third-generation style-based generative adversarial network according to the synthetic false image and the simulated clutter-free image to obtain the synthesized noise-free B-scan image.
2. The clutter elimination method based on a generative adversarial network according to claim 1, wherein the training the third-generation style-based generative adversarial network according to the synthetic false image and the simulated clutter-free image to obtain the synthesized noise-free B-scan image comprises:
comparing the synthetic false image with the simulated clutter-free image through a discriminator to obtain a discrimination result;
adjusting the network weights of the mapping network and the synthesis network based on the discrimination result to obtain a trained third-generation style-based generative adversarial network;
and obtaining the synthesized noise-free B-scan image based on the trained third-generation style-based generative adversarial network and the simulated clutter-free image.
3. The clutter elimination method based on a generative adversarial network according to claim 1, wherein the obtaining paired training data sets based on the simulated clutter-free image, the synthesized noisy B-scan image, the simulated noisy B-scan image, and the synthesized noise-free B-scan image comprises:
taking the simulated clutter-free image and the synthesized noise-free B-scan image as a first data set;
taking the simulated noisy B-scan image and the synthesized noisy B-scan image as a second data set;
and obtaining the paired training data sets based on the first data set and the second data set.
4. The clutter elimination method based on a generative adversarial network according to claim 3, wherein the obtaining an image loss based on the paired training data sets and a denoising model comprises:
obtaining a denoised B-scan image based on the second data set and the denoising model;
and obtaining the image loss based on the denoised B-scan image and the first data set.
5. The clutter elimination method based on a generative adversarial network according to claim 1, wherein the adjusting the denoising model based on the image loss to obtain an optimized training model comprises:
updating the weights of the denoising model based on the image loss to obtain the optimized training model.
6. A clutter elimination system based on a generative adversarial network, comprising:
a first acquisition module for acquiring a simulated noisy B-scan image, a simulated clutter-free image, and a measured background image;
a first synthesis module for obtaining a synthesized noise-free B-scan image based on the simulated clutter-free image and a third-generation style-based generative adversarial network, wherein the third-generation style-based generative adversarial network is a derivative of the style-based generative adversarial network in which the network structure of the generator is improved so that the generator consists of a mapping network and a synthesis network;
a second synthesis module for obtaining a synthesized noisy B-scan image based on the synthesized noise-free B-scan image and the measured background image;
a data set generation module for obtaining paired training data sets based on the simulated clutter-free image, the synthesized noisy B-scan image, the simulated noisy B-scan image, and the synthesized noise-free B-scan image;
a training module for obtaining an image loss based on the paired training data sets and a denoising model;
an optimization module for adjusting the denoising model based on the image loss to obtain an optimized training model;
and an output module for obtaining a clutter-removed image based on the optimized training model and an input image;
wherein the obtaining a synthesized noise-free B-scan image based on the simulated clutter-free image and the third-generation style-based generative adversarial network comprises:
obtaining a random latent code based on the third-generation style-based generative adversarial network and the simulated clutter-free image;
obtaining a style code based on the random latent code and the mapping network;
obtaining a synthetic false image based on the style code and the synthesis network;
and training the third-generation style-based generative adversarial network according to the synthetic false image and the simulated clutter-free image to obtain the synthesized noise-free B-scan image.
7. The clutter elimination system based on a generative adversarial network according to claim 6, wherein the data set generation module comprises:
a first classification unit for taking the simulated clutter-free image and the synthesized noise-free B-scan image as a first data set;
a second classification unit for taking the simulated noisy B-scan image and the synthesized noisy B-scan image as a second data set;
and an association unit for obtaining the paired training data sets based on the first data set and the second data set.
8. A terminal device comprising a memory and a processor, characterized in that the memory stores a computer program capable of running on the processor, and the method according to any one of claims 1 to 5 is employed when the processor loads and executes the computer program.
9. A computer-readable storage medium having a computer program stored therein, characterized in that the method according to any one of claims 1 to 5 is employed when the computer program is loaded and executed by a processor.
CN202410155231.3A 2024-02-04 2024-02-04 Clutter elimination method, system and equipment based on generation countermeasure network Active CN117706514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410155231.3A CN117706514B (en) 2024-02-04 2024-02-04 Clutter elimination method, system and equipment based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN117706514A CN117706514A (en) 2024-03-15
CN117706514B true CN117706514B (en) 2024-04-30

Family

ID=90146479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410155231.3A Active CN117706514B (en) 2024-02-04 2024-02-04 Clutter elimination method, system and equipment based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN117706514B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345469A (en) * 2018-09-07 2019-02-15 苏州大学 It is a kind of that speckle denoising method in the OCT image of confrontation network is generated based on condition
CN111626961A (en) * 2020-05-29 2020-09-04 中国人民解放军海军航空大学 Radar image clutter suppression method and system based on generation countermeasure network
CN114998137A (en) * 2022-06-01 2022-09-02 东南大学 Ground penetrating radar image clutter suppression method based on generation countermeasure network
CN117368877A (en) * 2023-10-19 2024-01-09 电子科技大学 Radar image clutter suppression and target detection method based on generation countermeasure learning
CN117409192A (en) * 2023-12-14 2024-01-16 武汉大学 Data enhancement-based infrared small target detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pan Zongxu; An Quanzhi; Zhang Bingchen. Research progress in radar image target recognition based on deep learning. Scientia Sinica Informationis, 2019-12-20, (12), full text. *
Dai Yuanqiang. Research on ionospheric clutter suppression methods for high-frequency surface wave radar based on neural networks. 2023, full text. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant