CN116930884B - SAR deception jamming template generation and jamming method based on optical SAR image conversion - Google Patents

Info

Publication number: CN116930884B (application CN202311193660.1A)
Authority: CN (China)
Prior art keywords: layer, SAR, generator, optical, image
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN116930884A
Inventors: 田甜 (Tian Tian), 周峰 (Zhou Feng), 张宇成 (Zhang Yucheng), 郭欣仪 (Guo Xinyi), 樊伟伟 (Fan Weiwei)
Current and original assignee: Xidian University
Application filed by Xidian University
Priority: CN202311193660.1A
Publication of CN116930884A, then grant and publication of CN116930884B
Legal status: Active

Classifications

    • G01S7/38: Jamming means, e.g. producing false echoes
    • G01S13/90: Radar or analogous systems for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR]
    • G06N3/094: Neural networks; adversarial learning
    • G06V10/774: Image or video recognition using machine learning; generating sets of training patterns, e.g. bagging or boosting
    • G06V10/82: Image or video recognition using neural networks

Abstract

The invention discloses a SAR deception jamming template generation method based on optical SAR image conversion, comprising the following steps: acquiring an optical target image; inputting the optical target image into a trained generator to generate a false SAR target image at the pitch and azimuth angles of the optical target image, the generator being obtained by training an initial conversion generative adversarial network with multiple pairs of training samples and a target loss function, each pair of training samples comprising an optical sample image and a SAR sample image, the target loss function comprising a generator loss function and a discriminator loss function, and the generator loss function comprising a cycle-consistency loss function, a focal frequency loss function, and the generator-related term of a Wasserstein gradient-penalty adversarial loss function; and taking the generated false SAR target image as the deception jamming template. The invention can generate multi-pose, high-fidelity SAR target deception jamming templates from abundant optical target images, improving the diversity and quality of SAR deception jamming templates.

Description

SAR deception jamming template generation and jamming method based on optical SAR image conversion
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a SAR deception jamming template generation and jamming method based on optical SAR image conversion.
Background
Synthetic aperture radar (SAR) is a microwave remote-sensing radar that is widely applied in geological mapping, target monitoring, key-region reconnaissance, and other fields owing to its all-day, all-weather, high-resolution imaging capability. With the rapid development of SAR technology, various deception jamming techniques have been developed to protect critical regions or targets from malicious SAR surveillance. A high-quality SAR deception jamming template is a precondition for implementing effective SAR deception jamming, so high-fidelity, multi-pose, low-latency SAR deception jamming template generation techniques need to be studied.
Conventional SAR deception jamming template generation methods fall into two categories: methods based on measured acquisition data and methods based on electromagnetic computation. The former incurs high labor cost, high operational difficulty, and strict environmental requirements; the latter suffers from high geometric-modeling complexity and heavy computational-resource consumption. With the development of artificial intelligence, deep-learning-based SAR target image generation methods have emerged, offering a new approach to deception jamming template generation. Current deep-learning-based methods mainly employ generative adversarial networks (GANs) and their derivatives to generate SAR target deception jamming templates. However, these methods rely on SAR image datasets, and the available multi-target-type, multi-angle SAR target images are very limited, so the generated templates lack diversity and cannot adapt to varied deception jamming environments.
That is, SAR deception jamming template generation methods in the related art have the following problems:
1) Methods based on measured acquisition data or electromagnetic computation incur high template-acquisition cost and long generation time;
2) Methods that generate templates mainly from SAR image datasets yield insufficient template diversity, because the existing multi-target-type, multi-angle SAR target images are very limited, so the generated templates cannot adapt to varied deception jamming environments.
Disclosure of Invention
In order to solve the problems in the related art, the invention provides a SAR deception jamming template generation and jamming method based on optical SAR image conversion. The technical problems to be solved by the invention are addressed by the following technical solutions:
The invention provides a SAR deception jamming template generation method based on optical SAR image conversion, which comprises the following steps:
acquiring an optical target image;
inputting the optical target image into a trained generator to generate a false SAR target image at the pitch and azimuth angles of the optical target image; the generator is obtained by training an initial conversion generative adversarial network with multiple pairs of training samples and a target loss function; each pair of training samples comprises an optical sample image and a SAR sample image; the target loss function comprises a generator loss function and a discriminator loss function; the generator loss function comprises a cycle-consistency loss function, a focal frequency loss function, and the generator-related term of a Wasserstein gradient-penalty adversarial loss function;
and taking the false SAR target image as the deception jamming template.
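The three claimed steps can be sketched as a minimal pipeline; the stand-in generator below is purely hypothetical (a real system would load the trained network described in the embodiments):

```python
def generate_deception_template(optical_image, generator):
    """Steps of the method: run the trained optical-to-SAR generator on the
    optical target image and use the generated false SAR image directly as
    the deception jamming template."""
    fake_sar_image = generator(optical_image)
    return fake_sar_image  # the template is the generated false SAR image

# Hypothetical stand-in for the trained generator (illustration only).
stub_generator = lambda img: [[1.0 - px for px in row] for row in img]

optical = [[0.0, 0.5], [1.0, 0.25]]  # toy 2x2 "optical target image"
template = generate_deception_template(optical, stub_generator)
```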
In some embodiments, inputting the optical target image into the trained generator to generate the false SAR target image at the corresponding pitch and azimuth angles comprises:
passing the optical target image, after it is input into the trained generator, sequentially through a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer, and a sixth coding layer to generate coding features;
passing the coding features through a first decoding layer to generate a first decoding feature;
passing the first decoding feature and the output of the fifth coding layer jointly through a second decoding layer to generate a second decoding feature;
passing the second decoding feature and the output of the fourth coding layer jointly through a third decoding layer to generate a third decoding feature;
passing the third decoding feature and the output of the third coding layer jointly through a fourth decoding layer to generate a fourth decoding feature;
and passing the fourth decoding feature sequentially through a fifth decoding layer, a sixth decoding layer, and a combination layer to generate the false SAR target image.
In some embodiments, the first coding layer sequentially comprises: a first convolution layer, a selective kernel network (SK-Net) layer, an instance normalization layer, and a ReLU activation layer;
the second coding layer sequentially comprises: a second convolution layer, the SK-Net layer, the instance normalization layer, and the ReLU activation layer;
the third coding layer sequentially comprises: a third convolution layer, the SK-Net layer, the instance normalization layer, and the ReLU activation layer;
the fourth, fifth, and sixth coding layers have the same structure; the fourth coding layer comprises: a fourth convolution layer, the instance normalization layer, and the ReLU activation layer.
In some embodiments, the first decoding layer comprises, in order: a first deconvolution layer, an instance normalization layer, and a ReLU activation layer;
the second decoding layer sequentially includes: the first deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the third decoding layer sequentially includes: a second deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the fourth decoding layer sequentially includes: a third deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the fifth decoding layer and the sixth decoding layer each sequentially include: a fourth deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the combination layer sequentially comprises: a fifth convolution layer, the instance normalization layer, and a Tanh activation layer.
In some embodiments, before the inputting the optical target image into the trained generator to generate the false SAR target image corresponding to the pitch and azimuth angle of the optical target image, the method further comprises:
acquiring the pairs of training samples;
constructing the initial conversion generative adversarial network, which comprises: a first generator and a second generator, and a first discriminator and a second discriminator of the same structure; the first generator is used for generating a corresponding false SAR image from an input optical image, and the second generator is used for generating a corresponding false optical image from an input SAR image;
training the initial conversion generative adversarial network with the pairs of training samples and the target loss function to obtain a trained conversion generative adversarial network; the generator loss function is used for training the first generator and the second generator, and the discriminator loss function is used for training the first discriminator and the second discriminator;
and taking the trained first generator in the trained conversion generative adversarial network as the trained generator.
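A toy sketch of the cycle-consistency idea behind this two-generator training: the linear stand-ins for the generators below are illustrative assumptions, not the patent's CNN generators. When the two mappings invert each other, the L1 cycle loss vanishes:

```python
import numpy as np

# Toy, invertible stand-ins for the two generators (assumption for
# illustration; the real G1 and G2 are the CNNs described below).
G1 = lambda y: y * 2.0 + 1.0      # "optical -> SAR" mapping
G2 = lambda x: (x - 1.0) / 2.0    # "SAR -> optical" mapping (exact inverse)

def cycle_consistency_loss(y, x):
    """L1 cycle loss: y -> G1 -> G2 should return to y, and x -> G2 -> G1
    should return to x (partial preservation of the original image)."""
    return float(np.abs(G2(G1(y)) - y).mean() + np.abs(G1(G2(x)) - x).mean())

y = np.linspace(0, 1, 16).reshape(4, 4)   # toy optical sample image
x = np.linspace(1, 3, 16).reshape(4, 4)   # toy SAR sample image
loss = cycle_consistency_loss(y, x)       # ~0: the mappings invert each other
```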
In some embodiments, the generator loss function for training the first generator is expressed as:

$$\mathcal{L}_{G_1}=\mathcal{L}_{adv}(G_1)+\alpha\,\mathcal{L}_{cyc}+\beta\,\mathcal{L}_{freq}$$

$$\mathcal{L}_{adv}(G_1)=-\mathbb{E}_{y\sim Y_O}\left[D_1(G_1(y))\right]$$

$$\mathcal{L}_{cyc}=\mathbb{E}_{y\sim Y_O}\left[\lVert G_2(G_1(y))-y\rVert_1\right]+\mathbb{E}_{x\sim X_S}\left[\lVert G_1(G_2(x))-x\rVert_1\right]$$

$$\mathcal{L}_{freq}=\frac{1}{HW}\sum_{p=0}^{H-1}\sum_{q=0}^{W-1}w_{freq}(p,q)\,\left|F_{G_1(y)}(p,q)-F_{x}(p,q)\right|^{2}$$

wherein $\mathcal{L}_{G_1}$ denotes the generator loss function for training the first generator; $\mathcal{L}_{adv}(G_1)$ denotes the generator-related term of the Wasserstein gradient-penalty adversarial loss for the first generator; $\mathcal{L}_{freq}$ denotes the focal frequency loss; $\mathcal{L}_{cyc}$ denotes the cycle-consistency loss; $\alpha$ and $\beta$ are preset weighting coefficients; $G_1$ denotes the first generator, $G_2$ the second generator, and $D_1$ the first discriminator; $y$ denotes the optical sample image of the training pair used in each training step, and $x$ the SAR sample image forming a pair with $y$; $Y_O$ denotes the distribution of the optical sample images, and $X_S$ the distribution of the SAR sample images; $G_1(y)$ is the false SAR image generated from $y$; $G_2(G_1(y))$ is the optical image reconstructed from $G_1(y)$; $G_2(x)$ is the false optical image generated from $x$; $G_1(G_2(x))$ is the SAR image reconstructed from $G_2(x)$; $D_1(G_1(y))$ denotes the discrimination of $G_1(y)$ by $D_1$; $H$ and $W$ are the height and width of the optical or SAR sample image; $w_{freq}(p,q)$ is the weight of the spatial frequency at position coordinates $(p,q)$; $F_n(p,q)$ is the complex frequency value of image $n$ at $(p,q)$ after a two-dimensional discrete Fourier transform, where $n$ ranges over $x$, $y$, $G_1(y)$, and $G_2(x)$; $\mathbb{E}[\cdot]$ denotes expectation, $|\cdot|$ the modulus, and $\lVert\cdot\rVert_1$ the $L_1$ norm; the first generator is trained by minimizing $\mathcal{L}_{G_1}$.
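A minimal NumPy sketch of the focal frequency term, assuming it follows the usual focal-frequency-loss construction (spectra from a 2-D DFT, per-frequency weights $w_{freq}$ derived from the spectral error itself; the exact weighting used in the patent may differ):

```python
import numpy as np

def focal_frequency_loss(fake, real, alpha=1.0):
    """Compare the 2-D DFT spectra of a generated and a reference image,
    weighting each frequency (p, q) by w_freq(p, q). The dynamic weight
    err**alpha, normalized to [0, 1], is an assumption following the
    standard focal frequency loss formulation."""
    H, W = real.shape
    F_fake = np.fft.fft2(fake) / np.sqrt(H * W)   # scaled DFT of generated image
    F_real = np.fft.fft2(real) / np.sqrt(H * W)   # scaled DFT of reference image
    err = np.abs(F_fake - F_real)                 # |F_fake - F_real| per (p, q)
    w_freq = err ** alpha                         # dynamic spectrum weight
    w_freq = w_freq / (w_freq.max() + 1e-12)      # normalize weights to [0, 1]
    return float((w_freq * err ** 2).sum() / (H * W))
```

Identical images give zero loss, and any spectral mismatch contributes a positive, frequency-weighted penalty.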
In some embodiments, the discriminator loss function for training the first discriminator is expressed as:

$$\mathcal{L}_{D_1}=\mathbb{E}_{y\sim Y_O}\left[D_1(G_1(y))\right]-\mathbb{E}_{x\sim X_S}\left[D_1(x)\right]+\lambda\,\mathbb{E}_{\hat{x}\sim P_{\hat{x}}}\left[\left(\lVert\nabla_{\hat{x}}D_1(\hat{x})\rVert_2-1\right)^{2}\right]$$

wherein $\mathcal{L}_{D_1}$ denotes the discriminator loss function for training the first discriminator; $\lVert\cdot\rVert_2$ denotes the $L_2$ norm; $P_{\hat{x}}$ denotes the distribution between the SAR sample space and the false SAR sample space, the SAR sample space being the sample space formed by the SAR sample images of the training pairs, and the false SAR sample space being the sample space formed by the false SAR images generated by inputting the optical sample images of the training pairs into the first generator; $\hat{x}$ denotes a sample image obtained by sampling from $P_{\hat{x}}$; $D_1(\hat{x})$ denotes the discrimination of $\hat{x}$ by $D_1$; $D_1(x)$ denotes the discrimination of $x$; $\nabla$ denotes the gradient; $\lambda$ denotes the gradient-penalty coefficient; the first discriminator is trained by minimizing $\mathcal{L}_{D_1}$.
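A closed-form toy sketch of the Wasserstein gradient-penalty term: with a hypothetical linear critic $D(x)=w\cdot x$, the gradient of $D$ is $w$ everywhere, so the penalty can be evaluated exactly (the real $D_1$ is the PatchGAN network described below; the fixed interpolation coefficient is a simplification of WGAN-GP's uniform sampling):

```python
import numpy as np

# Linear "critic" D(x) = w . x, so grad_x D(x) = w everywhere.
w = np.array([0.6, 0.8])  # ||w||_2 = 1, so the penalty below is ~0

def critic(x):
    return float(w @ np.asarray(x))

def gradient_penalty(x_real, x_fake, lam=10.0):
    """Penalize (||grad D(x_hat)||_2 - 1)^2 at a point x_hat interpolated
    between a real and a generated sample."""
    eps = 0.5  # fixed for clarity; WGAN-GP draws eps uniformly at random
    x_hat = eps * np.asarray(x_real) + (1 - eps) * np.asarray(x_fake)
    grad = w   # exact gradient of the linear critic at x_hat
    return lam * (np.linalg.norm(grad) - 1.0) ** 2

def critic_loss(x_real, x_fake):
    # E[D(fake)] - E[D(real)] + gradient penalty (single-sample "expectations")
    return critic(x_fake) - critic(x_real) + gradient_penalty(x_real, x_fake)
```

Because the critic's gradient already has unit norm, the penalty term vanishes and the loss reduces to the Wasserstein estimate alone.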
In some embodiments, the number of convolution kernels for the first convolution layer is 32, the number of convolution kernels for the second convolution layer is 64, the number of convolution kernels for the third convolution layer is 128, and the number of convolution kernels for the fourth convolution layer is 256.
In some embodiments, the number of convolution kernels of the first deconvolution layer is 256, that of the second deconvolution layer is 128, that of the third deconvolution layer is 64, that of the fourth deconvolution layer is 32, and that of the fifth convolution layer in the combination layer is 1.
The invention also provides an interference method, which comprises the following steps:
acquiring a deception jamming template generated by the SAR deception jamming template generation method based on optical SAR image conversion described above;
and generating a false-target deception jamming signal from the deception jamming template, and deceiving the SAR with the false-target deception jamming signal.
The invention has the following beneficial technical effects:
the trained generator is obtained by training an initial conversion generation countermeasure network by adopting a target loss function, the target loss function comprises a generator loss function and a discriminator loss function, the generator loss function comprises a cyclic consistency loss function, a focusing frequency loss function and a Watson gradient penalty generation countermeasure loss function, the cyclic consistency loss function in the generator loss function can keep partial information of an original image while an image is converted, constraints are added to the image generation, the Watson gradient penalty generation countermeasure loss function can improve the stability of network training, and the focusing frequency loss can further improve the quality of the generated image from a frequency domain, so that the trained generator has better performance and is more stable, and the quality of the generated SAR image is higher and is more similar to a real SAR image; furthermore, the method and the device apply the generator to the aspect of generating the SAR deception jamming templates, and train the initial conversion generation countermeasure network by utilizing a plurality of pairs of training samples comprising the optical sample images and the corresponding SAR sample images, so that only the optical target images are required to be input when the SAR deception jamming templates are generated, thereby generating the SAR target deception jamming templates with multiple postures and high fidelity by utilizing rich optical target images, and improving the diversity and the quality of the SAR deception jamming templates.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a flowchart of a SAR deception jamming template generation method based on optical SAR image conversion according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the internal structure of an exemplary generator according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the internal structure of an exemplary discriminator according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, "a plurality" means two or more, unless explicitly defined otherwise.
In the description of this specification, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art can combine the different embodiments or examples described in this specification.
Although the invention is described herein in connection with various embodiments, those skilled in the art can understand and effect other variations to the disclosed embodiments in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Fig. 1 is a flowchart of the SAR deception jamming template generation method based on optical SAR image conversion according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
s101, acquiring an optical target image.
S102, inputting the optical target image into a trained generator to generate a false SAR target image at the pitch and azimuth angles of the optical target image; the generator is obtained by training an initial conversion generative adversarial network with multiple pairs of training samples and a target loss function; each pair of training samples comprises an optical sample image and a SAR sample image; the target loss function comprises a generator loss function and a discriminator loss function; the generator loss function comprises a cycle-consistency loss, a focal frequency loss, and the generator-related term of a Wasserstein gradient-penalty adversarial loss.
Here, the initial conversion generative adversarial network comprises: a first generator and a second generator, and a first discriminator and a second discriminator of the same structure; the first generator is used for generating a corresponding false SAR image from an input optical image, and the second generator is used for generating a corresponding false optical image from an input SAR image.
The first generator and the second generator each comprise an encoder and a decoder; however, the number of convolution kernels in the last convolution layer of the decoder differs between the two generators. Illustratively, as shown in Fig. 2, the encoder in the first generator comprises: a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer, and a sixth coding layer. The first coding layer sequentially comprises: a first convolution layer, a selective kernel network (SK-Net) layer, an instance normalization (IN) layer, and a rectified linear unit (ReLU) activation layer; specifically, the first convolution layer has 32 convolution kernels of size 4×4, stride 2, and padding 0. The second coding layer sequentially comprises: a second convolution layer, an SK-Net layer, an IN layer, and a ReLU activation layer; specifically, the second convolution layer has 64 convolution kernels of size 4×4, stride 2, and padding 0. The third coding layer sequentially comprises: a third convolution layer, an SK-Net layer, an IN layer, and a ReLU activation layer; specifically, the third convolution layer has 128 convolution kernels of size 4×4, stride 2, and padding 0. The fourth, fifth, and sixth coding layers have the same structure; the fourth coding layer comprises: a fourth convolution layer, an IN layer, and a ReLU activation layer; specifically, the fourth convolution layer has 256 convolution kernels of size 4×4, stride 2, and padding 0.
As shown in Fig. 2, the decoder in the first generator comprises, from top to bottom: a first decoding layer, a second decoding layer, a third decoding layer, a fourth decoding layer, a fifth decoding layer, a sixth decoding layer, and a combination layer. The first decoding layer sequentially comprises: a first deconvolution layer, an IN layer, and a ReLU activation layer; specifically, the first deconvolution layer has 256 convolution kernels of size 4×4, stride 2, and padding 0. The second decoding layer sequentially comprises: a first deconvolution layer, an IN layer, and a ReLU activation layer. The third decoding layer sequentially comprises: a second deconvolution layer, an IN layer, and a ReLU activation layer; specifically, the second deconvolution layer has 128 convolution kernels of size 4×4, stride 2, and padding 0. The fourth decoding layer sequentially comprises: a third deconvolution layer, an IN layer, and a ReLU activation layer; specifically, the third deconvolution layer has 64 convolution kernels of size 4×4, stride 2, and padding 0. The fifth and sixth decoding layers each sequentially comprise: a fourth deconvolution layer, an IN layer, and a ReLU activation layer; specifically, the fourth deconvolution layer has 32 convolution kernels of size 4×4, stride 2, and padding 0. The combination layer sequentially comprises: a fifth convolution layer, an IN layer, and a Tanh activation layer; specifically, the fifth convolution layer has 1 convolution kernel of size 4×4, stride 1, and padding 0.
As shown in Fig. 2, the output channels of the first decoding layer are cascaded with those of the fifth coding layer, the output channels of the second decoding layer with those of the fourth coding layer, and the output channels of the third decoding layer with those of the third coding layer. In Fig. 2, the first through fifth convolution layers are collectively labeled convolution layers, and the first through fourth deconvolution layers are collectively labeled deconvolution layers.
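Taking the stated convolution parameters literally (4×4 kernels, stride 2, padding 0), the feature-map sizes can be checked with the standard size formulas; note that zero padding yields odd intermediate sizes for a 256×256 input, so the trace below is bookkeeping under that assumption rather than the authors' actual implementation (which may use padding 1 for exact halving):

```python
def conv_out(n, k=4, s=2, p=0):
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k=4, s=2, p=0):
    """Output spatial size of a transposed convolution: (n - 1)s - 2p + k."""
    return (n - 1) * s - 2 * p + k

# Trace an assumed 256x256 input through the six stride-2 encoder layers.
sizes = [256]
for _ in range(6):
    sizes.append(conv_out(sizes[-1]))
# sizes -> [256, 127, 62, 30, 14, 6, 2]
```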
Here, the number of convolution kernels of the last convolution layer of the decoder of the second generator is 3.
It should be noted that the numbers and hierarchical structures of the coding and decoding layers described above, and the specific parameters of their convolution and deconvolution layers, are exemplary and may be set according to actual needs.
Combining the SK-Net layer with the other layers allows the features of the input image to be extracted better, so that more realistic images can be generated. Through channel cascades of this form, the invention transfers part of the original image's information directly to the decoder, reducing the loss of original image information and making the images produced by the generator more lifelike. That is, a generator with the above structure improves the fidelity of the generated images.
Here, the above S102 may be implemented by:
s1021, inputting the optical target image into a trained generator, and sequentially passing through a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer and a sixth coding layer to generate coding characteristics.
S1022, after the coding feature passes through the first decoding layer, a first decoding feature is generated.
S1023, after the output of the first decoding characteristic and the output of the fifth encoding layer jointly pass through the second decoding layer, generating a second decoding characteristic.
And S1024, after the outputs of the second decoding characteristic and the fourth coding layer jointly pass through the third decoding layer, generating a third decoding characteristic.
And S1025, the third decoding characteristic and the output of the third coding layer jointly pass through a fourth decoding layer to generate a fourth decoding characteristic.
And S1026, after the fourth decoding characteristic sequentially passes through the fifth decoding layer, the sixth decoding layer and a combination layer, generating a false SAR target image.
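The data flow of S1021-S1026 can be traced at the level of feature-map shapes. The sketch below uses the channel counts given in the embodiments, assumes a single-channel 256×256 input, and idealizes each layer to exact spatial halving or doubling (i.e., "same"-style padding) so the skip cascades line up:

```python
def downsample(shape, c_out):
    """One coding layer at shape level: halve H and W, set output channels."""
    c, h, w = shape
    return (c_out, h // 2, w // 2)

def upsample(shape, c_out):
    """One decoding layer at shape level: double H and W, set output channels."""
    c, h, w = shape
    return (c_out, h * 2, w * 2)

def cascade(a, b):
    """Channel-wise cascade of two equally sized feature maps."""
    (ca, h, w), (cb, h2, w2) = a, b
    assert (h, w) == (h2, w2), "skip features must match spatially"
    return (ca + cb, h, w)

x = (1, 256, 256)                         # assumed single-channel optical input
enc = []
for c in (32, 64, 128, 256, 256, 256):    # S1021: six coding layers
    x = downsample(x, c)
    enc.append(x)

d1 = upsample(enc[5], 256)                # S1022: first decoding layer
d2 = upsample(cascade(d1, enc[4]), 256)   # S1023: cascade with 5th coding layer
d3 = upsample(cascade(d2, enc[3]), 128)   # S1024: cascade with 4th coding layer
d4 = upsample(cascade(d3, enc[2]), 64)    # S1025: cascade with 3rd coding layer
d5 = upsample(d4, 32)                     # S1026: fifth decoding layer
d6 = upsample(d5, 32)                     #         sixth decoding layer
out = (1, d6[1], d6[2])                   #         combination layer -> 1 channel
```

The cascades work out because each decoder stage restores exactly the spatial size of the encoder stage it is paired with.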
Here, the first discriminator and the second discriminator each adopt a PatchGAN structure. As shown in Fig. 3, the first discriminator sequentially comprises: a first combined layer, a second combined layer, a third combined layer, a fourth combined layer, and a fifth layer. The first combined layer sequentially comprises: a second convolution layer and a leaky rectified linear unit (LReLU) activation layer; specifically, the second convolution layer has 64 convolution kernels of size 4×4, stride 2, and padding 0. The second combined layer sequentially comprises: a third convolution layer, an IN layer, and an LReLU activation layer. The third combined layer sequentially comprises: a fourth convolution layer, an IN layer, and an LReLU activation layer. The fourth combined layer sequentially comprises: a sixth convolution layer, an IN layer, and an LReLU activation layer; specifically, the sixth convolution layer has 512 convolution kernels of size 4×4, stride 1, and padding 0. The fifth layer comprises a fifth convolution layer; specifically, the fifth convolution layer has 1 convolution kernel of size 4×4, stride 1, and padding 0. In Fig. 3, the second, third, fourth, sixth, and fifth convolution layers are collectively labeled convolution layers.
It should be noted that the above description of the number of layers and the hierarchical structure of the combined layers and the fifth layer, as well as the specific parameters of the convolution layers therein, is exemplary; these may all be set according to actual needs.
Here, the input of the first generator is an optical image, and the output is a false SAR image corresponding to the optical image; the input of the second generator is a SAR image, and the output is a false optical image corresponding to the SAR image. The input of the first discriminator is the false SAR image and the SAR image, and the output is the discrimination values of the false SAR image and the SAR image; the input of the second discriminator is the false optical image and the optical image, and the output is the discrimination values of the false optical image and the optical image.
S103, taking the false SAR target image as a deception jamming template.
Here, the spoofing interference template is used to generate a false spoofing interference signal.
In some embodiments, before the step S102, the method further includes:
S201, acquiring a plurality of pairs of training samples.
Here, a plurality of different optical images may be acquired, together with the SAR image corresponding to each optical image, and each optical image and its corresponding SAR image may be formed into a pair of training samples (i.e., a training sample pair), where the optical image serves as the optical sample image and the corresponding SAR image serves as the SAR sample image. For example, the Altair FEKO electromagnetic analysis software and the 3ds Max graphic modeling software may be used to generate a plurality of different SAR image-optical image pairs, from which some pairs are selected as a training set and other pairs as a test set.
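The pairing and train/test split described above can be sketched as follows (the file names and the 80/20 split ratio are illustrative assumptions; the text does not specify them):

```python
import random

def build_sample_pairs(optical_images, sar_images, train_ratio=0.8, seed=0):
    """Pair each optical sample image with its corresponding SAR sample image
    and split the pairs into a training set and a test set."""
    assert len(optical_images) == len(sar_images)
    pairs = list(zip(optical_images, sar_images))   # pair before shuffling
    random.Random(seed).shuffle(pairs)              # shuffle pairs, not images
    n_train = int(len(pairs) * train_ratio)
    return pairs[:n_train], pairs[n_train:]

# Stand-ins for images rendered with 3ds Max (optical) and FEKO (SAR).
optical = [f"optical_{i}.png" for i in range(10)]
sar = [f"sar_{i}.png" for i in range(10)]
train_pairs, test_pairs = build_sample_pairs(optical, sar)
print(len(train_pairs), len(test_pairs))  # 8 2
```

Shuffling the zipped pairs rather than the two lists keeps each optical image bound to its own SAR image, which is what makes each element a valid training sample pair.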
S202, constructing an initial conversion generation countermeasure network; the initial conversion generation countermeasure network includes: the first generator and the second generator, and the first discriminator and the second discriminator with the same structure; the first generator is used for generating a corresponding false SAR image according to the input optical image, and the second generator is used for generating a corresponding false optical image according to the input SAR image.
S203, training the initial conversion generation countermeasure network by adopting the plurality of pairs of training samples and the target loss function to obtain a trained conversion generation countermeasure network; the generator loss function is used to train the first generator and the second generator, and the discriminator loss function is used to train the first discriminator and the second discriminator.
Specifically, the expression of the generator loss function for training the first generator is as follows:

L_{G_1} = -E_{y~Y_O}[D_1(G_1(y))] + β·L_freq(G_1) + α·L_cyc(G_1)

where L_freq(G_1) = (1/(H·W))·Σ_{p,q} w_freq(p,q)·|F_{G_1(y)}(p,q) - F_x(p,q)|² and L_cyc(G_1) = E_{y~Y_O}[||G_2(G_1(y)) - y||_1].

Specifically, the expression of the generator loss function for training the second generator is as follows:

L_{G_2} = -E_{x~X_S}[D_2(G_2(x))] + β·L_freq(G_2) + α·L_cyc(G_2)

where L_freq(G_2) = (1/(H·W))·Σ_{p,q} w_freq(p,q)·|F_{G_2(x)}(p,q) - F_y(p,q)|² and L_cyc(G_2) = E_{x~X_S}[||G_1(G_2(x)) - x||_1].

Specifically, the expression of the discriminator loss function L_{D_1} for training the first discriminator is as follows:

L_{D_1} = E_{y~Y_O}[D_1(G_1(y))] - E_{x~X_S}[D_1(x)] + E_{x̂~P_{x̂}}[(||∇_{x̂} D_1(x̂)||_2 - 1)²]

Specifically, the expression of the discriminator loss function L_{D_2} for training the second discriminator is as follows:

L_{D_2} = E_{x~X_S}[D_2(G_2(x))] - E_{y~Y_O}[D_2(y)] + E_{ŷ~P_{ŷ}}[(||∇_{ŷ} D_2(ŷ)||_2 - 1)²]

In the above formulas, L_{G_1} represents the generator loss function for training the first generator, and L_{G_2} represents the generator loss function for training the second generator; the terms -E_{y~Y_O}[D_1(G_1(y))] and -E_{x~X_S}[D_2(G_2(x))] are the generator-related terms of the Wasserstein gradient penalty generative adversarial loss functions of the first generator and the second generator respectively; L_freq(G_1) and L_freq(G_2) both represent focal frequency losses; L_cyc(G_1) and L_cyc(G_2) both represent cycle consistency loss functions; and β and α both represent preset weighting coefficients. G_1 represents the first generator, G_2 represents the second generator, D_1 represents the first discriminator, and D_2 represents the second discriminator. y represents each optical sample image in the training sample pairs used for training, x represents the SAR sample image forming a training sample pair with y, Y_O represents the distribution of the optical sample images in the training sample pairs used for training, and X_S represents the distribution of the SAR sample images in the training sample pairs used for training. G_1(y) represents the false SAR image generated from y, G_2(G_1(y)) represents the reconstructed optical image obtained from G_1(y), G_2(x) represents the false optical image generated from x, G_1(G_2(x)) represents the reconstructed SAR image obtained from G_2(x), D_1(G_1(y)) represents the discrimination of G_1(y) by D_1, and D_2(G_2(x)) represents the discrimination of G_2(x) by D_2. H is the height of the optical sample image or SAR sample image, W is the width of the optical sample image or SAR sample image, w_freq(p,q) represents the weight of the spatial frequency at the position coordinates (p,q), and F_n(p,q) represents the complex frequency value at the position coordinates (p,q) of the image n after two-dimensional discrete Fourier transform, where n stands for x, y, G_1(y) and G_2(x). E[·] denotes the expected value, |·| denotes the absolute value, ||·||_1 denotes the L1 norm, and ||·||_2 denotes the L2 norm. P_{x̂} represents the distribution between the SAR sample space and the false SAR sample space, where the SAR sample space is the sample space composed of the SAR sample images in the training sample pairs used for training, and the false SAR sample space is the sample space composed of the false SAR images generated after the optical sample images in the training sample pairs used for training are input into the first generator; x̂ represents a sample image obtained by sampling P_{x̂}, D_1(x̂) represents the discrimination of x̂, D_1(x) represents the discrimination of x, and ∇_{x̂} represents the gradient with respect to x̂. P_{ŷ} represents the distribution between the optical sample space and the false optical sample space, where the optical sample space is the sample space composed of the optical sample images in the training sample pairs used for training, and the false optical sample space is the sample space composed of the false optical images generated after the SAR sample images in the training sample pairs used for training are input into the second generator; ŷ represents a sample image obtained by sampling P_{ŷ}, D_2(ŷ) represents the discrimination of ŷ, D_2(y) represents the discrimination of y, and ∇_{ŷ} represents the gradient with respect to ŷ. Each generator and each discriminator is trained by minimizing its corresponding loss function.
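Under the definitions above, the individual loss terms can be sketched numerically. A minimal NumPy version (the focal weighting scheme is one common choice, and the gradient penalty is shown for a hand-coded linear critic so its gradient is available in closed form; both are illustrative assumptions, since in practice the weights and gradients come from the network via automatic differentiation):

```python
import numpy as np

def focal_frequency_loss(fake, real):
    """Focal frequency loss: weighted squared distance between 2-D DFT spectra,
    with the (normalized) spectrum error itself used as the weight w_freq(p, q)
    so that hard-to-synthesize frequencies are emphasized."""
    F_fake = np.fft.fft2(fake)
    F_real = np.fft.fft2(real)
    err = np.abs(F_fake - F_real) ** 2
    w = err / (err.max() + 1e-12)          # focal weighting (assumed scheme)
    H, W = real.shape
    return float((w * err).sum() / (H * W))

def cycle_consistency_loss(y, y_rec, x, x_rec):
    """L1 reconstruction error of both cycles: y -> G1 -> G2 and x -> G2 -> G1."""
    return float(np.abs(y_rec - y).mean() + np.abs(x_rec - x).mean())

def gradient_penalty_linear(w, x_real, x_fake, lam=1.0):
    """Gradient penalty for the linear critic D(x) = <w, x>: grad_x D = w at
    every point, so the penalty is lam * (||w||_2 - 1)^2 for any sample drawn
    between the real and fake distributions."""
    eps = 0.5
    _x_hat = eps * x_real + (1 - eps) * x_fake  # interpolated sample x-hat
    grad_norm = np.linalg.norm(w)               # ||grad D(x_hat)||_2 = ||w||_2
    return float(lam * (grad_norm - 1.0) ** 2)

img = np.random.default_rng(0).random((8, 8))
print(focal_frequency_loss(img, img))                # 0.0 for identical images
print(cycle_consistency_loss(img, img, img, img))    # 0.0 for perfect cycles
print(gradient_penalty_linear(np.ones(4) / 2.0, np.zeros(4), np.ones(4)))  # 0.0
```

The sanity checks mirror the intent of each term: a generator that reproduces the target spectrum and closes both cycles incurs zero frequency and cycle losses, and a critic whose gradient norm is exactly 1 incurs zero gradient penalty.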
It should be noted that, when training the conversion generation countermeasure network, the second discriminator is trained first, then the second generator is trained, then the first discriminator is trained, and finally the first generator is trained.
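The alternating update order described above can be sketched as one training step (the update functions are placeholders; a real implementation would compute the corresponding loss, backpropagate, and step the optimizer inside each):

```python
def train_step(update_fns, log):
    """One training step of the conversion generation countermeasure network:
    the networks are updated in the fixed order D2 -> G2 -> D1 -> G1."""
    for name in ("D2", "G2", "D1", "G1"):
        update_fns[name]()      # placeholder for: loss, backward pass, optimizer step
        log.append(name)

log = []
updates = {name: (lambda: None) for name in ("D1", "D2", "G1", "G2")}
for epoch in range(2):          # e.g. 200 epochs in the described setup
    train_step(updates, log)
print(log)  # ['D2', 'G2', 'D1', 'G1', 'D2', 'G2', 'D1', 'G1']
```

Updating each discriminator immediately before its paired generator keeps the adversarial signal fresh for the generator step.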
Here, training may be stopped when the training parameters reach a preset stopping condition, and the conversion generation countermeasure network obtained in the last training iteration is taken as the trained conversion generation countermeasure network; the training parameters and the preset stopping condition can be set according to actual needs. For example, when the training parameter is the number of training iterations, the preset stopping condition may be a preset number of training iterations (for example, 200 epochs (training rounds)), and the present application is not limited thereto.
Illustratively, when training the initial conversion generation countermeasure network, the batch size may be 1, adaptive moment estimation (Adaptive moment estimation, Adam) may be selected as the network optimization algorithm, and the initial learning rate of the network may be 0.0002.
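For reference, a single Adam update with the stated learning rate of 0.0002 looks as follows (the β1, β2 and ε values are the common defaults, which is an assumption; the text does not state them):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=2e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One adaptive-moment-estimation update of parameter vector w at step t."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2 for a few steps: the loss should decrease.
w = np.array([1.0, -2.0]); m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 101):
    w, m, v = adam_step(w, 2 * w, m, v, t)  # grad of ||w||^2 is 2w
print(np.sum(w ** 2) < 5.0)  # True: closer to the minimum than the start
```

In the GAN training loop, one such update is applied to each network's parameters in the D2, G2, D1, G1 order at every step.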
S204, taking the trained first generator in the trained conversion generation countermeasure network as the trained generator.
The invention also provides an interference method, which comprises the following steps:
s301, acquiring a deception jamming template generated by the SAR deception jamming template generation method based on the optical SAR image conversion.
S302, generating a false target deception jamming signal according to the deception jamming template, and decepting SAR by adopting the false target deception jamming signal.
Here, an existing deception jamming signal generation method may be adopted to generate the corresponding false target deception jamming signal according to the deception jamming template. For example, an existing method such as "a spurious large-scene SAR fast forwarding type spoofing interference" may be used.
The technical effects of the embodiments of the present invention are further described below by simulation experiment data.
(1) Experimental conditions
The simulation experiment hardware platform adopted by the invention: the GPU is an NVIDIA GeForce RTX 3090 with 24 GB of video memory; the CPU is an 11th-generation Intel i7-11700K with a main frequency of 3.6 GHz and 64 GB of memory. The simulation experiment software platform adopted by the invention: the operating system is Windows 11.
(2) Emulation content
1. Limited by the lack of optical image-SAR image sample pairs in actual scenarios, the 3ds Max and FEKO software were used in the experiments to generate the optical sample image dataset and the corresponding SAR sample image dataset, respectively. The optical sample image test dataset comprises: one optical image of tank 1 at azimuth angle 1 and pitch angle 1, and one optical image of tank 2 at azimuth angle 2 and pitch angle 2. Correspondingly, the SAR sample image test dataset comprises: one SAR image of tank 1 at azimuth angle 1 and pitch angle 1, and one SAR image of tank 2 at azimuth angle 2 and pitch angle 2.
2. In the network training step, training samples composed of the optical sample image dataset and the corresponding SAR sample image dataset are input into the optical-SAR conversion generation countermeasure network (the initial conversion generation countermeasure network described above), and the optical-SAR conversion generation countermeasure network is trained.
3. In the test stage, since the final purpose of the invention is to generate the SAR target deception jamming template, only one optical image of tank 1 (hereinafter referred to as test optical image a) and one optical image of tank 2 (hereinafter referred to as test optical image b) need to be input into the first generator of the trained conversion generation countermeasure network, which outputs one SAR image corresponding to each of test optical image a and test optical image b; actual comparison shows that the generated SAR images have extremely high fidelity.
4. In the deception jamming simulation experiment, three tank SAR target images are generated by using the generator in step 3 to serve as SAR deception jamming templates, and the post-deception SAR image is generated according to the SAR deception jamming templates and the SAR image of the original scene. For example, the SAR image of the original scene contains a land area and a lake area, and in the generated post-deception SAR image the three tank SAR images are embedded into the land area of the original scene. Comparing the SAR image of the original scene with the generated post-deception SAR image shows that the high-fidelity deception jamming templates are well embedded into the original scene, which verifies the effectiveness of the deception jamming template generation method and shows that it can be well applied in the SAR deception jamming field.
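The embedding of deception templates into the original scene described in step 4 can be sketched at the image level as follows (the placement coordinates and the max-combination rule are illustrative assumptions; a real system modulates the template into the retransmitted jamming signal rather than editing the image directly):

```python
import numpy as np

def embed_template(scene, template, row, col):
    """Insert a SAR deception template into a scene SAR image at (row, col),
    keeping the stronger scatterer wherever the two overlap."""
    out = scene.copy()
    h, w = template.shape
    region = out[row:row + h, col:col + w]
    out[row:row + h, col:col + w] = np.maximum(region, template)
    return out

scene = np.zeros((64, 64))      # stand-in for the original scene SAR image
tank = np.full((8, 8), 0.9)     # stand-in for one tank deception template
jammed = embed_template(scene, tank, 10, 20)
print(jammed[10:18, 20:28].max())  # template intensity appears in the land area
```

Repeating the call three times at different land-area coordinates yields the three-tank post-deception scene described in the experiment.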
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (10)

1. The SAR deception jamming template generation method based on the optical SAR image conversion is characterized by comprising the following steps of:
acquiring an optical target image;
inputting the optical target image into a trained generator to generate a false SAR target image at the pitch angle and azimuth angle corresponding to the optical target image; the generator is obtained by training an initial conversion generation countermeasure network with a plurality of pairs of training samples and a target loss function; each pair of training samples comprises an optical sample image and a SAR sample image; the target loss function comprises a generator loss function and a discriminator loss function; the generator loss function comprises a cycle consistency loss function, a focal frequency loss function, and a generator-related term of a Wasserstein gradient penalty generative adversarial loss function;
and taking the false SAR target image as a deception jamming template.
2. The method for generating SAR spoofing interference templates based on optical SAR image conversion according to claim 1, wherein said inputting the optical target image into a trained generator generates a false SAR target image corresponding to pitch and azimuth of the optical target image, comprising:
after the optical target image is input into a trained generator, the optical target image sequentially passes through a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer and a sixth coding layer to generate coding characteristics;
after the coding features pass through the first decoding layer, generating first decoding features;
the output of the first decoding feature and the output of the fifth coding layer jointly pass through a second decoding layer to generate a second decoding feature;
the output of the second decoding feature and the output of the fourth coding layer jointly pass through a third decoding layer to generate a third decoding feature;
the third decoding characteristic and the output of the third coding layer jointly pass through a fourth decoding layer to generate a fourth decoding characteristic;
and the fourth decoding characteristic sequentially passes through the fifth decoding layer, the sixth decoding layer and a combination layer to generate the false SAR target image.
3. The SAR spoofing interference template generation method based on optical SAR image conversion of claim 2, wherein the first coding layer sequentially comprises: a first convolution layer, a convolution kernel selection network layer, an instance normalization layer, and a ReLU activation layer;
the second coding layer sequentially comprises: a second convolution layer, the convolution kernel selection network layer, the instance normalization layer, and the ReLU activation layer;
the third coding layer sequentially comprises: a third convolution layer, the convolution kernel selection network layer, the instance normalization layer, and the ReLU activation layer;
the fourth coding layer, the fifth coding layer and the sixth coding layer are the same coding layer; the fourth coding layer includes: a fourth convolution layer, the instance normalization layer, and the ReLU activation layer.
4. The SAR spoofing interference template generation method based on optical SAR image conversion of claim 2, wherein the first decoding layer sequentially comprises: a first deconvolution layer, an instance normalization layer, and a ReLU activation layer;
the second decoding layer sequentially includes: the first deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the third decoding layer sequentially includes: a second deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the fourth decoding layer sequentially includes: a third deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the fifth decoding layer and the sixth decoding layer each sequentially include: a fourth deconvolution layer, the instance normalization layer, and the ReLU activation layer;
the combination layer sequentially comprises: a fifth convolution layer, the example normalization layer, and a Tanh activation layer.
5. The method for generating SAR spoofing interference template based on optical SAR image conversion of claim 1, further comprising, prior to said inputting the optical target image into a trained generator to generate a false SAR target image at pitch and azimuth for the optical target image:
acquiring the pairs of training samples;
constructing the initial conversion generation countermeasure network; the initial conversion generation countermeasure network comprises: the first generator and the second generator, and the first discriminator and the second discriminator with the same structure; the first generator is used for generating a corresponding false SAR image according to the input optical image, and the second generator is used for generating a corresponding false optical image according to the input SAR image;
training the initial conversion generation countermeasure network by adopting the plurality of pairs of training samples and the target loss function to obtain a trained conversion generation countermeasure network; the generator loss function is used for training the first generator and the second generator, and the discriminator loss function is used for training the first discriminator and the second discriminator;
and taking the trained first generator in the trained conversion generation countermeasure network as the trained generator.
6. The SAR deception jamming template generation method based on optical SAR image conversion of claim 5, wherein the expression of the generator loss function for training the first generator is as follows:

L_{G_1} = -E_{y~Y_O}[D_1(G_1(y))] + β·L_freq(G_1) + α·L_cyc(G_1)

where L_freq(G_1) = (1/(H·W))·Σ_{p,q} w_freq(p,q)·|F_{G_1(y)}(p,q) - F_x(p,q)|² and L_cyc(G_1) = E_{y~Y_O}[||G_2(G_1(y)) - y||_1];

wherein L_{G_1} represents the generator loss function for training the first generator, -E_{y~Y_O}[D_1(G_1(y))] represents the generator-related term of the Wasserstein gradient penalty generative adversarial loss function of the first generator, L_freq(G_1) represents the focal frequency loss function, L_cyc(G_1) represents the cycle consistency loss function, and β and α each represent a preset weighting coefficient; G_1 represents the first generator, G_2 represents the second generator, D_1 represents the first discriminator, y represents each optical sample image in the training sample pairs used for training, x represents the SAR sample image forming a training sample pair with y, Y_O represents the distribution of the optical sample images in the training sample pairs used for training, and X_S represents the distribution of the SAR sample images in the training sample pairs used for training; G_1(y) represents the false SAR image generated from y, G_2(G_1(y)) represents the reconstructed optical image obtained from G_1(y), G_2(x) represents the false optical image generated from x, G_1(G_2(x)) represents the reconstructed SAR image obtained from G_2(x), and D_1(G_1(y)) represents the discrimination of G_1(y) by D_1; H is the height of the optical sample image or SAR sample image, W is the width of the optical sample image or SAR sample image, w_freq(p,q) represents the weight of the spatial frequency at the position coordinates (p,q), F_n(p,q) represents the complex frequency value at the position coordinates (p,q) of the image n after two-dimensional discrete Fourier transform, and n stands for x, y, G_1(y) and G_2(x); E[·] denotes the expected value, |·| denotes the absolute value, ||·||_1 denotes the L1 norm, and the first generator is trained by minimizing L_{G_1}.
7. The SAR deception jamming template generation method based on optical SAR image conversion of claim 6, wherein the expression of the discriminator loss function for training the first discriminator is as follows:

L_{D_1} = E_{y~Y_O}[D_1(G_1(y))] - E_{x~X_S}[D_1(x)] + E_{x̂~P_{x̂}}[(||∇_{x̂} D_1(x̂)||_2 - 1)²]

wherein L_{D_1} represents the discriminator loss function for training the first discriminator, ||·||_2 denotes the L2 norm, and P_{x̂} represents the distribution between the SAR sample space and the false SAR sample space, the SAR sample space being the sample space composed of the SAR sample images in the training sample pairs used for training, and the false SAR sample space being the sample space composed of the false SAR images generated after the optical sample images in the training sample pairs used for training are input into the first generator; x̂ represents a sample image obtained by sampling P_{x̂}, D_1(x̂) represents the discrimination of x̂, D_1(x) represents the discrimination of x, ∇_{x̂} represents the gradient with respect to x̂, and the first discriminator is trained by minimizing L_{D_1}.
8. The SAR spoofing interference template generating method based on optical SAR image conversion of claim 3, wherein the number of convolution kernels of the first convolution layer is 32, the number of convolution kernels of the second convolution layer is 64, the number of convolution kernels of the third convolution layer is 128, and the number of convolution kernels of the fourth convolution layer is 256.
9. The SAR spoofing interference template generating method based on optical SAR image conversion of claim 4, wherein the number of convolution kernels of the first deconvolution layer is 256, the number of convolution kernels of the second deconvolution layer is 128, the number of convolution kernels of the third deconvolution layer is 64, the number of convolution kernels of the fourth deconvolution layer is 32, and the number of convolution kernels of the fifth deconvolution layer is 1.
10. A method of interference, comprising:
acquiring a spoofing interference template generated by the method of any one of claims 1 to 9;
and generating a false target deception jamming signal according to the deception jamming template, and decepting the SAR by adopting the false target deception jamming signal.
Publications (2)

Publication Number Publication Date
CN116930884A CN116930884A (en) 2023-10-24
CN116930884B true CN116930884B (en) 2023-12-26
