CN112415514A - Target SAR image generation method and device - Google Patents

Target SAR image generation method and device

Info

Publication number
CN112415514A
CN112415514A
Authority
CN
China
Prior art keywords
sar
image
images
generated
real
Prior art date
Legal status
Granted
Application number
CN202011278930.5A
Other languages
Chinese (zh)
Other versions
CN112415514B (en)
Inventor
翟佳
陈�峰
董毅
彭实
贾雨生
谢晓丹
Current Assignee
Beijing Institute of Environmental Features
Original Assignee
Beijing Institute of Environmental Features
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Environmental Features filed Critical Beijing Institute of Environmental Features
Priority to CN202011278930.5A
Publication of CN112415514A
Application granted
Publication of CN112415514B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9094Theoretical aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a method and a device for generating a target SAR image, together with computer equipment and a computer-readable storage medium. The method comprises the following steps: acquiring real SAR image data to form a training set; selecting N SAR real images with consecutive azimuth angles from a group of training samples, and extracting features with a convolutional neural network; inputting the single-image features of the first SAR real image and the overall relation features of the N consecutive SAR real images into the generator of a generative adversarial network to obtain N-1 SAR generated images; comparing, through the discriminator of the generative adversarial network, the features of the N-1 SAR generated images with those of the corresponding N-1 SAR real images, and measuring the similarity between the SAR generated images and the SAR real images with a loss function; and, once training is complete, using the trained generative adversarial network to randomly generate SAR generated images. The invention realizes extrapolation generation of SAR image data so as to complete and expand the available data volume.

Description

Target SAR image generation method and device
Technical Field
The invention relates to the technical field of target detection and identification, in particular to a target SAR image generation method and device, computer equipment and a computer readable storage medium.
Background
Research on the electromagnetic scattering properties of targets has important applications in electronic countermeasures, stealth design, target detection and identification, and other fields. At present, for most targets with conducting surface materials, such as vehicles, ships and airplanes, theoretical modeling and parametric modeling methods can give good results, but modeling the electromagnetic scattering characteristics of targets with complex shapes, materials and structures remains very difficult; for such targets, electromagnetic scattering characteristic data can only be obtained through laboratory measurement. However, laboratory measurement is costly and time-consuming, and the measurable frequency/angle/polarization range is limited by the measurement conditions, so it is often difficult to meet the requirements of practical applications. It therefore remains difficult to obtain complete electromagnetic scattering data efficiently and at low cost for targets of complex shape, material and structure.
Disclosure of Invention
The invention aims to provide a target SAR image data generation method based on a deep implicit model and a probabilistic graphical model, so as to address, at least in part, the problems of unstable generated images and weak robustness that existing generation methods exhibit in SAR image extrapolation.
In order to solve the above technical problem, the invention provides a target SAR image generation method, which comprises the following steps:
S1, acquiring real SAR image data to form a training set; the training set comprises a plurality of groups of training samples, each group of training samples comprises 2N SAR real images with the same pitch angle and consecutive azimuth angles, and N is an integer greater than or equal to 3;
S2, selecting N SAR real images with consecutive azimuth angles from a group of training samples, and extracting features with a convolutional neural network, wherein the features comprise the overall relation features of the N SAR real images and the single-image features of each SAR real image;
S3, inputting the single-image features of the first SAR real image extracted in step S2 and the overall relation features of the N consecutive SAR real images into the generator of a generative adversarial network to obtain N-1 SAR generated images, wherein the azimuth angles of the N-1 SAR generated images are consecutive and follow that of the first SAR real image; the generative adversarial network is constructed by combining a deep implicit model with a probabilistic graphical model, and the relations between the spatial features of SAR image data samples are modeled with a Bayesian network;
S4, comparing, through the discriminator of the generative adversarial network, the features of the N-1 SAR generated images obtained by the generator in step S3 with those of the corresponding N-1 SAR real images, and measuring the similarity between the SAR generated images and the SAR real images with a loss function;
S5, judging whether training is finished, and if not, returning to step S2;
and S6, obtaining the trained generative adversarial network, and randomly generating SAR generated images.
Preferably, in step S4, when the loss function is used to measure the similarity between the SAR generated image and the SAR real image, a penalty term based on mutual information is introduced into the loss function, where the mutual information is expressed as:

I(X; Y) = \sum_{x \in X} \sum_{y \in Y} P_{XY}(x, y) \log \frac{P_{XY}(x, y)}{P_X(x) P_Y(y)}

wherein X and Y represent the SAR real image and the SAR generated image respectively, x is a pixel gray level in X, y is a pixel gray level in Y, P_{XY}(x, y) is the joint probability density of X and Y, and P_X(x) and P_Y(y) are the marginal probability density functions of X and Y, respectively.
Preferably, the target SAR image generation method further includes:
and S7, evaluating the similarity of the SAR generated image generated randomly and the SAR real image by adopting the structural similarity index.
Preferably, in step S7, when the structural similarity index is used to evaluate the similarity between a randomly generated SAR generated image and an SAR real image, the expression is:

SSIM(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

where SSIM(x, y) is the structural similarity between image x and image y; \mu_x and \mu_y are the means of images x and y; \sigma_x^2 and \sigma_y^2 are their variances; \sigma_{xy} is the covariance of images x and y; and C_1 and C_2 are constants.
Preferably, step S4 further includes: if the similarity between an SAR generated image and the corresponding SAR real image is lower than a set threshold, storing the corresponding training sample into a fine-tuning sample set;
step S6 further includes: after obtaining the trained generative adversarial network and before randomly generating SAR generated images, performing fine-tuning training on the generative adversarial network with the training samples in the fine-tuning sample set.
The invention also provides a target SAR image generation device, which comprises:
a training set construction unit for acquiring real SAR image data to form a training set; the training set comprises a plurality of groups of training samples, each group of training samples comprises 2N SAR real images with the same pitch angle and consecutive azimuth angles, and N is an integer greater than or equal to 3;
a model training unit for performing the following steps:
A. selecting N SAR real images with consecutive azimuth angles from a group of training samples, and extracting features with a convolutional neural network, wherein the features comprise the overall relation features of the N SAR real images and the single-image features of each SAR real image;
B. inputting the single-image features of the first SAR real image extracted in step A and the overall relation features of the N consecutive SAR real images into the generator of a generative adversarial network to obtain N-1 SAR generated images, wherein the azimuth angles of the N-1 SAR generated images are consecutive and follow that of the first SAR real image; the generative adversarial network is constructed by combining a deep implicit model with a probabilistic graphical model, and the relations between the spatial features of SAR image data samples are modeled with a Bayesian network;
C. comparing, through the discriminator of the generative adversarial network, the features of the N-1 SAR generated images obtained by the generator in step B with those of the corresponding N-1 SAR real images, and measuring the similarity between the SAR generated images and the SAR real images with a loss function;
D. judging whether training is finished, and if not, returning to step A;
and an image generation unit for obtaining the trained generative adversarial network and randomly generating SAR generated images.
Preferably, when the model training unit measures the similarity between the SAR generated image and the SAR real image with a loss function, a penalty term based on mutual information is introduced into the loss function, where the mutual information is expressed as:

I(X; Y) = \sum_{x \in X} \sum_{y \in Y} P_{XY}(x, y) \log \frac{P_{XY}(x, y)}{P_X(x) P_Y(y)}

wherein X and Y represent the SAR real image and the SAR generated image respectively, x is a pixel gray level in X, y is a pixel gray level in Y, P_{XY}(x, y) is the joint probability density of X and Y, and P_X(x) and P_Y(y) are the marginal probability density functions of X and Y, respectively.
Preferably, the target SAR image generation apparatus further includes:
and the model evaluation unit is used for evaluating the similarity between the SAR generated image generated randomly and the SAR real image by adopting the structural similarity index.
The invention also provides computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of any one of the target SAR image generation methods when executing the computer program.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the target SAR image generation method of any of the preceding claims.
The technical scheme of the invention has the following advantages. The invention provides a target SAR image generation method and device, computer equipment and a computer-readable storage medium. The method is based on analysis of the feature distribution of electromagnetic scattering data with a convolutional neural network: the convolutional neural network is used to extract and analyze the features of the electromagnetic scattering data (i.e., SAR image data) and to model SAR image features. The invention provides an electromagnetic scattering data generation method that is highly robust and can generate high-confidence data; it applies artificial-intelligence generation methods to the field of target electromagnetic scattering characteristic modeling, uses artificial intelligence to expand the data range, and can alleviate the problem of incomplete SAR image data.
Drawings
FIG. 1 is a schematic diagram illustrating steps of a target SAR image generation method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a target SAR image generation method in an embodiment of the present invention;
FIGS. 3(a) and 3(b) illustrate the extrapolation results of a generative adversarial network trained with a conventional GAN loss function and without fine-tuning training, wherein FIG. 3(a) is an SAR real image and FIG. 3(b) is an SAR generated image;
FIGS. 4(a) and 4(b) illustrate the extrapolation results of a generative adversarial network trained with a conventional GAN loss function and with fine-tuning training, wherein FIG. 4(a) is an SAR real image and FIG. 4(b) is an SAR generated image;
fig. 5(a) and 5(b) illustrate the extrapolation effect of a target SAR image generation method in the embodiment of the present invention, where fig. 5(a) is a SAR real image, and fig. 5(b) is a SAR generated image.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1 and fig. 2, a method for generating a target SAR image according to an embodiment of the present invention includes the following steps:
s1, acquiring real SAR image data to form a training set; the training set comprises a plurality of groups of training samples, each group of training samples comprises 2N SAR real images with the same pitch angle and continuous azimuth angles, and N is an integer greater than or equal to 3. N is preferably in the range of 3 to 6, more preferably 4.
Step S1 is to group the real SAR image data, the SAR real images in each set of training samples have a fixed order, the pitch angles of the respective SAR real images are the same, and the azimuth angles are continuously changed in order. For example, when N is 4, the images of 8 adjacent azimuth angles are combined in step S1, and the sequence number of the image in each set of training samples is from 1 to 8 corresponding to 8 azimuth angles that change continuously.
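The grouping described above can be sketched in a few lines; the function and variable names are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch of the grouping in step S1: images of one target at a
# fixed pitch angle, sorted by azimuth, are split into groups of 2N
# consecutive views. Function and variable names are illustrative, not from
# the patent.
def build_training_groups(azimuth_sorted_images, n=4):
    """Split an azimuth-ordered sequence into disjoint groups of 2N views."""
    group_size = 2 * n
    return [
        azimuth_sorted_images[start:start + group_size]
        for start in range(0, len(azimuth_sorted_images) - group_size + 1, group_size)
    ]

# With N = 4, sixteen consecutive azimuth views yield two groups of eight,
# each internally ordered by azimuth (sequence numbers 1 to 8 in the text).
images = [f"az_{i:03d}" for i in range(16)]
groups = build_training_groups(images, n=4)
```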
S2, selecting N SAR real images with consecutive azimuth angles from a group of training samples, and extracting features with a convolutional neural network, wherein the extracted features include the overall relation features (i.e., global features) of the N selected consecutive SAR real images and the single-image features (i.e., local or specific features) of each selected SAR real image.
Step S2 selects a subset of the SAR real images with consecutive azimuth angles and extracts features from them. The invention considers both the characteristics of a single image and the association between multiple consecutive images, so two kinds of features need to be extracted: the overall features of the N consecutive images (containing the relations between consecutive images), and the specific features of each individual image. This treatment takes both the overall and the image-specific characteristics into account and can improve the confidence of the generated images. For a concrete implementation of feature extraction with a convolutional neural network, reference may be made to the prior art, which is not repeated here.
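As a purely illustrative stand-in for the convolutional feature extractor (the patent defers to prior-art CNNs), a single fixed 3x3 filter can show the two kinds of features involved: a per-image feature map, and a feature map computed over the stacked N views. The filter choice and all names are assumptions of this sketch:

```python
import numpy as np

# Illustrative stand-in for the feature extractor: a fixed 3x3 Laplacian
# filter plays the role of a convolutional layer. extract_single() gives a
# per-image ("single-image") feature map; extract_global() averages the N
# consecutive views first, so its output also reflects their joint structure.
KERNEL = np.array([[0.0, 1.0, 0.0],
                   [1.0, -4.0, 1.0],
                   [0.0, 1.0, 0.0]])

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2-D correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_single(img):
    return conv2d_valid(img, KERNEL)

def extract_global(image_stack):
    return conv2d_valid(np.mean(image_stack, axis=0), KERNEL)

rng = np.random.default_rng(0)
views = rng.random((4, 16, 16))         # N = 4 consecutive views, 16x16 pixels
single_feat = extract_single(views[0])  # specific feature of one image
global_feat = extract_global(views)     # overall feature of the N views
```

In a real implementation both extractors would be learned convolutional networks; the point here is only the split between per-image and cross-image features.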
S3, inputting the single-image features of the first of the N consecutive SAR real images extracted in step S2 and the overall relation features of the N consecutive SAR real images into the generator of a generative adversarial network to obtain N-1 SAR generated images, wherein the azimuth angles of the N-1 SAR generated images are consecutive and follow that of the first SAR real image; the generative adversarial network is constructed by combining a deep implicit model with a probabilistic graphical model, and the relations between the spatial features of SAR image data samples are modeled on the basis of a Bayesian network.
The invention is based on combining a deep implicit model and a probabilistic graphical model. After modeling the relations between the spatial feature vectors of SAR image samples, for a given target, SAR images with the same pitch angle and different azimuth angles are used as training samples; the method attempts to fit the distribution of these training samples and generate SAR images at new azimuth angles, thereby realizing extrapolation generation of SAR images. In step S3, the azimuth angles of the N-1 SAR generated images produced by the generative adversarial network are consecutive and follow that of the first of the N consecutive SAR real images extracted in step S2. Taking N = 4 as an example: if in step S2 the 4 consecutive SAR real images with sequence numbers 2-5 are selected for feature extraction, then in step S3 the single-image features of the SAR real image with sequence number 2 and the overall relation features of the 4 consecutive SAR real images with sequence numbers 2-5 are input into the generator of the generative adversarial network to obtain 3 SAR generated images; these correspond to sequence numbers 3-5 and, together with the SAR real image with sequence number 2, form 4 images with the same pitch angle and consecutive azimuth angles.
By combining a deep implicit model with a probabilistic graphical model, the method designs a generative adversarial network model for SAR image generation. The model comprises a generator (also called the generative network) and a discriminator (also called the discriminative network); within the model, a Bayesian network represents the structure among variables, and a deep implicit likelihood function is used to model complex data.
Considering that a Bayesian network has local properties, i.e., dependency relations exist among its variables, the method applies this to SAR images: a group of SAR images with consecutive azimuth angles is modeled as variables with dependency relations in a Bayesian network, so that, given the SAR image at a certain angle, several SAR images at adjacent angles can be obtained from the dependency relations. Given a dependency structure, the dependency functions between variables can be parameterized as deep neural networks to fit complex data. Because the model itself is then highly nonlinear, the posterior probability becomes difficult to compute. To solve this problem, a neural network can be used to approximate the posterior probability, i.e., to estimate an approximate distribution of the posterior.
Let X denote the observable variables, with observable samples x_j ∈ X, j = 1, 2, …, N, where N is the number of observable samples; let Z denote the latent (unobservable) variables, with latent samples z_i ∈ Z. Let G denote the associated directed acyclic graph (i.e., the Bayesian network), and let p_G(X, Z) denote the joint probability distribution of the observable variables X and the latent variables Z. Owing to the local structure of the Bayesian network, this joint distribution can be decomposed as:

p_G(X, Z) = \prod_j p(x_j \mid pa_G(x_j)) \prod_i p(z_i \mid pa_G(z_i))

wherein pa_G(x_j) denotes the parent nodes of x_j in the associated directed acyclic graph G, pa_G(z_i) denotes the parent nodes of z_i in G, and p(· | ·) denotes a local conditional probability.
By the chain rule p_model(X, Z) = p_model(Z) p_model(X | Z), the generative model reduces to modeling two distributions: the conditional distribution p_model(X | Z) of the observed variables and the prior distribution p_model(Z) of the latent variables. The method adopts a deep implicit model, i.e., the combination of this formulation with deep learning: the conditional distribution p(x | z) is modeled implicitly with a neural network. Implicit modeling means that p(x | z) itself is not written down; instead the generation process is modeled, i.e., a mapping function g: z → x is learned. The input of the generative adversarial network is a latent variable z and its output is an observed variable x.
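The chain-structured factorization can be illustrated with a toy two-state example in which each azimuth view depends only on its predecessor; all numbers are made up for illustration:

```python
import numpy as np
from itertools import product

# Toy illustration of a chain-structured Bayesian network over azimuth views:
# each view depends only on its predecessor, so the joint probability
# factorizes into local conditionals. All numbers are invented for this sketch.
p_x1 = np.array([0.6, 0.4])          # prior over the 2 states of view 1
p_cond = np.array([[0.7, 0.3],       # p(next state | previous state)
                   [0.2, 0.8]])

def joint_prob(states):
    """p(x1, ..., xk) under the chain factorization p(x1) * prod p(xi | x_{i-1})."""
    p = p_x1[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= p_cond[prev, cur]
    return p

# A valid factorization must sum to 1 over all assignments of 4 views.
total = sum(joint_prob(s) for s in product([0, 1], repeat=4))
```

In the patent's setting, the local conditionals are not small tables but deep networks, which is precisely why the posterior becomes intractable and must be approximated.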
S4, comparing, through the discriminator of the generative adversarial network, the features of the N-1 SAR generated images obtained by the generator in step S3 with those of the corresponding N-1 SAR real images, and measuring the similarity between the SAR generated images and the SAR real images with a loss function.
Step S4 uses the discriminator of the generative adversarial network and the loss function to extract and compare the features of the real and generated images; in the comparison, SAR generated images and SAR real images with the same sequence number are compared one by one.
S5, judging whether training is finished; if not, returning to step S2, and if so, continuing to step S6.
The invention trains the generator and the discriminator of the generative adversarial network simultaneously. The generator takes as input the features extracted by the convolutional neural network (and can also be regarded as subsuming the work of the convolutional neural network) and outputs a generated image. The discriminator takes the generated image and the real image as input, and outputs their difference and the discrimination result. The generator and the discriminator are trained together until they reach equilibrium.
S6, obtaining the trained generative adversarial network, and randomly generating SAR generated images.
When randomly generating SAR generated images in this step, N consecutive SAR real images are input at random, and the trained generative adversarial network generates N-1 SAR generated images with consecutive azimuth angles following that of the first SAR real image.
Preferably, in step S4 of the target SAR image generation method, when the loss function is used to measure the similarity between the SAR generated image and the SAR real image, a penalty term based on mutual information is introduced into the loss function, where the mutual information is expressed as:

I(X; Y) = \sum_{x \in X} \sum_{y \in Y} P_{XY}(x, y) \log \frac{P_{XY}(x, y)}{P_X(x) P_Y(y)}

wherein I(X, Y) is the mutual information, X and Y represent the SAR real image and the SAR generated image respectively, x is a pixel gray level in X, y is a pixel gray level in Y, P_{XY}(x, y) is the joint probability density of X and Y, and P_X(x) and P_Y(y) are the marginal probability density functions of X and Y, respectively.
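A generic histogram-based estimator of this mutual information between two images might look as follows; this is a sketch under assumed names and bin count, not the patent's implementation:

```python
import numpy as np

# Histogram-based estimate of I(X; Y) between two images: the joint gray-level
# histogram gives P_XY, and its row/column sums give the marginals P_X, P_Y.
# The bin count and the function name are assumptions of this sketch.
def mutual_information(img_x, img_y, bins=8):
    joint, _, _ = np.histogram2d(img_x.ravel(), img_y.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    nz = p_xy > 0                      # restrict to nonzero cells, avoiding log(0)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / np.outer(p_x, p_y)[nz])))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
i_self = mutual_information(img, img)                    # maximal: I(X; X) = H(X)
i_noise = mutual_information(img, rng.random((32, 32)))  # independent: near zero
```

A penalty that rewards large mutual information between the generated and real image thus pushes the generator toward outputs whose gray-level statistics are tightly coupled to the real data.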
Building on the idea of the GAN (generative adversarial network), the invention modifies the GAN loss function by introducing a penalty term based on mutual information, which is used to measure the similarity between real and generated images and to optimize the extrapolation generation of electromagnetic scattering data. A generative adversarial network built on mutual-information association can extract more target characteristics and information, providing more and richer features for data-driven modeling and for optimizing the data generation process when electromagnetic scattering data (SAR image data) are generated.
Mutual information measures the degree of interdependence between random variables. Given a random variable X and another random variable Y, their mutual information is I(X; Y) = H(Y) - H(Y | X), where H(Y) is the information entropy of Y and H(Y | X) is the conditional entropy of Y given X.
From the probabilistic point of view, the mutual information I(X, Y) is determined by the joint probability distribution P_{XY}(x, y) of the random variables X and Y and their marginal probability distributions P_X(x) and P_Y(y), which yields the expression for the mutual information. Unlike the correlation coefficient, mutual information is not limited to real-valued random variables; it is more general and determines the degree of similarity between the joint distribution P_{XY}(x, y) and the product P_X(x)P_Y(y) of the factored marginal distributions.
Maximizing mutual information amounts to maximizing the correlation of two random events; applied to datasets, it maximizes the correlation of the probability distributions fitted to the two datasets. In machine learning, ideally, when the mutual information is maximal, the probability distribution of the random variables fitted from the dataset can be considered identical to the true distribution. The invention therefore studies a mutual-information-associated optimization method for electromagnetic scattering generation: mutual information serves both as a measurement technique for the generated electromagnetic scattering data and as an incentive measure, and the mutual-information-associated optimization design brings the distribution of the generated electromagnetic scattering data close to the real data distribution.
Preferably, the method further comprises:
and S7, evaluating the similarity between the SAR generated image generated randomly and the SAR real image by adopting a Structural Similarity (SSIM) index.
The similarity between a real image and a generated image is commonly measured with the SSIM and MSE evaluation criteria. MSE, the mean square error, is an index of the similarity between two images (the smaller the value, the more similar the images) and is widely used in the literature; it is computed as:

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [I(i, j) - K(i, j)]^2

wherein m and n are the width and height of the images, and I(i, j) and K(i, j) are the pixel values of the two images at coordinate (i, j); i.e., the pixel values at corresponding positions are subtracted, squared, and accumulated. MSE is very simple to implement, but as a similarity criterion it has the drawback that a large difference in pixel intensities does not necessarily mean that the image contents differ greatly.
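The MSE criterion can be written out directly (an illustrative sketch):

```python
import numpy as np

# Mean square error between two images of equal shape: subtract pixel values
# at corresponding positions, square, and average over all m*n pixels.
def mse(img_i, img_k):
    diff = img_i.astype(float) - img_k.astype(float)
    return float(np.mean(diff ** 2))

a = np.zeros((4, 4))
b = np.ones((4, 4))   # every pixel differs by 1, so MSE(a, b) = 1
```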
In order to better judge the similarity of two images, the invention adopts the SSIM index, which reflects the similarity of two images in three aspects: luminance, contrast and structure. The mean is used as the luminance estimate, the standard deviation as the contrast estimate, and the covariance as the measure of structural similarity. The index lies in the range [0, 1]: SSIM = 0 indicates that the two images are not similar at all, and SSIM = 1 indicates that they are identical, i.e., the closer the value is to 1, the more similar the two images.
Further, in step S7, when the structural similarity index is used to evaluate the similarity between the randomly generated SAR generated image and the SAR real image, the expression is:
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$
where SSIM(x, y) denotes the structural similarity between image x and image y; μ_x and μ_y denote the means of image x and image y respectively; σ_x^2 and σ_y^2 denote the variances of image x and image y; σ_xy denotes the covariance of image x and image y; and C_1 and C_2 are constants used to keep the computation stable. Substituting the SAR generated image and the SAR real image for image x and image y yields the similarity between the SAR generated image and the SAR real image.
Evaluation based on the SSIM index is more complex than MSE: SSIM attempts to model the perceived change in the structural information of the image, whereas MSE is actually an estimate of the perceived error. The difference between the two is subtle, but the difference in their results can be large. If the images are identical, MSE is 0 and SSIM is 1; if the pixel values of two images differ greatly while their contents remain similar, MSE is large, yet SSIM still evaluates the content similarity well. The invention therefore adopts SSIM to evaluate the data generated by the whole generation countermeasure network after the network model has stabilized. If the evaluation result is poor, a new training set can be reconstructed and a new round of training of the generation countermeasure network performed.
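As an illustrative sketch (not the patent's implementation), the SSIM formula above can be evaluated globally over a whole image pair. The window-free form, the random test images, and the constants C_1 = (0.01L)^2, C_2 = (0.03L)^2 for dynamic range L = 255 are assumptions of this example; practical SSIM implementations usually apply the formula over local windows:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM over the whole image, following the formula above:
    mean -> luminance, variance -> contrast, covariance -> structure."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1 = (0.01 * data_range) ** 2  # stabilizing constants; these values are the
    c2 = (0.03 * data_range) ** 2  # conventional choice, not fixed by the patent
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return float(num / den)

rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = real + rng.normal(0.0, 50.0, size=real.shape)  # stand-in "generated" image
print(ssim_global(real, real))   # identical images -> 1.0
print(ssim_global(real, noisy))  # degraded image -> somewhere in (0, 1)
```

This matches the qualitative behavior described above: identical images score 1, and a heavily perturbed image scores well below 1 even though every pixel changed.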
In a preferred embodiment, step S4 of the target SAR image generation method further includes: if the similarity between the SAR generated image and the SAR real image is lower than a set threshold, storing the corresponding training sample into a fine-tuning sample set;
the step S6 further includes: after the generated countermeasure network which is trained is obtained, before SAR generated images are generated randomly, fine tuning training is carried out on the generated countermeasure network by utilizing the training samples in the fine tuning sample set.
If the similarity between the SAR generated image and the SAR real image is lower than a set threshold, the shape of the image generated by the model is considered poor, and the training samples corresponding to such poorly shaped images are selected as the fine-tuning sample set; the training samples in the fine-tuning sample set have the same form as those in the training set. After training is completed and the basic model of the generation countermeasure network is obtained, the fine-tuning sample set is input again and fine-tuning training is performed to obtain the fine-tuned model. Fine-tuning training is equivalent to a fast retraining; the specific training procedure follows steps S2 to S4 and is not repeated here. FIGS. 3(a) and 3(b) illustrate the effect of generating images with a basic model trained using a conventional GAN loss function without fine-tuning, where FIG. 3(a) is the input SAR real image, FIG. 3(b) is the SAR generated image, and the training set is the MSTAR (bmp2 and btr70) training set; FIGS. 4(a) and 4(b) illustrate the effect of generating images after training with a conventional GAN loss function followed by fine-tuning, where FIG. 4(a) is the input SAR real image and FIG. 4(b) is the SAR generated image; FIGS. 5(a) and 5(b) illustrate the effect of images generated by the generation countermeasure network after training with the loss function optimized by the invention followed by fine-tuning training in an embodiment of the invention, where FIG. 5(a) is the input SAR real image and FIG. 5(b) is the SAR generated image. By fine-tuning the basic model and optimizing the loss function through mutual information, the accuracy of the generation countermeasure network can be improved, i.e., the electromagnetic scattering SAR image data extrapolation model is optimized.
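The selection of the fine-tuning sample set can be sketched as follows. The threshold value (0.6), the sample names and the SSIM scores are illustrative assumptions of this sketch, not values given in the patent:

```python
# Sketch of the fine-tuning selection in steps S4/S6: collect the training
# samples whose generated images scored below a similarity threshold, then
# reuse exactly those samples for a fast fine-tuning pass over the base model.

def build_finetune_set(samples, ssim_scores, threshold=0.6):
    """Keep the training samples whose generated image scored below the
    similarity threshold against the real image ("poorly shaped" cases)."""
    if len(samples) != len(ssim_scores):
        raise ValueError("one similarity score is required per sample")
    return [s for s, score in zip(samples, ssim_scores) if score < threshold]

samples = ["sample_0", "sample_1", "sample_2", "sample_3"]
scores = [0.82, 0.41, 0.77, 0.55]
print(build_finetune_set(samples, scores))  # ['sample_1', 'sample_3']
```

The fine-tuning set keeps the same form as the original training set, so the resulting list can be fed straight back into the training procedure of steps S2 to S4.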
The invention also provides a target SAR image generation device, which comprises a training set construction unit, a model training unit and an image generation unit, wherein:
the training set construction unit is used for acquiring real SAR image data to form a training set; the training set comprises a plurality of groups of training samples, each group of training samples comprises 2N SAR real images with the same pitch angle and continuous azimuth angles, and N is an integer greater than or equal to 3.
The model training unit is used for executing the following steps:
A. selecting N SAR real images with consecutive azimuth angles from a group of training samples and extracting features with a convolutional neural network, the features including the overall relation features of the N SAR real images and the single-image features of each SAR real image;
B. inputting the single-image features of the first SAR real image extracted in step A and the overall relation features of the N consecutive SAR real images into the generator of the generation countermeasure network to obtain N-1 SAR generated images, the azimuth angles of the N-1 SAR generated images being consecutive and immediately following that of the first SAR real image; the generation countermeasure network is constructed by combining a depth implicit model and a probability graph model and simulates the relation between spatial features of SAR image data samples based on a Bayesian network;
C. comparing, through the discriminator of the generation countermeasure network, the features of the N-1 SAR generated images obtained by the generator in step B with those of the corresponding N-1 SAR real images, and measuring the similarity of the SAR generated images and the SAR real images with a loss function;
D. judging whether the training is finished; if not, returning to step A until the training is completed.
The image generation unit is used for obtaining the generation countermeasure network after training and randomly generating an SAR generation image.
Preferably, when the model training unit measures the similarity between the SAR generated image and the SAR real image by using the loss function, a penalty term based on mutual information is introduced into the loss function, and the expression of the mutual information is as follows:
$$I(X;Y) = \sum_{x \in X}\sum_{y \in Y} P_{XY}(x,y)\,\log\frac{P_{XY}(x,y)}{P_X(x)\,P_Y(y)}$$
wherein X and Y denote the SAR real image and the SAR generated image respectively, x denotes a pixel gray level in X, y denotes a pixel gray level in Y, P_XY(x, y) is the joint probability density of X and Y, and P_X(x) and P_Y(y) are the marginal probability density functions of X and Y, respectively.
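As a hedged illustration (not the patent's training code), the mutual-information expression above can be estimated for two discrete images from their joint gray-level histogram; the bin count and the random test images are assumptions of this sketch:

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """Discrete plug-in estimate of I(X; Y) from the joint gray-level
    histogram of two images (natural logarithm)."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p_xy = joint / joint.sum()   # joint probability P_XY(x, y)
    p_x = p_xy.sum(axis=1)       # marginal P_X(x)
    p_y = p_xy.sum(axis=0)       # marginal P_Y(y)
    nz = p_xy > 0                # skip empty cells to avoid log(0)
    outer = np.outer(p_x, p_y)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / outer[nz])))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
other = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
print(mutual_information(img, img))    # maximal: equals the entropy of img's histogram
print(mutual_information(img, other))  # near 0 for two independent images
```

This matches the role described above: the estimate is large when the generated data's distribution tracks the real distribution, and small when the two are unrelated, so a penalty term built from it rewards generators whose output distribution stays close to the real one.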
Preferably, the target SAR image generation device further comprises a model evaluation unit, and the model evaluation unit is configured to evaluate the similarity between the randomly generated SAR generated image and the SAR real image by using the structural similarity index.
Since the content of information interaction, execution process, and the like between the units of the target SAR image generation apparatus is based on the same concept as the method embodiment of the present invention, specific content may refer to the description in the method embodiment of the present invention, and is not described herein again.
In the above embodiments, a hardware unit may be implemented mechanically or electrically. For example, a hardware unit may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) that performs the corresponding operations. A hardware unit may also comprise programmable logic or circuitry (such as a general-purpose processor or other programmable processor) temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, permanently dedicated, or temporarily configured) may be decided based on cost and time considerations.
In particular, in some preferred embodiments of the present invention, there is also provided a computer device, including a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the target SAR image generation method in any one of the above embodiments when executing the computer program.
In other preferred embodiments of the present invention, a computer-readable storage medium is further provided, on which a computer program is stored, which when executed by a processor implements the steps of the target SAR image generation method described in any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes in the method according to the above embodiments may be implemented by a computer program, which may be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes in the above embodiments of the target SAR image generation method, and will not be described again here.
In summary, the invention provides a method and a device for generating SAR images by combining a depth implicit model and a probability graph model. By sequentially performing SAR image feature analysis, the training and generation processes, and the mutual-information correlation optimization design on incomplete target electromagnetic scattering SAR image data, an optimized electromagnetic scattering SAR image data extrapolation model is finally obtained, realizing the extrapolated generation of SAR image data and completing and expanding the data volume. The method can solve the problem of incomplete data in current SAR imagery and, especially for targets with complex shapes, materials and structures, can efficiently acquire more complete electromagnetic scattering characteristic data at low cost. Based on the intelligent algorithm model, it can generate electromagnetic scattering characteristic data with high confidence and remains effective when factors such as target attitude, frequency and polarization change.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A target SAR image generation method is characterized by comprising the following steps:
s1, acquiring real SAR image data to form a training set; the training set comprises a plurality of groups of training samples, each group of training samples comprises 2N SAR real images with the same pitch angle and continuous azimuth angles, and N is an integer greater than or equal to 3;
s2, selecting N SAR real images with continuous azimuth angles from a group of training samples, and extracting features by using a convolutional neural network, wherein the features comprise the overall relation features of the N SAR real images and the single image features of each SAR real image;
s3, inputting the single image characteristics of the first SAR real image extracted in the step S2 and the integral relation characteristics of the continuous N SAR real images into a generator for generating a countermeasure network to obtain N-1 SAR generated images, wherein the azimuth angles of the N-1 SAR generated images are continuous and correspond to the rear of the first SAR real image; the generation countermeasure network is constructed by combining a depth implicit model and a probability graph model, and the relation between SAR image data sample space characteristics is simulated based on a Bayesian network;
s4, respectively comparing the characteristics of the N-1 SAR generated images obtained by the generator in the step S3 and the corresponding N-1 SAR real images through the discriminator for generating the countermeasure network, and measuring the similarity of the SAR generated images and the SAR real images by using a loss function;
s5, judging whether the training is finished or not, and if not, returning to the step S2;
and S6, obtaining the generation countermeasure network after training, and randomly generating an SAR generation image.
2. The method of generating a target SAR image according to claim 1, characterized in that:
in step S4, when the loss function is used to measure the similarity between the SAR-generated image and the SAR-real image, a penalty term based on mutual information is introduced into the loss function, where the expression of the mutual information is:
$$I(X;Y) = \sum_{x \in X}\sum_{y \in Y} P_{XY}(x,y)\,\log\frac{P_{XY}(x,y)}{P_X(x)\,P_Y(y)}$$
wherein X and Y denote the SAR real image and the SAR generated image respectively, x denotes a pixel gray level in X, y denotes a pixel gray level in Y, P_XY(x, y) is the joint probability density of X and Y, and P_X(x) and P_Y(y) are the marginal probability density functions of X and Y, respectively.
3. The method of generating a target SAR image according to claim 1, further comprising:
and S7, evaluating the similarity of the SAR generated image generated randomly and the SAR real image by adopting the structural similarity index.
4. The method of generating a target SAR image according to claim 3, characterized in that:
in step S7, when the structural similarity index is used to evaluate the similarity between the randomly generated SAR generated image and the SAR real image, the expression is:
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$
where SSIM(x, y) denotes the structural similarity between image x and image y; μ_x and μ_y denote the means of image x and image y respectively; σ_x^2 and σ_y^2 denote the variances of image x and image y; σ_xy denotes the covariance of image x and image y; and C_1 and C_2 are constants.
5. The target SAR image generation method according to claim 1,
the step S4 further includes: if the similarity between the SAR generated image and the SAR real image is lower than a set threshold, storing the corresponding training sample into a fine adjustment sample set;
the step S6 further includes: after the generated countermeasure network which is trained is obtained, before SAR generated images are generated randomly, fine tuning training is carried out on the generated countermeasure network by utilizing the training samples in the fine tuning sample set.
6. A target SAR image generation apparatus, comprising:
the training set constructing unit is used for acquiring real SAR image data to form a training set; the training set comprises a plurality of groups of training samples, each group of training samples comprises 2N SAR real images with the same pitch angle and continuous azimuth angles, and N is an integer greater than or equal to 3;
a model training unit for performing the steps of:
A. selecting N SAR real images with continuous azimuth angles from a group of training samples, and extracting features by using a convolutional neural network, wherein the features comprise the integral relation features of the N SAR real images and the single image features of each SAR real image;
B. inputting the single image characteristics of the first SAR real image extracted in the step A and the integral relation characteristics of the continuous N SAR real images into a generator for generating a countermeasure network to obtain N-1 SAR generated images, wherein the azimuth angles of the N-1 SAR generated images are continuous and correspond to the rear of the first SAR real image; the generation countermeasure network is constructed by combining a depth implicit model and a probability graph model, and the relation between SAR image data sample space characteristics is simulated based on a Bayesian network;
C. respectively comparing the characteristics of the N-1 SAR generated images obtained by the generator in the step B and the corresponding N-1 SAR real images through the discriminator for generating the countermeasure network, and measuring the similarity of the SAR generated images and the SAR real images by using a loss function;
D. judging whether the training is finished or not, and returning to the step A if not;
and the image generation unit is used for obtaining the generation countermeasure network after training is finished and randomly generating an SAR generation image.
7. The target SAR image generation device of claim 6, wherein: when the model training unit measures the similarity of the SAR generated image and the SAR real image by using a loss function, a punishment item based on mutual information is introduced into the loss function, and the expression of the mutual information is as follows:
$$I(X;Y) = \sum_{x \in X}\sum_{y \in Y} P_{XY}(x,y)\,\log\frac{P_{XY}(x,y)}{P_X(x)\,P_Y(y)}$$
wherein X and Y denote the SAR real image and the SAR generated image respectively, x denotes a pixel gray level in X, y denotes a pixel gray level in Y, P_XY(x, y) is the joint probability density of X and Y, and P_X(x) and P_Y(y) are the marginal probability density functions of X and Y, respectively.
8. The target SAR image generation device of claim 6, further comprising:
and the model evaluation unit is used for evaluating the similarity between the SAR generated image generated randomly and the SAR real image by adopting the structural similarity index.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the target SAR image generation method according to any of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the target SAR image generation method according to any one of claims 1 to 5.
CN202011278930.5A 2020-11-16 2020-11-16 Target SAR image generation method and device Active CN112415514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011278930.5A CN112415514B (en) 2020-11-16 2020-11-16 Target SAR image generation method and device


Publications (2)

Publication Number Publication Date
CN112415514A true CN112415514A (en) 2021-02-26
CN112415514B CN112415514B (en) 2023-05-02

Family

ID=74831929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011278930.5A Active CN112415514B (en) 2020-11-16 2020-11-16 Target SAR image generation method and device

Country Status (1)

Country Link
CN (1) CN112415514B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114997238A (en) * 2022-06-24 2022-09-02 西北工业大学 SAR target identification method and device based on distributed correction
CN116051426A (en) * 2023-03-27 2023-05-02 南京誉葆科技股份有限公司 Synthetic aperture radar image processing method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130106649A1 (en) * 2011-10-31 2013-05-02 Kenneth W. Brown Methods and apparatus for wide area synthetic aperture radar detection
CN109284786A (en) * 2018-10-10 2019-01-29 西安电子科技大学 The SAR image terrain classification method of confrontation network is generated based on distribution and structure matching
CN110783177A (en) * 2019-10-31 2020-02-11 中山大学 Method for growing graphical GaN on sapphire template and GaN epitaxial wafer
CN110930471A (en) * 2019-11-20 2020-03-27 大连交通大学 Image generation method based on man-machine interactive confrontation network
US20200142057A1 (en) * 2018-11-06 2020-05-07 The Board Of Trustees Of The Leland Stanford Junior University DeepSAR: Specific Absorption Rate (SAR) prediction and management with a neural network approach
CN111179207A (en) * 2019-12-05 2020-05-19 浙江工业大学 Cross-modal medical image synthesis method based on parallel generation network
US20200167161A1 (en) * 2017-08-08 2020-05-28 Siemens Aktiengesellschaft Synthetic depth image generation from cad data using generative adversarial neural networks for enhancement
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network
CN111767861A (en) * 2020-06-30 2020-10-13 苏州兴钊防务研究院有限公司 SAR image target identification method based on multi-discriminator generation countermeasure network
CN111861906A (en) * 2020-06-22 2020-10-30 长安大学 Pavement crack image virtual augmentation model establishment and image virtual augmentation method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KU, WONHOE et al.: "The Method for Colorizing SAR Images of Kompsat-5 Using Cycle GAN with Multi-scale Discriminators", KOREAN JOURNAL OF REMOTE SENSING
ZHAI, JIA et al.: "AR Image Generation Using Structural Bayesian Deep Generative Adversarial Network", 2019 PHOTONICS & ELECTROMAGNETICS RESEARCH SYMPOSIUM - FALL (PIERS - FALL)
WU, Fei: "SAR Image Change Detection Based on Generative Adversarial Network and Non-local Neural Network", China Master's Theses Full-text Database, Information Science and Technology
XU, Ying; GU, Yu; PENG, Dongliang; LIU, Jun: "Synthetic Aperture Radar Image Target Recognition Based on DRGAN and Support Vector Machine", Optics and Precision Engineering
CHENG, Shen: "Research on Generative Adversarial Networks and Their Applications", China Master's Theses Full-text Database, Information Science and Technology

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114997238A (en) * 2022-06-24 2022-09-02 西北工业大学 SAR target identification method and device based on distributed correction
CN114997238B (en) * 2022-06-24 2023-04-07 西北工业大学 SAR target identification method and device based on distributed correction
CN116051426A (en) * 2023-03-27 2023-05-02 南京誉葆科技股份有限公司 Synthetic aperture radar image processing method

Also Published As

Publication number Publication date
CN112415514B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN107680120B (en) Infrared small target tracking method based on sparse representation and transfer limited particle filtering
CN109522857B (en) People number estimation method based on generation type confrontation network model
Zhang et al. Global and local saliency analysis for the extraction of residential areas in high-spatial-resolution remote sensing image
Lee et al. Unsupervised multistage image classification using hierarchical clustering with a Bayesian similarity measure
CN112132012B (en) High-resolution SAR ship image generation method based on generation countermeasure network
CN108399430B (en) A kind of SAR image Ship Target Detection method based on super-pixel and random forest
CN109446894A (en) The multispectral image change detecting method clustered based on probabilistic segmentation and Gaussian Mixture
CN110826428A (en) Ship detection method in high-speed SAR image
CN108681689B (en) Frame rate enhanced gait recognition method and device based on generation of confrontation network
CN108171119B (en) SAR image change detection method based on residual error network
Khan et al. A customized Gabor filter for unsupervised color image segmentation
CN103729854A (en) Tensor-model-based infrared dim target detecting method
CN112415514B (en) Target SAR image generation method and device
CN108345898A (en) A kind of novel line insulator Condition assessment of insulation method
Picco et al. Unsupervised Classification of SAR Images Using Markov Random Fields and ${\cal G} _ {I}^{0} $ Model
CN115705393A (en) Radar radiation source grading identification method based on continuous learning
CN115861595B (en) Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning
Pelliza et al. Optimal Canny’s parameters regressions for coastal line detection in satellite-based SAR images
CN116030300A (en) Progressive domain self-adaptive recognition method for zero-sample SAR target recognition
CN115223033A (en) Synthetic aperture sonar image target classification method and system
CN113920391A (en) Target counting method based on generated scale self-adaptive true value graph
Xinyu et al. Methods for underwater sonar image processing in objection detection
Yu et al. A lightweight ship detection method in optical remote sensing image under cloud interference
CN109934292B (en) Unbalanced polarization SAR terrain classification method based on cost sensitivity assisted learning
Zhang et al. Statistical shape model of Legendre moments with active contour evolution for shape detection and segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant