CN114998749B - SAR data amplification method for target detection

Info

Publication number: CN114998749B
Application number: CN202210900855.4A
Authority: CN (China)
Prior art keywords: sample, target, SAR image, image data, generated
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114998749A
Inventors: 车程安, 符晗, 贺广均, 冯鹏铭, 金世超, 张鹏, 刘世烁, 常江, 邹同元, 王勇, 梁银川
Original and current assignee: Beijing Institute of Satellite Information Engineering
Priority and filing date: 2022-07-28 (priority to CN202210900855.4A)
Application filed by: Beijing Institute of Satellite Information Engineering

Classifications

    • G06V20/10: Image or video recognition or understanding; scenes; scene-specific elements; terrestrial scenes
    • G06N3/02, G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition or understanding using neural networks

Abstract

The invention relates to a SAR data amplification method for target detection, comprising the following steps: acquiring an original SAR image dataset and its annotation information; performing target detection on the original SAR image dataset and constructing a negative sample set in combination with the annotation information; obtaining target samples in the original SAR image dataset using the annotation information and constructing an azimuth target sample set; constructing a generative adversarial network based on a self-attention mechanism and iteratively training it with the negative sample set and the azimuth target sample set; evaluating the quality of the samples produced by the generative adversarial network to obtain high-quality generated samples; and inserting the high-quality generated samples, together with the corresponding annotation information, into the original SAR image dataset. The invention realizes automatic amplification of SAR data and improves the diversity and balance of target samples in the training set for SAR image target recognition tasks.

Description

SAR data amplification method for target detection
Technical Field
The invention relates to the technical field of intelligent interpretation and application of remote sensing images, and in particular to a SAR data amplification method for target detection.
Background
Synthetic Aperture Radar (SAR) offers all-weather, day-and-night imaging and is currently one of the most important means of Earth observation, making radar image target recognition an important research direction. Meanwhile, with the continuous development of deep learning, deep learning has been applied across many industries, and its application to SAR image target detection and recognition has also achieved good results. However, because the target characteristics in a SAR image depend directly on the imaging angle, the azimuth angles of the target samples may be unevenly distributed when the data sample size is insufficient; for example, training samples at certain azimuth angles may be significantly fewer than at others. As a result, the generalization ability of a target detection model trained by deep learning is too low, and targets with few training samples, or targets whose azimuth angle differs greatly from that of the training samples, cannot be accurately recognized. In addition, some objects in SAR images have features close to those of the targets, which easily leads to false detections during detection.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the invention aims to provide a SAR data amplification method for target detection that realizes automatic amplification of SAR data and improves the diversity and balance of target samples in the training set for SAR image target recognition tasks.
To achieve this purpose, the technical solution of the invention is as follows:
the invention provides a SAR data amplification method for target detection, comprising the following steps:
acquiring an original SAR image dataset and its annotation information;
performing target detection on the original SAR image dataset, and constructing a negative sample set in combination with the annotation information;
obtaining target samples in the original SAR image dataset using the annotation information, and constructing an azimuth target sample set;
constructing a generative adversarial network based on a self-attention mechanism, and iteratively training the generative adversarial network with the negative sample set and the azimuth target sample set;
evaluating the quality of the samples generated by the generative adversarial network to obtain high-quality generated samples;
and inserting the high-quality generated samples and their corresponding annotation information into the original SAR image dataset.
According to an aspect of the present invention, the annotation information includes a target category label and an azimuth label of each target in the original SAR image dataset.
According to one aspect of the invention, performing target detection on the original SAR image dataset and constructing a negative sample set in combination with the annotation information comprises:
training a target detector with the original SAR image dataset;
detecting targets in the original SAR image dataset with the trained target detector, and obtaining negative samples of false detections for each target category according to the annotation information;
adding positive samples of the falsely detected target categories to the negative samples of the corresponding categories in a set quantity ratio;
and constructing false-detection category labels for the negative samples to obtain the negative sample set.
According to one aspect of the invention, the target detector employs a YOLOv5s model.
According to an aspect of the present invention, obtaining target samples in the original SAR image dataset using the annotation information and constructing an azimuth target sample set comprises:
cropping target samples from the original SAR images according to the image aspect ratio using the annotation information of the original SAR image dataset, and resizing the target sample images;
and constructing azimuth-target-category labels for the target samples according to the annotation information to obtain the azimuth target sample set.
According to an aspect of the present invention, constructing the generative adversarial network based on a self-attention mechanism and iteratively training the generative adversarial network with the negative sample set and the azimuth target sample set comprises:
step 401, constructing a generator and adding a self-attention (SA) module;
step 402, constructing a discriminator and adding the self-attention SA module;
step 403, generating a random vector, one-hot encoding the labels in the negative sample set and the azimuth target sample set, connecting the encoded labels with the random vector through a Concat layer, inputting the connected vector into the generator, and outputting a generated sample;
step 404, fixing the parameters of the generator and updating the parameters of the discriminator;
step 405, fixing the parameters of the discriminator and updating the parameters of the generator;
step 406, looping steps 403-405 to iteratively train the generative adversarial network until the discriminator and the generator reach equilibrium.
According to an aspect of the present invention, step 404, fixing the parameters of the generator and updating the parameters of the discriminator, comprises:
connecting the encoded label with a generated sample output by the generator and inputting them into the discriminator;
connecting the encoded label with a real sample in the original SAR image dataset corresponding to that label and inputting them into the discriminator;
updating the parameters of the discriminator to maximize the expectation of log D(x|y) + log(1 − D(G(z|y))), wherein D is the discriminator, G is the generator, x is the real sample, y is a label in the negative sample set and the azimuth target sample set, and z is a random vector.
According to an aspect of the invention, step 405, fixing the parameters of the discriminator and updating the parameters of the generator, comprises:
connecting the encoded label with a generated sample output by the generator and inputting them into the discriminator;
updating the parameters of the generator to minimize the expectation of log(1 − D(G(z|y))), wherein D is the discriminator, G is the generator, y is a label in the negative sample set and the azimuth target sample set, and z is a random vector.
According to one aspect of the invention, evaluating the quality of the samples generated by the generative adversarial network comprises:
computing the mean, variance, equivalent number of looks, and radiometric resolution of the generated samples finally output by the generative adversarial network and of the real samples of the original SAR image dataset, and removing generated samples that differ greatly from the real samples in mean, variance, equivalent number of looks, or radiometric resolution, to obtain high-quality generated samples.
According to an aspect of the invention, inserting the high-quality generated samples and corresponding annotation information into the original SAR image dataset comprises:
inserting the high-quality generated samples into the original SAR image dataset using a Poisson fusion method, and adding the corresponding annotation information to the inserted samples to obtain a data-enhanced SAR image dataset.
Compared with the prior art, the invention has the following advantages:
According to the scheme of the invention, the azimuth target sample set and the negative sample set are constructed by adding azimuth and target-category labels that constrain the generative adversarial network, and a self-attention mechanism is introduced into the generative adversarial network, enabling the generation of high-quality target samples of different azimuth angles and different categories. This realizes automatic amplification of SAR data and improves the diversity and balance of the target samples. Meanwhile, a negative sample set of objects that are easily falsely detected is generated and added to the original SAR images as a new category, helping the target detection model distinguish positive samples from negative samples. The method alleviates the missed detections and false detections caused by insufficient and low-quality training samples in SAR image target detection tasks, further improves the generalization ability and accuracy of the SAR target detection model, and is well suited to data enhancement of the training set for SAR image target recognition tasks.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically shows a flowchart of an implementation of a SAR data amplification method for target detection according to an embodiment of the present invention;
Fig. 2 is a flowchart of an implementation of the training of the generative adversarial network provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the network structure of the generator in the generative adversarial network according to an embodiment of the present invention;
Fig. 4 schematically shows a structural diagram of the self-attention SA module according to an embodiment of the present invention.
Detailed Description
The description of the embodiments of this specification is intended to be read in conjunction with the accompanying drawings, which are to be considered part of the complete specification. In the drawings, the shape or thickness of the embodiments may be exaggerated, simplified, or indicated for convenience. Furthermore, the components of the structures in the drawings are described separately; components not shown or described in the drawings are well known to those skilled in the art.
Any reference to directions and orientations in the description of the embodiments herein is merely for convenience of description and should not be construed as limiting the scope of the invention in any way. The following description of the preferred embodiments refers to combinations of features which may be present independently or in combination; the present invention is not limited to the preferred embodiments. The scope of the invention is defined by the claims.
Referring to fig. 1, an embodiment of the present invention provides a method for amplifying SAR data for target detection, including the following steps:
step 100, acquiring an original SAR image data set and labeling information thereof.
In one embodiment, the annotation information in step 100 specifically includes a target category label and an azimuth label for each target in the original SAR image dataset; that is, each target on the SAR images of the original SAR image dataset is annotated with its corresponding target category label and azimuth label. The SAR image samples contained in the original SAR image dataset are positive samples, i.e. real samples.
Step 200, performing target detection on the original SAR image dataset, and constructing a negative sample set in combination with the annotation information.
In one embodiment, the implementation of step 200 of performing target detection on the original SAR image dataset and constructing a negative sample set in combination with the annotation information includes:
Step 201, training a target detector with the original SAR image dataset.
Step 202, detecting the targets in the original SAR image dataset with the trained target detector, and obtaining negative samples of false detections for each target category according to the annotation information. Specifically, the falsely detected regions of each category are cropped from the original SAR images with an aspect ratio of 1:1.
Step 203, adding positive samples of the falsely detected target categories to the negative samples of the corresponding categories in a set quantity ratio. Specifically, the positive samples, also called real samples, are the real images of each falsely detected target category in the original SAR image dataset. Positive samples of the corresponding falsely detected categories are cropped with the same aspect ratio and added to the negative samples of each category at a negative-to-positive sample ratio of 10:1, and the positive and negative sample images are resized to 64 × 64.
Step 204, constructing false-detection category labels for the negative samples to obtain the negative sample set. Specifically, a false-detection category label is constructed for each negative sample image according to its false-detection category, for example "false detection - tanker", yielding a negative sample set of image / false-detection-category-label pairs; the negative sample set therefore also contains a small number of positive samples.
Specifically, the target detector adopts a YOLOv5s model.
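As a minimal illustration of step 200, the Python sketch below shows one way the false-detection negative samples could be mined by comparing detector outputs with the ground-truth annotations. It is not taken from the patent: the box format (x1, y1, x2, y2, class), the IoU threshold and the helper names are illustrative assumptions.

import cv2

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def crop_square(image, box, size=64):
    """Crop a square (1:1) patch centered on the box and resize it to size x size."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    half = max(x2 - x1, y2 - y1) / 2
    xa, xb = int(max(cx - half, 0)), int(min(cx + half, image.shape[1]))
    ya, yb = int(max(cy - half, 0)), int(min(cy + half, image.shape[0]))
    return cv2.resize(image[ya:yb, xa:xb], (size, size))

def mine_negatives(image, detections, ground_truth, iou_thr=0.5):
    """Return 64 x 64 chips of false detections labelled 'false detection - <class>'."""
    negatives = []
    for (x1, y1, x2, y2, cls) in detections:
        matched = any(iou((x1, y1, x2, y2), g[:4]) >= iou_thr and g[4] == cls
                      for g in ground_truth)
        if not matched:   # the detector fired where no annotated target of this class exists
            negatives.append((crop_square(image, (x1, y1, x2, y2)),
                              f"false detection - {cls}"))
    return negatives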
Step 300, obtaining target samples in the original SAR image dataset using the annotation information, and constructing an azimuth target sample set.
In one embodiment, the implementation of step 300 of obtaining target samples in the original SAR image dataset using the annotation information and constructing an azimuth target sample set includes:
Step 301, cropping target samples from the original SAR images according to the image aspect ratio using the annotation information of the original SAR image dataset, and resizing the target sample images. Specifically, the aspect ratio is 1:1, and the target sample images are resized to 64 × 64.
Step 302, constructing azimuth-target-category labels for the target samples according to the annotation information to obtain the azimuth target sample set. Specifically, an azimuth-target-category label, for example "30° - tanker", is constructed for each target sample image according to its azimuth and target category, yielding an azimuth target sample set of image / azimuth-and-target-category-label pairs.
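A companion sketch for step 300 is given below: it crops 1:1 target chips and attaches azimuth-target-category labels. The annotation record format (box, class_name, azimuth_degrees) is an assumption, not taken from the patent.

import cv2

def build_azimuth_set(image, annotations, size=64):
    """annotations: iterable of ((x1, y1, x2, y2), class_name, azimuth_degrees)."""
    samples = []
    for (x1, y1, x2, y2), cls, azimuth in annotations:
        side = max(x2 - x1, y2 - y1)                    # enforce a 1:1 crop around the target
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        xa, ya = max(cx - side // 2, 0), max(cy - side // 2, 0)
        chip = cv2.resize(image[ya:ya + side, xa:xa + side], (size, size))
        samples.append((chip, f"{int(azimuth)} deg - {cls}"))   # e.g. "30 deg - tanker"
    return samples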
Step 400, constructing a generative adversarial network based on a self-attention mechanism, and iteratively training the generative adversarial network with the negative sample set and the azimuth target sample set.
In one embodiment, referring to fig. 2, the implementation of step 400 of constructing the self-attention-based generative adversarial network and iteratively training it with the negative sample set and the azimuth target sample set includes:
Step 401, constructing a generator and adding a self-attention (SA) module.
Specifically, referring to fig. 3, the generator is composed of a fully connected layer and four upsampling deconvolution layers, and a self-attention SA module is added to each deconvolution layer. Referring to fig. 4, a feature map n extracted by the deconvolution layer is passed through three 1 × 1 convolutions (in fig. 4, from top to bottom, a first, a second, and a third 1 × 1 convolution); the output of the first 1 × 1 convolution is transposed, multiplied by the output of the second 1 × 1 convolution, and normalized with softmax to obtain a first attention feature map. The first attention feature map is then multiplied pixel by pixel with the output of the third 1 × 1 convolution to obtain the adaptive attention feature map o. Further, the adaptive attention feature map o is multiplied by a scaling parameter γ and added to the input feature map n, which is formulated as:
m = γ × o + n
where m denotes the feature map output by the self-attention SA module.
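The following PyTorch sketch shows a self-attention module of this kind, following the standard SAGAN formulation. Because fig. 4 is described only in prose, details such as the reduced channel width (channels // 8) and the use of batched matrix products to combine the third branch with the attention map are assumptions rather than the patent's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels // 8, kernel_size=1)  # first 1x1 branch
        self.g = nn.Conv2d(channels, channels // 8, kernel_size=1)  # second 1x1 branch
        self.h = nn.Conv2d(channels, channels, kernel_size=1)       # third 1x1 branch
        self.gamma = nn.Parameter(torch.zeros(1))                   # scaling parameter gamma

    def forward(self, n):
        b, c, hgt, wid = n.shape
        f = self.f(n).view(b, -1, hgt * wid)                        # (B, C/8, N)
        g = self.g(n).view(b, -1, hgt * wid)                        # (B, C/8, N)
        h = self.h(n).view(b, -1, hgt * wid)                        # (B, C,   N)
        attn = F.softmax(torch.bmm(f.transpose(1, 2), g), dim=-1)   # first attention feature map
        o = torch.bmm(h, attn).view(b, c, hgt, wid)                 # adaptive attention feature map o
        return self.gamma * o + n                                   # m = gamma x o + n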
Step 402, constructing a discriminator and adding the self-attention SA module.
Specifically, the discriminator is structurally symmetric to the generator: it is composed of four downsampling convolutional layers and two fully connected layers, and a self-attention SA module (see fig. 4) is likewise added to each convolutional layer.
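A possible skeleton of the generator and discriminator described above is sketched below, reusing the SelfAttention module (and the torch imports) from the previous sketch. The channel widths, normalization and activation choices are assumptions; only the overall topology, a fully connected layer plus four up- or down-sampling stages each with a self-attention module, follows the text.

class Generator(nn.Module):
    def __init__(self, in_dim=200, base=64):            # 100-d noise + 100-d encoded label
        super().__init__()
        self.fc = nn.Linear(in_dim, base * 8 * 4 * 4)    # project to a 4 x 4 feature map
        blocks, ch = [], base * 8
        for _ in range(4):                               # 4 -> 8 -> 16 -> 32 -> 64 pixels
            blocks.append(nn.Sequential(
                nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(ch // 2), nn.ReLU(inplace=True),
                SelfAttention(ch // 2)))
            ch //= 2
        self.blocks = nn.ModuleList(blocks)
        self.to_img = nn.Conv2d(ch, 1, 3, padding=1)     # single-channel 64 x 64 x 1 output

    def forward(self, z_and_label):
        x = self.fc(z_and_label).view(z_and_label.size(0), -1, 4, 4)
        for blk in self.blocks:
            x = blk(x)
        return torch.tanh(self.to_img(x))

class Discriminator(nn.Module):
    def __init__(self, in_ch=2, base=64):                # image channel + label channel
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (base, base * 2, base * 4, base * 8):   # 64 -> 32 -> 16 -> 8 -> 4 pixels
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True), SelfAttention(out_ch)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.fc = nn.Sequential(nn.Linear(base * 8 * 4 * 4, 128),   # two fully connected layers
                                nn.LeakyReLU(0.2, inplace=True),
                                nn.Linear(128, 1))

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))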
Step 403, generating a random vector, one-hot encoding the labels in the negative sample set and the azimuth target sample set, connecting the encoded labels with the random vector through a Concat layer, inputting the connected vector into the generator, and outputting a generated sample. Specifically, the length of the random vector is 100, the encoded label has the same length as the random vector, and the generated sample image output by the generator has a size of 64 × 64 × 1.
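A small sketch of the conditional generator input of step 403 is given below, continuing the previous sketches. Indexing the combined label space (false-detection categories plus azimuth / target-category combinations) from 0 and padding the one-hot code to the length of the random vector are assumptions consistent with the sizes stated above.

def generator_input(label_index, num_labels=100, z_dim=100):
    z = torch.randn(1, z_dim)                                        # random vector of length 100
    y = F.one_hot(torch.tensor([label_index]), num_labels).float()   # one-hot encoded label
    return torch.cat([z, y], dim=1)                                  # Concat layer -> 200-d vector

fake = Generator()(generator_input(label_index=3))                   # 1 x 1 x 64 x 64 generated sample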
Step 404, fixing the parameters of the generator, and updating the parameters of the discriminator.
Specifically, the implementation of step 404 of fixing the generator parameters and updating the parameters of the discriminator includes: connecting the encoded label with the generated sample output by the generator and inputting them into the discriminator; connecting the encoded label with the real sample in the original SAR image dataset corresponding to that label and inputting them into the discriminator; and updating the parameters of the discriminator to maximize the expectation of log D(x|y) + log(1 − D(G(z|y))), where D is the discriminator, G is the generator, x is the real sample, y is a label in the negative sample set or the azimuth target sample set, and z is the random vector. Here, the encoded label is a vector that is reshaped to an image of size 64 × 64 × 1, and both the generated sample and the real sample have size 64 × 64 × 1.
Step 405, fixing the parameters of the discriminator and updating the parameters of the generator.
Specifically, the implementation of step 405 of fixing the parameters of the discriminator and updating the parameters of the generator includes: connecting the encoded label with the generated sample output by the generator and inputting them into the discriminator; and updating the parameters of the generator to minimize the expectation of log(1 − D(G(z|y))), where D is the discriminator, G is the generator, y is a label in the negative sample set or the azimuth target sample set, and z is the random vector. Here, the encoded label is a vector that is reshaped to an image of size 64 × 64 × 1.
Step 406, looping steps 403-405 to iteratively train the generative adversarial network until the discriminator and the generator reach equilibrium, at which point training of the generative adversarial network is complete.
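Steps 404 and 405 can be condensed into a single training iteration as sketched below, continuing the previous sketches and using the non-saturating binary cross-entropy form of the min-max objective. The optimizer settings and the learned label-to-image embedding are assumptions; the patent only states that the encoded label is reshaped to 64 × 64 × 1.

bce = nn.BCEWithLogitsLoss()
G, D = Generator(), Discriminator()
embed = nn.Linear(100, 64 * 64)      # maps the encoded label to a 64 x 64 plane (an assumption)
opt_d = torch.optim.Adam(list(D.parameters()) + list(embed.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))

def label_plane(y_onehot):
    """Reshape the encoded label vector to an image of size 1 x 64 x 64."""
    return embed(y_onehot).view(-1, 1, 64, 64)

def train_step(real, y_onehot, z):
    x_in = torch.cat([real, label_plane(y_onehot)], dim=1)       # real sample | label
    fake = G(torch.cat([z, y_onehot], dim=1))                    # G(z | y)
    f_in = torch.cat([fake, label_plane(y_onehot)], dim=1)       # generated sample | label
    # Step 404: fix G, update D to maximize E[log D(x|y) + log(1 - D(G(z|y)))]
    loss_d = bce(D(x_in), torch.ones(real.size(0), 1)) + \
             bce(D(f_in.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Step 405: fix D, update G (non-saturating form of minimizing E[log(1 - D(G(z|y)))])
    loss_g = bce(D(f_in), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()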
Step 500, evaluating the quality of the samples generated by the generative adversarial network to obtain high-quality generated samples.
In one embodiment, the implementation of step 500 of evaluating the quality of the samples generated by the generative adversarial network includes: computing the mean, variance, equivalent number of looks, and radiometric resolution of the generated sample images finally output by the generative adversarial network and of the real sample images of the original SAR image dataset, and removing generated samples that differ greatly from the real samples in mean, variance, equivalent number of looks, or radiometric resolution, so as to obtain high-quality generated samples. Specifically, a generated sample is judged to differ greatly from the real samples in a given index if the difference between the two in mean, variance, equivalent number of looks, or radiometric resolution exceeds the corresponding preset threshold. Specifically, the mean is calculated as:
μ = (1 / (L × W)) × Σ_{i=1..L} Σ_{j=1..W} I_ij
where L is the pixel length of the image, W is the pixel width of the image, and I_ij is the pixel value at position (i, j) in the image; in this embodiment, L and W both take the value 64.
The variance is calculated as:
σ² = (1 / (L × W)) × Σ_{i=1..L} Σ_{j=1..W} (I_ij − μ)²
The equivalent number of looks is calculated as:
ENL = μ² / σ²
The radiometric resolution is calculated as:
γ_r = 10 × lg(1 + σ / μ)
where σ is the standard deviation of the image, i.e. the square root of the variance σ².
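The quality screening of step 500 can be sketched as follows. The sar_stats helper computes the four statistics for a single image chip; the threshold values are illustrative assumptions, since the patent only refers to preset thresholds.

import numpy as np

def sar_stats(img):
    """Mean, variance, equivalent number of looks and radiometric resolution of a chip."""
    mu = float(img.mean())
    var = float(img.var())
    enl = mu ** 2 / (var + 1e-12)                               # equivalent number of looks
    rad_res = 10 * np.log10(1 + np.sqrt(var) / (mu + 1e-12))    # radiometric resolution (dB)
    return np.array([mu, var, enl, rad_res])

def keep_high_quality(generated, real, thresholds=(10.0, 50.0, 1.0, 0.5)):
    """Keep generated chips whose statistics stay within the thresholds of the real-sample average."""
    ref = np.mean([sar_stats(r) for r in real], axis=0)
    return [g for g in generated
            if np.all(np.abs(sar_stats(g) - ref) <= np.array(thresholds))]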
Step 600, inserting the high-quality generated samples and the corresponding annotation information into the original SAR image dataset.
In one embodiment, in step 600 the high-quality generated samples are inserted into the original SAR image dataset using a Poisson fusion method, and the corresponding annotation information is added for the inserted samples to obtain a data-enhanced SAR image dataset, thereby realizing the amplification of the SAR data. Specifically, the generated samples include target sample images at multiple different azimuth angles as well as negative sample images.
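The Poisson fusion of step 600 can be performed with OpenCV's seamless cloning, as sketched below. The insertion position, the conversion of single-channel 8-bit SAR chips to the 3-channel images expected by cv2.seamlessClone, and the bounding-box annotation format are implementation assumptions.

import cv2
import numpy as np

def insert_chip(scene_gray, chip_gray, center_xy):
    """Poisson-fuse a 64 x 64 generated chip (uint8) into a SAR scene at center_xy = (x, y)."""
    scene = cv2.cvtColor(scene_gray, cv2.COLOR_GRAY2BGR)
    chip = cv2.cvtColor(chip_gray, cv2.COLOR_GRAY2BGR)
    mask = 255 * np.ones(chip_gray.shape, dtype=np.uint8)
    fused = cv2.seamlessClone(chip, scene, mask, center_xy, cv2.NORMAL_CLONE)
    h, w = chip_gray.shape
    x, y = center_xy
    bbox = (x - w // 2, y - h // 2, x + w // 2, y + h // 2)     # annotation for the inserted sample
    return cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY), bbox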
The method of the embodiment of the invention can realize SAR image data enhancement for target detection. The added angle labels and target-category labels are connected with the images to form a conditional constraint, so that the generative adversarial network is constrained to synthesize SAR image target samples of the specified azimuth angle and target category. Meanwhile, a set of negative samples that are easily falsely detected is generated and added to the original SAR images as an independent category, helping the target detection model distinguish positive and negative samples. The method can thus be applied to data enhancement of the training set for SAR image target recognition tasks and improves the generalization ability and accuracy of the model.
The sequence numbers of the above steps do not imply an order of execution; the execution order of each step is determined by its function and internal logic and does not limit the implementation of the embodiments of the invention in any way.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A SAR data amplification method for target detection, comprising:
acquiring an original SAR image dataset and its annotation information;
performing target detection on the original SAR image dataset, and constructing a negative sample set in combination with the annotation information;
obtaining target samples in the original SAR image dataset using the annotation information, and constructing an azimuth target sample set;
constructing a generative adversarial network based on a self-attention mechanism, and iteratively training the generative adversarial network with the negative sample set and the azimuth target sample set;
evaluating the quality of the samples generated by the generative adversarial network to obtain high-quality generated samples;
and inserting the high-quality generated samples and their corresponding annotation information into the original SAR image dataset;
wherein performing target detection on the original SAR image dataset and constructing the negative sample set in combination with the annotation information comprises:
training a target detector with the original SAR image dataset;
detecting targets in the original SAR image dataset with the trained target detector, and obtaining negative samples of false detections for each target category according to the annotation information;
adding positive samples of the falsely detected target categories to the negative samples of the corresponding categories in a set quantity ratio;
and constructing false-detection category labels for the negative samples to obtain the negative sample set;
and wherein constructing the generative adversarial network based on the self-attention mechanism and iteratively training it with the negative sample set and the azimuth target sample set comprises:
step 401, constructing a generator and adding a self-attention (SA) module;
step 402, constructing a discriminator and adding the self-attention SA module;
step 403, generating a random vector, one-hot encoding the labels in the negative sample set and the azimuth target sample set, connecting the encoded labels with the random vector through a Concat layer, inputting the connected vector into the generator, and outputting a generated sample;
step 404, fixing the parameters of the generator and updating the parameters of the discriminator;
step 405, fixing the parameters of the discriminator and updating the parameters of the generator;
step 406, looping steps 403-405 to iteratively train the generative adversarial network until the discriminator and the generator reach equilibrium.
2. The method of claim 1, wherein the annotation information comprises: a target category label and an azimuth label of each target in the original SAR image dataset.
3. The method of claim 1, wherein the target detector employs a YOLOv5s model.
4. The method of claim 1, wherein obtaining target samples in the original SAR image dataset using the annotation information and constructing the azimuth target sample set comprises:
cropping target samples from the original SAR images according to the image aspect ratio using the annotation information of the original SAR image dataset, and resizing the target sample images;
and constructing azimuth-target-category labels for the target samples according to the annotation information to obtain the azimuth target sample set.
5. The method of claim 1, wherein step 404, fixing the parameters of the generator and updating the parameters of the discriminator, comprises:
connecting the encoded label with a generated sample output by the generator and inputting them into the discriminator;
connecting the encoded label with a real sample in the original SAR image dataset corresponding to that label and inputting them into the discriminator;
updating the parameters of the discriminator to maximize the expectation of log D(x|y) + log(1 − D(G(z|y))), wherein D is the discriminator, G is the generator, x is the real sample, y is a label in the negative sample set and the azimuth target sample set, and z is a random vector.
6. The method of claim 1, wherein step 405, fixing the parameters of the discriminator and updating the parameters of the generator, comprises:
connecting the encoded label with a generated sample output by the generator and inputting them into the discriminator;
updating the parameters of the generator to minimize the expectation of log(1 − D(G(z|y))), wherein D is the discriminator, G is the generator, y is a label in the negative sample set and the azimuth target sample set, and z is a random vector.
7. The method of claim 1, wherein evaluating the quality of the samples generated by the generative adversarial network comprises:
computing the mean, variance, equivalent number of looks, and radiometric resolution of the generated samples finally output by the generative adversarial network and of the real samples of the original SAR image dataset, and removing generated samples that differ greatly from the real samples in mean, variance, equivalent number of looks, or radiometric resolution to obtain high-quality generated samples.
8. The method of claim 1, wherein inserting the high-quality generated samples and corresponding annotation information into the original SAR image dataset comprises:
inserting the high-quality generated samples into the original SAR image dataset using a Poisson fusion method, and adding the corresponding annotation information to the inserted samples to obtain a data-enhanced SAR image dataset.
Priority Applications (1)

CN202210900855.4A (priority date 2022-07-28, filing date 2022-07-28): SAR data amplification method for target detection, granted as CN114998749B, status Active.

Publications (2)

CN114998749A: published 2022-09-02
CN114998749B: granted 2023-04-07

Family

ID=83021507; single family application CN202210900855.4A (CN), status Active.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant