CN107886123A - A kind of synthetic aperture radar target identification method based on auxiliary judgement renewal learning - Google Patents


Info

Publication number
CN107886123A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711088179.0A
Other languages
Chinese (zh)
Other versions
CN107886123B (en)
Inventor
崔宗勇
唐翠
曹宗杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201711088179.0A priority Critical patent/CN107886123B/en
Publication of CN107886123A publication Critical patent/CN107886123A/en
Application granted granted Critical
Publication of CN107886123B publication Critical patent/CN107886123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention belongs to the technical field of radar remote sensing applications, and particularly relates to a synthetic aperture radar (SAR) target identification method based on auxiliary-decision update learning. The method trains an initial model with a small number of initial training samples; newly acquired unlabeled images serve as test samples, the recognition results are reused as training samples for the next round, and training iterates on the basis of the existing model until a recognition system with stable recognition performance is obtained. A convolutional neural network serves as the main body that extracts deep features of SAR targets for classification, combined with the auxiliary decision of an auxiliary classifier, so that newly added unlabeled SAR images can be applied directly to the existing classifier; repeated training on old samples is avoided and recognition efficiency is improved.

Description

Synthetic aperture radar target identification method based on auxiliary judgment update learning
Technical field
The invention belongs to the technical field of radar remote sensing application, and particularly relates to a synthetic aperture radar target identification method based on auxiliary judgment update learning.
Background
Synthetic aperture radar (hereinafter SAR) operates day and night and in all weather conditions, making it an important means of earth observation. SAR target recognition uses SAR image information to determine target attributes such as class and type; it has clear application demands in military fields such as battlefield reconnaissance and precision strike, and is one of the key technologies for improving the information-perception capability of SAR sensors and for putting SAR technology into practical use.
SAR target recognition performance is closely tied to the training samples. Target recognition requires a large number of class-labeled samples, which are expensive to obtain in human and material resources. Compared with optical imagery, SAR image samples are few in number and accumulate slowly over time, and many newly acquired SAR images carry no labels, so they are difficult to use directly for improving detector and classifier performance.
Moreover, when new SAR image samples arrive, the traditional approach assigns them class labels, adds them to the original sample set, and retrains on the whole set. This means repeated training on samples already learned, incurs a large overhead of redundant work, and lowers recognition efficiency. How to exploit newly added SAR images effectively, improving the performance of the SAR target recognition system while reducing training overhead, is therefore an important problem in SAR image interpretation.
Existing research on exploiting newly added samples to improve target recognition performance falls mainly into two lines. (1) A hierarchical model built with neural networks and organized logically as a tree: all classes to be recognized are grouped into superclasses, each superclass is assigned to a leaf model; when samples are added, each new sample triggers the root node to output the probability of the leaf model to which it belongs, the leaf model with the highest probability determines the recognized class, and only part of the subtrees need to be updated or added, so recognition performance improves using the new samples alone. (2) Lateral expansion of the convolutional neural network (CNN) structure: when samples are added, corresponding new CNNs are created, and all lateral networks are finally combined to produce the recognition result. These studies, however, are based on optical image data. The imaging mechanism of SAR differs greatly from that of ordinary optical sensors, so SAR images cannot be understood as intuitively as optical images; newly acquired SAR images carry no class labels, the information they convey can be confirmed only through training, and fully manual interpretation cannot meet the real-time requirements of some applications. In addition, the special SAR imaging mechanism introduces distortions absent from optical imagery, which makes SAR feature extraction difficult.
Disclosure of Invention
The invention aims to solve the above problems or deficiencies: to exploit newly added SAR image samples without class labels so as to improve the performance of an SAR target recognition system, while avoiding the overhead of repeated sample training. The method uses a CNN as the main body to extract deep features of SAR targets for classification, combined with an auxiliary classifier for auxiliary decision, so that newly added SAR images can be used directly to improve the existing classifier.
The technical scheme of the invention is shown in Fig. 2 and comprises the following steps:
step 1, constructing a CNN model.
The structure of the CNN is shown in Fig. 1. The activation function of the neural nodes is the rectified linear unit (ReLU).
A CNN can extract image target features at different depths. The convolutional layer of the CNN extracts different features of the input SAR image sample through the convolution operation of a convolution filter of size ω, the layer outputting:
S'_{i'j'} = Σ_{n=1..ω} Σ_{m=1..ω} S_{(i+n)(j+m)} · w_{nm}
where the convolution kernel slides with step 1, S is the input of the convolutional layer, S' is the input of the next layer, and w_{nm} is the parameter in row n, column m of the convolution kernel; the kernel size is changed by adjusting ω according to the size of the target to be identified in the SAR image.
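As a concrete illustration, the convolution output above can be sketched in a few lines of NumPy. This is a minimal stand-in with stride 1 and a square ω×ω kernel; the input values and kernel are invented for the example, not taken from the patent.

```python
import numpy as np

def conv2d_valid(S, w):
    """Valid 2-D convolution with stride 1:
    S'[i, j] = sum over n, m of S[i+n, j+m] * w[n, m] (no kernel flip)."""
    omega = w.shape[0]                            # square kernel of size omega
    H, W = S.shape
    out = np.zeros((H - omega + 1, W - omega + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(S[i:i + omega, j:j + omega] * w)
    return out

S = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "SAR patch"
w = np.ones((2, 2)) / 4.0                     # 2x2 averaging kernel
print(conv2d_valid(S, w))
```

A real CNN would vectorize this and learn w by back-propagation; the loop form only mirrors the summation term by term.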
The pooling layer follows the convolutional layer; the side length of the feature map output by the pooling layer is:
h_o = (h_i - ω_d)/stride + 1
where h_i is the side length of the input feature map, ω_d is the size of the pooling filter, and stride is the spacing of adjacent pooling filters.
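The feature-map size formula is easy to sanity-check in code; the sizes below are illustrative, and the sketch assumes the pooling tiles the input evenly.

```python
def pooled_size(h_in, omega_d, stride):
    """h_o = (h_i - omega_d) / stride + 1: output side length after pooling."""
    assert (h_in - omega_d) % stride == 0, "pooling must tile the input evenly"
    return (h_in - omega_d) // stride + 1

# e.g. a 128x128 MSTAR chip through 2x2 pooling with stride 2 gives 64x64
print(pooled_size(128, 2, 2))
```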
after passing through the plurality of convolution layers and pooling layers, the fully connected layers are connected. Each neuron in the full-connection layer is fully connected with all neurons in the previous layer. The elements of each size LxW feature map are weighted and summed, i.e.Wherein k isijAs a parameter of the ith row and j column of the filter, enmThe element of the n-th row and m-th column of the feature map has a feature matrix of X ═ X1x2x3...xn]TAnd finally, the output value of the full connection layer of the last layer is transmitted to an output layer, and the output probability matrix is obtained by classification through Softmax logistic regression:
wherein,as a parameter of Softmax, y is the convolution neural network to SAR imageAnd identifying the target category.
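The Softmax output matrix h_θ(X) can be reproduced with a short NumPy sketch; the θ and X values below are made-up toy numbers, not trained parameters.

```python
import numpy as np

def softmax_output(theta, X):
    """h_theta(X): exp(theta_i^T X) normalized by the sum over all T classes."""
    logits = theta @ X                   # one score per class
    e = np.exp(logits - logits.max())    # subtract the max for numerical stability
    return e / e.sum()

theta = np.array([[0.2, -0.1],           # 3 classes, 2-dimensional feature X
                  [0.0,  0.3],
                  [-0.4, 0.1]])
X = np.array([1.0, 2.0])
h = softmax_output(theta, X)
print(h, h.sum())                        # entries are probabilities summing to 1
```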
Step 2: use the original image set as the initial training samples to obtain an initial CNN model for SAR image target recognition and an auxiliary classifier with high recognition accuracy; the original image set contains a small number of SAR image samples with class labels.
Step 3: feed the unlabeled SAR image sample set to be recognized into the CNN model and the auxiliary classifier obtained in step 2 for target classification, yielding the respective probability recognition matrices h_θ(x) and h_Assist.
Step 4: obtain the final classification result and label matrix l from the probability recognition matrices h_θ(x) and h_Assist by a decision method, specifically as follows.
This step makes full use of the auxiliary classifier through the decision method, overcoming the low performance of the convolutional neural network at the initial stage so that its performance improves steadily and gradually.
Let the output probability recognition matrix for the n classes of SAR image samples be:
h = [p_1, p_2, ..., p_n]
where p_1, p_2, ..., p_n satisfy the constraints:
p_1 + p_2 + ... + p_n = 1, with p_1, p_2, ..., p_n ≥ 0
The class label matrix l_c is obtained with the floor (Gaussian integer) function, where c denotes the classifier:
l_c = ⌊h / max(h)⌋
Taking the Hadamard product of l_CNN and l_Assist, the class label matrices obtained by the convolutional neural network and the auxiliary classifier respectively, gives the final output class label matrix:
l = l_CNN * l_Assist
An SAR image sample whose class label matrix l is non-zero is taken as a newly added training sample, and the class corresponding to its maximum probability is taken as its class label.
Step 5: use the samples retained after the decision, together with their corresponding labels, as newly added image training samples to train the auxiliary classifier, and at the same time update the parameters of the CNN model with the error back-propagation algorithm.
Let P_i and P_{i+1} denote the recognition accuracy of the recognition system after the i-th and (i+1)-th rounds of update learning. When the conditions
|P_{i+1} - P_i| ≤ Ω and P_i ≥ Θ
are satisfied, the update-learning iteration can be stopped, retaining the recognition system corresponding to max(P_i, P_{i+1}). Here Ω and Θ are values set according to actual requirements.
Step 6: repeat steps 3, 4 and 5 until a recognition system with stable and reliable recognition performance is obtained.
Beneficial effects of the invention: the method uses a CNN as the main body to extract deep features of SAR targets for classification, combined with the auxiliary decision of an auxiliary classifier, so that newly added unlabeled SAR images can be applied directly to the existing classifier; repeated sample training is avoided and recognition efficiency is improved.
Drawings
Fig. 1 is a diagram of a convolutional neural network structure.
Fig. 2 is a diagram of an update learning framework.
Fig. 3 is an MSTAR tank raw image.
Fig. 4 is a verification diagram of the learning and recognition performance of CNN combined with SVM update.
Detailed Description
The technical solution of the present invention is described in detail below with reference to examples.
Examples
The embodiment of the invention uses MSTAR image data; a brief description of MSTAR follows.
The MSTAR (Moving and Stationary Target Acquisition and Recognition) program, initiated in 1994, is a joint SAR ATR research effort of the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The experimental data are the spotlight-mode MSTAR SAR image set of ground military vehicles, with image resolution 0.3 m × 0.3 m and pixel size 128 × 128. MSTAR data has become a standard database for evaluating SAR target recognition and classification algorithms; most SAR target recognition and classification algorithms published in authoritative journals and conferences are tested and evaluated on MSTAR data.
The MSTAR image in Fig. 3 is 128 × 128 and contains three regions: tank, shadow, and background.
The invention aims to give the SAR target recognition system update-learning capability, effectively using newly added SAR images with unknown labels to improve classifier performance. The training samples are therefore divided into initial samples and newly added samples; the newly added samples serve as test samples and are split into several batches, simulating the batch-wise acquisition of samples in practical applications. The test samples are unlabeled; after testing, the correctly judged samples and their labels become the next round's training samples, while the CNN model from the previous round serves as the initial CNN model of the next round, and the network parameters are updated continually on that basis.
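The batch-wise loop just described can be sketched with a toy stand-in model. A nearest-centroid classifier replaces the CNN/auxiliary pair here and all data are synthetic; only the control flow mirrors the patent: each unlabeled batch is self-labeled by the current model, and the existing model is fine-tuned instead of retrained from scratch.

```python
import numpy as np

def predict(centroids, X):
    """Assign each sample to its nearest class centroid (toy classifier)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def update_learning(centroids, batches):
    """Warm-start update: self-label each batch (steps 3-4), then nudge
    the model toward the accepted samples (step 5)."""
    for X in batches:                        # unlabeled batches arrive over time
        y = predict(centroids, X)            # self-labeling with the current model
        for c in range(len(centroids)):      # incremental per-class update
            if np.any(y == c):
                centroids[c] = 0.5 * centroids[c] + 0.5 * X[y == c].mean(axis=0)
    return centroids

rng = np.random.default_rng(0)
start = np.array([[0.0, 0.0], [5.0, 5.0]])
batches = [rng.normal(m, 0.3, size=(20, 2)) for m in ([0.2, 0.2], [4.8, 4.8])]
print(update_learning(start.copy(), batches))
```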
Table 1 records six updates of the CNN model parameters, where the test data sets are Set1 to Set6; a data set marked with '*' indicates that part of its image samples come from the original data set and are labeled, i.e. part of each test set actually serves as training samples for the update-learned CNN model. To simulate the case of few initial samples, only twenty sample images of the ten-class MSTAR targets were selected in the first part of the experiment.
TABLE 1 Update procedure for the CNN model parameters

| Update batch      | 1              | 2       | 3       | 4       | 5       | 6       |
| Initial CNN model | Random         | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 |
| Training data set | Seed image set | Set1*   | Set2*   | Set3*   | Set4*   | Set5*   |
| CNN model         | Model 1        | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 |
| Test data set     | Set1           | Set2    | Set3    | Set4    | Set5    | Set6    |
Six tests were run in total, each test set containing 1000 SAR image samples. The specific classes and corresponding numbers are shown in Table 2.
TABLE 2 Number of target samples in the test sample

| Target type | 2S1 | BMP2 | BRDM2 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 |
| Number      | 90  | 195  | 90    | 65    | 65    | 90 | 135 | 135 | 90     | 90     |
The experiment comprises three parts: (1) update learning without auxiliary decision, in which the result of each test on the CNN model is used directly as the newly added training sample set, training continues on the existing network model, and the process is repeated to realize update learning; (2) manually assisted update learning, in which the samples misrecognized by the CNN on the test set are removed by hand, and the correctly classified samples serve as the next newly added training samples; (3) update learning with SVM-assisted decision, i.e. the method proposed by the invention, in which the newly added training sample set is selected by the joint decision and the process is repeated to realize update learning. The recognition accuracy and the number of error samples of each test were recorded; the recognition performance is shown in Tables 3, 4 and 5.
TABLE 3 update learning without auxiliary decisions
TABLE 4 human-assisted update learning
TABLE 5 update learning of SVM-assisted decisions
In the third part of the experiment, the number of misrecognized samples in each update batch was counted. As shown in Table 6, this number decreases as the number of update batches increases. The first five test sets were retested with the recognition model obtained in the fifth test; as shown in Fig. 4, the model achieves high recognition accuracy on every test set.
TABLE 6 update of SVM-aided decisions learns the number of misrecognized samples at each stage on each target
The experimental results show that update learning without auxiliary decision cannot raise the recognition accuracy: as update batches increase, accumulated mislabeled samples continually degrade the CNN's recognition performance. With manually assisted update learning, because the mislabeled samples are removed by hand, the CNN's recognition performance gradually stabilizes as update batches grow. Update learning with SVM-assisted decision raises the recognition accuracy to 89%, which is 2.1% higher than the update-learning method that removes mislabeled samples by hand.
Experiments prove that, in SAR image target recognition applications, the invention can use newly added unlabeled images to continually improve the recognition performance of the system.

Claims (2)

1. A synthetic aperture radar target identification method based on auxiliary decision update learning is characterized by comprising the following steps:
s1, building a convolutional neural network structure, wherein the neural network structure comprises a convolutional layer, a pooling layer, a full-link layer and a softmax classifier; adopting an activation function at a node of the convolutional neural network;
the convolutional neural network model has the following characteristics:
the convolution layer of the convolutional neural network extracts different characteristics of the input SAR image sample through the convolution operation of a convolution filter with the size of omega, and the convolution layer outputs:
<mrow> <msup> <msub> <mi>S</mi> <mrow> <msup> <mi>i</mi> <mo>&amp;prime;</mo> </msup> <msup> <mi>j</mi> <mo>&amp;prime;</mo> </msup> </mrow> </msub> <mo>&amp;prime;</mo> </msup> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>n</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>&amp;omega;</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>m</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>&amp;omega;</mi> </munderover> <msub> <mi>S</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>+</mo> <mi>n</mi> <mo>)</mo> <mo>(</mo> <mi>j</mi> <mo>+</mo> <mi>m</mi> <mo>)</mo> </mrow> </msub> <msub> <mi>w</mi> <mrow> <mi>n</mi> <mi>m</mi> </mrow> </msub> </mrow>
where the convolution kernel has a sliding step of S1, S is the input, S' of the convolution layer is the input of the next level, wnmThe n row and m column parameters representing the convolution kernel; adjusting omega by the size of the target to be identified in the SAR image to change the size of a convolution kernel;
the pooling layer follows the convolutional layer, and the characteristic graph size output by the pooling layer is as follows:
ho=(hid)/stride+1
wherein, ω isdStride represents the spacing of adjacent pooled filters, which is the size of the pooled filters;
after passing through a plurality of convolution layers and pooling layers, connecting the full connection layers; each neuron in the full connection layer is fully connected with all neurons in the previous layer; adding elements in each characteristic diagram with the size of L multiplied by WSum of weights, i.e.Wherein k isijAs a parameter of the ith row and j column of the filter, enmThe element of the n-th row and m-th column of the feature map has a feature matrix of X ═ X1x2x3...xn]TAnd obtaining an output probability matrix through a Softmax classifier:
<mrow> <msub> <mi>h</mi> <mi>&amp;theta;</mi> </msub> <mrow> <mo>(</mo> <mi>X</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <mi>p</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>|</mo> <mi>X</mi> <mo>,</mo> <mi>&amp;theta;</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>p</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mn>2</mn> <mo>|</mo> <mi>X</mi> <mo>,</mo> <mi>&amp;theta;</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>...</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>p</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>=</mo> <mi>T</mi> <mo>|</mo> <mi>X</mi> <mo>,</mo> <mi>&amp;theta;</mi> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <msubsup> <mi>&amp;Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>T</mi> </msubsup> <msup> <mi>e</mi> <mrow> <msup> <msub> <mi>&amp;theta;</mi> <mi>i</mi> </msub> <mi>T</mi> </msup> <mi>X</mi> </mrow> </msup> </mrow> </mfrac> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msup> <mi>e</mi> <mrow> <msup> <msub> <mi>&amp;theta;</mi> <mn>1</mn> </msub> <mi>T</mi> </msup> <mi>X</mi> </mrow> </msup> </mtd> </mtr> <mtr> <mtd> <msup> <mi>e</mi> <mrow> <msup> <msub> <mi>&amp;theta;</mi> <mn>2</mn> </msub> <mi>T</mi> </msup> <mi>X</mi> </mrow> </msup> </mtd> </mtr> <mtr> <mtd> <mn>...</mn> </mtd> </mtr> <mtr> <mtd> <msup> <mi>e</mi> <mrow> <msup> <msub> <mi>&amp;theta;</mi> <mi>T</mi> </msub> <mi>T</mi> </msup> <mi>X</mi> </mrow> </msup> </mtd> </mtr> </mtable> </mfenced> </mrow>
wherein,the parameter is Softmax, and y is the recognition result of the convolutional neural network on the target type in the SAR image;
s2, taking the original image set as an initial training sample to obtain an initial SAR image target recognition convolutional neural network model and an auxiliary classifier with high recognition accuracy; the original image set contains a small number of SAR image samples with classification labels;
s3, sending the unlabeled SAR image sample set to be identified into a convolutional neural network model and an auxiliary classifier for target classification to obtain respective probability identification matrix hθ(x) And hAssist
S4: obtain the final classification result and label matrix l from the probability recognition matrices h_θ(x) and h_Assist by a decision method, specifically:
let the output probability recognition matrix of the n classes of SAR image samples be:
h = [p_1, p_2, ..., p_n]
where p_1, p_2, ..., p_n satisfy the constraints:
p_1 + p_2 + ... + p_n = 1, with p_1, p_2, ..., p_n ≥ 0
obtain the class label matrix l_c with the floor (Gaussian integer) function, c denoting the classifier:
l_c = ⌊h / max(h)⌋
take the Hadamard product of l_CNN and l_Assist, the class label matrices obtained by the convolutional neural network and the auxiliary classifier respectively, to obtain the final output class label matrix:
l = l_CNN * l_Assist
an SAR image sample whose class label matrix l is non-zero is taken as a newly added training sample, and the class corresponding to its maximum probability as its class label;
s5, taking the samples obtained after judgment and the labels corresponding to the samples as newly added image training sample training auxiliary classifiers, and meanwhile, updating the parameters of the convolutional neural network by combining an error back propagation algorithm;
s6, repeating the steps S2-S5 until obtaining a recognition system with stable and reliable recognition efficiency;
let PiAnd Pi+1Respectively representing the recognition accuracy of the recognition system after the ith and (i + 1) th update learning; when the following conditions are satisfied:
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mo>|</mo> <msub> <mi>P</mi> <mrow> <mi>i</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <msub> <mi>P</mi> <mi>i</mi> </msub> <mo>|</mo> <mo>&amp;le;</mo> <mi>&amp;Omega;</mi> </mtd> </mtr> <mtr> <mtd> <msub> <mi>P</mi> <mi>i</mi> </msub> <mo>&amp;GreaterEqual;</mo> <mi>&amp;Theta;</mi> </mtd> </mtr> </mtable> </mfenced>
at this time, the updating of the learning iterative process can be stopped, and max (P) is reservedi,Pi+1) A corresponding recognition system; wherein Ω and Θ are values set according to actual requirements.
2. The synthetic aperture radar target recognition method based on auxiliary-decision update learning of claim 1, wherein the activation function in step S1 comprises the Sigmoid function f(x) = (1 + e^{-x})^{-1}, the hyperbolic tangent function f(x) = tanh(x), and the rectified linear unit f(x) = max(0, x).
CN201711088179.0A 2017-11-08 2017-11-08 synthetic aperture radar target identification method based on auxiliary judgment update learning Active CN107886123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711088179.0A CN107886123B (en) 2017-11-08 2017-11-08 synthetic aperture radar target identification method based on auxiliary judgment update learning


Publications (2)

Publication Number Publication Date
CN107886123A true CN107886123A (en) 2018-04-06
CN107886123B CN107886123B (en) 2019-12-10

Family

ID=61779291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711088179.0A Active CN107886123B (en) 2017-11-08 2017-11-08 synthetic aperture radar target identification method based on auxiliary judgment update learning

Country Status (1)

Country Link
CN (1) CN107886123B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564094A (en) * 2018-04-24 2018-09-21 河北智霖信息科技有限公司 A kind of Material Identification method based on convolutional neural networks and classifiers combination
CN108647707A (en) * 2018-04-25 2018-10-12 北京旋极信息技术股份有限公司 Probabilistic neural network creation method, method for diagnosing faults and device, storage medium
CN108664933A (en) * 2018-05-11 2018-10-16 中国科学院遥感与数字地球研究所 The training method and its sorting technique of a kind of convolutional neural networks for SAR image ship classification, ship classification model
CN108931771A (en) * 2018-06-06 2018-12-04 电子科技大学 A kind of method for tracking target based on synthetic aperture radar image-forming technology
CN109492556A (en) * 2018-10-28 2019-03-19 北京化工大学 Synthetic aperture radar target identification method towards the study of small sample residual error
CN109934282A (en) * 2019-03-08 2019-06-25 哈尔滨工程大学 A kind of SAR objective classification method expanded based on SAGAN sample with auxiliary information
CN110708469A (en) * 2018-07-10 2020-01-17 北京地平线机器人技术研发有限公司 Method and device for adapting exposure parameters and corresponding camera exposure system
CN111832580A (en) * 2020-07-22 2020-10-27 西安电子科技大学 SAR target identification method combining few-sample learning and target attribute features
CN111898661A (en) * 2020-07-17 2020-11-06 交控科技股份有限公司 Method and device for monitoring working state of turnout switch machine
CN111967479A (en) * 2020-07-27 2020-11-20 广东工业大学 Image target identification method based on convolutional neural network idea
CN112237087A (en) * 2019-07-19 2021-01-19 迪尔公司 Crop residue based field work adjustment
CN114926745A (en) * 2022-05-24 2022-08-19 电子科技大学 Small-sample SAR target identification method based on domain feature mapping
CN114966596A (en) * 2022-05-23 2022-08-30 哈尔滨工业大学 Ionized layer clutter recognition method based on multi-machine learning and hierarchical classification

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976376A (en) * 2016-05-09 2016-09-28 电子科技大学 High resolution SAR image target detection method based on part model
CN106407986A (en) * 2016-08-29 2017-02-15 电子科技大学 Synthetic aperture radar image target identification method based on depth model
CN106874889A (en) * 2017-03-14 2017-06-20 西安电子科技大学 Multiple features fusion SAR target discrimination methods based on convolutional neural networks


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SIZHE CHEN ET AL: "Target Classification Using the Deep Convolutional Networks for SAR Images", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 *
ZONGJIE CAO ET AL: "Automatic target recognition with joint sparse representation of heterogeneous multi-view SAR images over a locally adaptive dictionary", 《SIGNAL PROCESSING》 *
TIAN ZHUANGZHUANG ET AL: "Research on SAR image target recognition based on convolutional neural networks", 《雷达学报》 (JOURNAL OF RADARS) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564094B (en) * 2018-04-24 2021-09-14 河北智霖信息科技有限公司 Material identification method based on combination of convolutional neural network and classifier
CN108564094A (en) * 2018-04-24 2018-09-21 河北智霖信息科技有限公司 Material identification method based on combination of convolutional neural network and classifier
CN108647707A (en) * 2018-04-25 2018-10-12 北京旋极信息技术股份有限公司 Probabilistic neural network creation method, fault diagnosis method and device, and storage medium
CN108647707B (en) * 2018-04-25 2022-09-09 北京旋极信息技术股份有限公司 Probabilistic neural network creation method, fault diagnosis method and device, and storage medium
CN108664933A (en) * 2018-05-11 2018-10-16 中国科学院遥感与数字地球研究所 Training method of a convolutional neural network for SAR image ship classification, classification method, and ship classification model
CN108664933B (en) * 2018-05-11 2021-12-28 中国科学院空天信息创新研究院 Training method of a convolutional neural network for SAR image ship classification, classification method, and ship classification model
CN108931771A (en) * 2018-06-06 2018-12-04 电子科技大学 Target tracking method based on synthetic aperture radar imaging technology
CN110708469A (en) * 2018-07-10 2020-01-17 北京地平线机器人技术研发有限公司 Method and device for adapting exposure parameters and corresponding camera exposure system
CN109492556A (en) * 2018-10-28 2019-03-19 北京化工大学 Synthetic aperture radar target recognition method based on small-sample residual learning
CN109934282A (en) * 2019-03-08 2019-06-25 哈尔滨工程大学 SAR target classification method based on SAGAN sample expansion and auxiliary information
CN112237087A (en) * 2019-07-19 2021-01-19 迪尔公司 Crop-residue-based field work adjustment
CN111898661A (en) * 2020-07-17 2020-11-06 交控科技股份有限公司 Method and device for monitoring working state of turnout switch machine
CN111832580A (en) * 2020-07-22 2020-10-27 西安电子科技大学 SAR target recognition method combining few-sample learning and target attribute features
CN111832580B (en) * 2020-07-22 2023-07-28 西安电子科技大学 SAR target recognition method combining few-sample learning and target attribute features
CN111967479A (en) * 2020-07-27 2020-11-20 广东工业大学 Image target recognition method based on convolutional neural network principles
CN114966596A (en) * 2022-05-23 2022-08-30 哈尔滨工业大学 Ionospheric clutter recognition method based on multiple machine learning and hierarchical classification
CN114926745A (en) * 2022-05-24 2022-08-19 电子科技大学 Small-sample SAR target recognition method based on domain feature mapping
CN114926745B (en) * 2022-05-24 2023-04-25 电子科技大学 Small-sample SAR target recognition method based on domain feature mapping

Also Published As

Publication number Publication date
CN107886123B (en) 2019-12-10

Similar Documents

Publication Publication Date Title
CN107886123B (en) Synthetic aperture radar target identification method based on auxiliary judgment update learning
CN110929603B (en) Weather image recognition method based on lightweight convolutional neural network
CN112308158B (en) Multi-source domain adaptation model and method based on partial feature alignment
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN110210486B (en) Generative adversarial transfer learning method based on sketch annotation information
CN106407986B (en) Synthetic aperture radar image target identification method based on depth model
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual network
CN103955702B (en) SAR image terrain classification method based on deep RBF network
CN106682694A (en) Sensitive image identification method based on deep learning
CN110197205A (en) Image recognition method based on multi-feature-source residual network
CN111160217B (en) Method and system for generating adversarial examples for a pedestrian re-identification system
CN106874688A (en) Intelligent lead compound discovery method based on convolutional neural networks
CN113222011B (en) Small sample remote sensing image classification method based on prototype correction
CN106203625A (en) Deep neural network training method based on multiple pre-training
CN107528824B (en) Deep belief network intrusion detection method based on two-dimensional sparsification
CN112766283B (en) Two-phase flow pattern identification method based on multi-scale convolution network
CN107423747A (en) Saliency object detection method based on deep convolutional network
CN108416270A (en) Traffic sign recognition method based on multi-attribute joint features
CN112200262B (en) Small sample classification training method and device supporting multi-task and cross-task learning
CN110543916A (en) Method and system for classifying missing multi-view data
CN108596044B (en) Pedestrian detection method based on deep convolutional neural network
CN111832580B (en) SAR target recognition method combining few-sample learning and target attribute features
CN108229557A (en) Accelerated training method and system for a neural network with labels
CN109063750B (en) SAR target classification method based on CNN and SVM decision fusion
CN113255814A (en) Edge-computing-oriented image classification method based on feature selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant