CN109993050B - Synthetic aperture radar image identification method - Google Patents

Synthetic aperture radar image identification method

Info

Publication number
CN109993050B
Authority
CN
China
Prior art keywords
layer
order
convolution
size
turning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811430191.XA
Other languages
Chinese (zh)
Other versions
CN109993050A (en)
Inventor
占荣辉
田壮壮
张军
欧建平
陈诗琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201811430191.XA
Publication of CN109993050A
Application granted
Publication of CN109993050B
Status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/13: Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a synthetic aperture radar image identification method, which aims to solve the inaccurate identification of existing SAR image identification methods. The technical scheme is based on a convolutional neural network into whose convolutional layers a per-kernel weight is introduced. First, a neural network model is constructed according to the SAR image size, and an SAR image is propagated forward through the network to obtain a probability prediction; the probability prediction is compared with the class label of the image to obtain a target loss function; parameters of the neural network model are then adjusted with a back-propagation algorithm. Finally, the SAR image to be identified is identified with the trained network to obtain the identification result. The method avoids the dependence of the identification process on expert experience, and thus avoids the subjectivity and one-sidedness of manual identification. In addition, because the invention introduces a weight for each convolution kernel in the convolutional layers and adjusts these weights adaptively, the accuracy of target classification and identification is further improved.

Description

Synthetic aperture radar image identification method
Technical Field
The invention belongs to the field of image processing, and relates to a Synthetic Aperture Radar (SAR) image recognition method, in particular to an SAR image recognition method based on a deep learning framework.
Background
Synthetic Aperture Radar (SAR) has the characteristics of high resolution, long range, all-time and all-weather operation. Compared with conventional radar, this system expands the dimensionality of radar imaging and provides two-dimensional scattering information of a target, so SAR is widely applied in fields such as battlefield monitoring, weapon guidance and topographic mapping. Conventional SAR image recognition is mainly implemented by feature extraction and pattern classification based on manual experience. For example, document 1: Qun Zhao, Jose C. Principe, "Support vector machines for SAR automatic target recognition", IEEE Transactions on Aerospace and Electronic Systems, 2001, 37(2): 643-655, proposes using a Support Vector Machine (SVM) to recognize and classify targets.
Document 2: Jayaraman J. Thiagarajan, Karthikeyan N. Ramamurthy, et al., "Sparse representations for automatic target classification in SAR images", 4th International Symposium on Communications, Control and Signal Processing (ISCCSP), 2010: 1-4, proposes constructing a sparse-representation dictionary from a normalized set of training vectors, and using the dictionary to find a local linear approximation of the underlying class manifold so as to compute a sparse representation of the test set.
Document 3: Ganggang Dong, Na Wang, et al., "Sparse representation of monogenic signal: with application to target recognition in SAR images", IEEE Signal Processing Letters, 2014, 21(8): 952-956, proposes extracting the monogenic signal of the SAR image, generating an enhanced monogenic feature vector by uniformly down-sampling, normalizing and cascading the monogenic components, and finally inputting the monogenic feature vector to a Sparse Representation Classifier (SRC) for training and testing.
Document 4: Ganggang Dong, Gangyao Kuang, "Target recognition in SAR image via classification on Riemannian manifolds", IEEE Geoscience and Remote Sensing Letters, 2015, 12(1): 199-203, first represents the target image through a monogenic signal and constructs a covariance matrix by computing the correlations among the monogenic components. Owing to the symmetry and positive definiteness of the covariance matrix, a connected Riemannian manifold can be constructed and mapped into a tangent vector space. The resulting tangent vectors are then used as feature vectors and classified with a Sparse Representation Classifier (SRC).
The above methods require manually extracted target features and a hand-selected classifier, so their recognition performance depends heavily on manual experience; moreover, feature extraction and classifier design remain two relatively independent stages. As a data-driven method, a convolutional neural network can automatically learn effective feature information from data through training and simultaneously complete the classification and identification of targets.
Document 5: Sizhe Chen, Haipeng Wang, et al., "Target classification using the deep convolutional networks for SAR images", IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806-4817, proposes using a fully convolutional neural network to reduce the number of trainable parameters and thereby avoid overfitting when the training set is undersized.
Document 6: Jun Ding, Bo Chen, et al., "Convolutional neural network with data augmentation for SAR target recognition", IEEE Geoscience and Remote Sensing Letters, 2016, 13(3): 364-368, proposes combating the shortage of SAR training samples by augmenting the data before training a convolutional neural network.
Although convolutional neural networks have been applied to SAR image recognition, the influence of the relationship between different convolution kernels on the generated feature maps has not been considered, which limits further improvement of the recognition performance. The synthetic aperture radar image identification method provided by the invention is based on a convolutional neural network and constructs the relationship between convolution kernels by introducing a kernel weight into the convolutional layer, thereby strengthening effective features, suppressing ineffective features, and improving the classification performance of the convolutional neural network. No published document introduces convolution kernel weights into the convolutional layers of a convolutional neural network or applies them to SAR image recognition.
Disclosure of Invention
The invention aims to provide a synthetic aperture radar image identification method that solves the inaccurate identification caused by existing convolutional-neural-network-based SAR image identification methods not considering the relationship between different convolution kernels.
The invention introduces a weight for each convolution kernel in the convolutional layers of a convolutional neural network. First, a neural network model is constructed according to the SAR image size, and an SAR image is propagated forward through the network to obtain a probability prediction; the probability prediction is compared with the class label of the image to obtain a target loss function; parameters of the neural network model are then adjusted with a back-propagation algorithm. Finally, the SAR image to be identified is identified with the trained network to obtain the identification result.
The invention mainly comprises the following steps:
In the first step, an SAR image database for training is constructed. The database consists of N SAR images and their category labels, where N is the number of SAR images in the database and each SAR image contains exactly one target. The SAR images are denoted G_1, …, G_n, …, G_N; G_n has size W_n × H_n, where W_n is the width and H_n the height of G_n. The class labels are denoted L_1, …, L_n, …, L_N; L_n is a one-dimensional matrix containing C elements corresponding to the C target classes, in which the element representing the true class is assigned 1 and the remaining elements are assigned 0. C is the number of image categories, a positive integer with C ≤ N.
In the second step, G_1, …, G_n, …, G_N are preprocessed as follows:
2.1 Initialize the variable n = 1.
2.2 If all pixels of G_n are complex-valued data, go to 2.3; if all pixels of G_n are real-valued data, go to 2.4.
2.3 Take the modulus of each of the W_n × H_n pixels of G_n to convert it to a real number:
2.3.1 Let the row variable p = 1.
2.3.2 Let the column variable q = 1.
2.3.3 Let G_n(p, q) = sqrt(a² + b²), where G_n(p, q) is the value of G_n at point (p, q), a is the real part and b the imaginary part of the complex datum at (p, q).
2.3.4 q = q + 1.
2.3.5 Judge whether q ≤ W_n; if so, go to 2.3.3; if not, go to 2.3.6.
2.3.6 p = p + 1.
2.3.7 Judge whether p ≤ H_n; if so, go to 2.3.2; if not, G_n has been converted to a real image, so go to 2.4.
2.4 Manually determine for G_1, …, G_n, …, G_N the position of the target and the target size W_Tn × H_Tn (W_Tn is the width and H_Tn the height of the target in G_n).
2.5 Crop G_1, …, G_n, …, G_N so that all SAR images have a uniform size W_G × H_G, where W_G and H_G are positive integers. The method is: take 1.1 to 2 times the maximum of the W_Tn as W_G, take 1.1 to 2 times the maximum of the H_Tn as H_G, and crop each image to W_G × H_G centered on the target.
2.6 Use a random-sequence generation function (e.g., the randperm function in MATLAB) to generate a random sequence rn_1, …, rn_n, …, rn_N of the numbers 1 to N. Using this sequence as an index, read G_1, …, G_N and L_1, …, L_N so that the image read at position n is G_{rn_n} and the label read is L_{rn_n}, obtaining a random image sequence G'_1, …, G'_n, …, G'_N and random class labels L'_1, …, L'_n, …, L'_N.
2.7 Group G'_1, …, G'_N and L'_1, …, L'_N so that each group contains in images and in labels, with 1 ≤ in ≤ N and in ∈ [1, 64]. G'_1, …, G'_N are divided into BN groups, and L'_1, …, L'_N are likewise divided into BN groups, where BN = ⌈N / in⌉ and ⌈·⌉ denotes rounding up.
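For concreteness, the second step can be sketched in a few lines of NumPy. The sketch is illustrative only: the function names are invented here, the crop assumes the target has already been centered (the manual localization of step 2.4 is outside the sketch), and the grouping follows 2.7.

```python
import numpy as np

def preprocess(images, labels, w_g=96, h_g=96, group_size=30, rng=None):
    """Second step, sketched: magnitude (2.2-2.3), center-crop (2.5),
    shuffle (2.6), and group (2.7). images: list of 2-D arrays (complex
    or real); labels: list of one-hot vectors."""
    if rng is None:
        rng = np.random.default_rng()
    real = [np.abs(g) if np.iscomplexobj(g) else g for g in images]
    cropped = []
    for g in real:
        top, left = (g.shape[0] - h_g) // 2, (g.shape[1] - w_g) // 2
        cropped.append(g[top:top + h_g, left:left + w_g])
    order = rng.permutation(len(cropped))              # random index sequence
    shuffled = [(cropped[i], labels[i]) for i in order]
    return [shuffled[i:i + group_size]                 # BN = ceil(N / in) groups
            for i in range(0, len(shuffled), group_size)]
```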
In the third step, a neural network model is constructed according to W_G × H_G. The model contains at least one convolutional layer; pooling layers may be constructed to reduce the dimension of the feature maps during propagation, discard (dropout) layers may be constructed to improve the robustness of the model, and fully connected layers may be constructed to obtain a better mapping of the final features.
3.1 Initialize the layer counters: let the convolutional-layer counter cn = 1, the pooling-layer counter pn = 1, the discard-layer counter dn = 1, and the fully-connected-layer counter fn = 1.
3.2 Calculate the size of the convolutional layer output feature map:
3.2.1 If cn = 1, let CW_cn = W_G, CH_cn = H_G, CD_cn = 1; otherwise go to 3.2.2. Here CW_cn is the width, CH_cn the height and CD_cn the depth of the input feature map CG_cn of the cn-th convolutional layer, so CW_cn × CH_cn × CD_cn is the dimension of CG_cn.
3.2.2 Build a convolutional layer on CG_cn with kernels of size KW_cn × KH_cn × CD_cn, KN_cn kernels in total, kernel stride KS_cn, and zero-padding size χ_cn. Let CK_cn^kn denote the kn_cn-th kernel of the cn-th convolutional layer (1 ≤ kn_cn ≤ KN_cn), cb_cn^kn its bias, and wg_cn^kn its weight. Let CG'_cn be the output feature map of the convolutional layer; its size CW'_cn × CH'_cn × CD'_cn is:
CW'_cn = (CW_cn − KW_cn + 2χ_cn) / KS_cn + 1, CH'_cn = (CH_cn − KH_cn + 2χ_cn) / KS_cn + 1, CD'_cn = KN_cn (1)
In a convolutional layer, the kernel is usually a small odd square (KW_cn = KH_cn), and the zero-padding is usually χ_cn = (KW_cn − 1) / 2 so that a stride-1 convolution preserves the spatial size. When cn = 1, KN_cn usually lies in [10, 20]; when cn ≠ 1, KN_cn ∈ [KN_{cn−1}, 2 × KN_{cn−1}]. The kernel stride KS_cn is usually set to 1 or 2.
3.3 Build a pooling layer and calculate the size of its output feature map:
3.3.1 Let the input feature map of the pn-th pooling layer be PG_pn = CG'_cn, so PW_pn = CW'_cn, PH_pn = CH'_cn, PD_pn = CD'_cn, where PW_pn is the width, PH_pn the height and PD_pn the depth of PG_pn; PW_pn × PH_pn × PD_pn is the dimension of PG_pn.
3.3.2 Build the pooling layer on PG_pn with a sliding window of size PK_pn × PK_pn in the pn-th pooling layer and window stride PS_pn. Let PG'_pn be the output feature map of the pooling layer; its size PW'_pn × PH'_pn × PD'_pn is:
PW'_pn = (PW_pn − PK_pn) / PS_pn + 1, PH'_pn = (PH_pn − PK_pn) / PS_pn + 1, PD'_pn = PD_pn (2)
Here PK_pn is usually set to 2 or 3, and the window stride is usually PS_pn = PK_pn.
3.4 Build a discard layer and calculate the size of its feature map:
3.4.1 Let the input feature map of the dn-th discard layer be DG_dn = PG'_pn, so DW_dn = PW'_pn, DH_dn = PH'_pn, DD_dn = PD'_pn, where DW_dn is the width, DH_dn the height and DD_dn the depth of DG_dn; DW_dn × DH_dn × DD_dn is the dimension of DG_dn.
3.4.2 Build a discard layer on DG_dn and let ρ_dn denote the drop probability of the dn-th discard layer, typically ρ_dn = 0.5. Let DG'_dn be the output feature map of the discard layer; its size equals that of the input:
DW'_dn = DW_dn, DH'_dn = DH_dn, DD'_dn = DD_dn (3)
3.5 Judge whether DW'_dn × DH'_dn × DD'_dn > thf, where thf is the first threshold on the discard-layer output feature map, a positive integer usually set to 3000. If satisfied, let cn = cn + 1, pn = pn + 1, dn = dn + 1, let CG_cn = DG'_{dn−1}, and let CW_cn = DW'_{dn−1}, CH_cn = DH'_{dn−1}, CD_cn = DD'_{dn−1}; go to step 3.2. If not satisfied, the dimension of the output feature map is already small, so execute step 3.6.
3.6 Calculate the size of the fully connected layer output feature vector:
3.6.1 Let fn = 1 and let the input feature vector FG_fn of the fn-th fully connected layer have dimension FW_fn = DW'_dn × DH'_dn × DD'_dn.
3.6.2 Build a fully connected layer on the FG_fn of dimension FW_fn: let the weight matrix A_fn of the layer have dimension FW'_fn × FW_fn and the bias fb_fn have dimension FW'_fn, where FW'_fn is the size of the fully connected layer output feature vector FG'_fn; FW'_fn is usually set to a few hundred (512 in the experiment below).
3.7 Calculate the feature vector size of the discard layer:
3.7.1 Let dn = dn + 1.
3.7.2 Let DW_dn = 1, DH_dn = 1, DD_dn = FW'_fn, so that DW_dn × DH_dn × DD_dn = DD_dn is the size of the input feature vector DG_dn of the dn-th discard layer.
3.7.3 Build a discard layer on the DG_dn of dimension DW_dn × DH_dn × DD_dn and let ρ_dn denote the drop probability of the dn-th discard layer, typically ρ_dn = 0.5. Let DG'_dn be the output feature vector of the discard layer; its size equals that of the input:
DW'_dn = DW_dn, DH'_dn = DH_dn, DD'_dn = DD_dn (4)
3.7.4 Judge whether DD'_dn > thd, where thd is the second threshold on the discard-layer output, a positive integer usually set to 1000. If satisfied, let fn = fn + 1, dn = dn + 1, FG_fn = DG'_{dn−1}, and FW_fn = DD'_{dn−1}; go to step 3.6.1. If not satisfied, execute step 3.8.
3.8 Build a fully connected layer on the FG_fn of dimension FW_fn: let the weight matrix A_fn of the layer have dimension C × FW_fn and the bias fb_fn have dimension C.
3.9 Let CN = cn, PN = pn, DN = dn, FN = fn; that is, the neural network model has CN convolutional layers, PN pooling layers, DN discard layers and FN + 1 fully connected layers.
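The sizing rules of 3.2 and 3.3 reduce to the standard output-size formulas, which can be checked mechanically. The sketch below assumes the lost equations (1) and (2) are these standard formulas; the assertions reproduce the first block of the experiment in the detailed description.

```python
def conv_out_size(w, h, kw, kh, kn, stride, pad):
    """Output size of a convolutional layer per 3.2 (assumed form of the
    lost equation (1)); the output depth equals the number of kernels."""
    return (w - kw + 2 * pad) // stride + 1, (h - kh + 2 * pad) // stride + 1, kn

def pool_out_size(w, h, d, win, stride):
    """Output size of a pooling layer per 3.3 (assumed form of the lost
    equation (2)); the depth is unchanged."""
    return (w - win) // stride + 1, (h - win) // stride + 1, d

# First block of the experiment: 96x96x1 input, 7x7 kernels, padding 3.
assert conv_out_size(96, 96, 7, 7, 16, 1, 3) == (96, 96, 16)
assert pool_out_size(96, 96, 16, 2, 2) == (48, 48, 16)
```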
In the fourth step, the model parameters of the neural network are initialized with the Xavier method proposed in section 2.3 of "Understanding the difficulty of training deep feedforward neural networks", published by Xavier Glorot et al. at the 13th International Conference on Artificial Intelligence and Statistics in 2010. The specific operations are as follows:
4.1 Initialize the convolution kernels CK_cn^kn in the convolutional layers:
4.1.1 Initialize the convolutional-layer counter cn = 1.
4.1.2 Initialize the kernel counter of the cn-th convolutional layer, kn_cn = 1.
4.1.3 Use a random function (e.g., the rand function in MATLAB) to generate a random matrix R of dimension KW_cn × KH_cn × CD_cn (the dimension of CK_cn^kn), with each element in (0, 1).
4.1.4 Initialize CK_cn^kn so that:
CK_cn^kn = (R − 0.5) × 2 × sqrt(6 / (KW_cn × KH_cn × CD_cn + KW_cn × KH_cn × KN_cn)) (5)
That is, equation (5) subtracts 0.5 from the element of R at each corresponding position and multiplies it by the Xavier scale factor; matrix operations are understood element-wise in this sense hereinafter.
4.1.5 Let kn_cn = kn_cn + 1 and judge kn_cn ≤ KN_cn; if so, go to 4.1.3; if not, go to 4.1.6.
4.1.6 Let cn = cn + 1 and judge cn ≤ CN; if so, go to 4.1.2; if not, the kernels are initialized, so go to 4.2.
4.2 Initialize the weights wg_cn^kn of the convolution kernels in the convolutional layers:
4.2.1 Initialize the convolutional-layer counter cn = 1.
4.2.2 Initialize kn_cn = 1.
4.2.3 Use a random function (e.g., the rand function in MATLAB) to generate a random number Rw with value in (0, 1).
4.2.4 Initialize wg_cn^kn from Rw in the manner of equation (5): subtract 0.5 and scale, so that the initial weight is small and centered on zero. (6)
4.2.5 Let kn_cn = kn_cn + 1 and judge kn_cn ≤ KN_cn; if so, go to 4.2.3; if not, execute 4.2.6.
4.2.6 Let cn = cn + 1 and judge cn ≤ CN; if so, go to 4.2.2; if not, the initialization of the kernel weights is complete, so execute 4.3.
4.3 Initialize the weight matrices A_1, …, A_fn, …, A_{FN+1} in the fully connected layers:
4.3.1 Initialize fn = 1.
4.3.2 Use a random function (e.g., the rand function in MATLAB) to generate a random matrix RA_fn of the same dimension as A_fn, with each element in (0, 1).
4.3.3 Initialize A_fn from RA_fn in the manner of equation (5): subtract 0.5 and scale. (7)
4.3.4 Let fn = fn + 1 and judge fn ≤ FN + 1; if so, go to 4.3.2; if not, go to 4.4.
4.4 Initialize the biases cb_cn^kn in the convolution kernels and the biases fb_1, …, fb_fn, …, fb_{FN+1} in the fully connected layers by assigning all biases the value 0.
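A minimal sketch of the fourth step, assuming the lost equation (5) is the standard Xavier-uniform rule (shift a (0,1) sample by 0.5, then scale by 2·sqrt(6/(fan_in + fan_out))); the fan counts shown are one common convention and are an assumption here.

```python
import numpy as np

def xavier_uniform(shape, fan_in, fan_out, rng):
    """(0,1) sample shifted by 0.5 and scaled: uniform in (-limit, limit)
    with limit = sqrt(6 / (fan_in + fan_out)), as in Glorot et al. 2010."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return (rng.random(shape) - 0.5) * 2.0 * limit

rng = np.random.default_rng(0)
# First convolutional layer of the experiment: sixteen 7x7x1 kernels.
kernel = xavier_uniform((7, 7, 1), fan_in=7 * 7 * 1, fan_out=7 * 7 * 16, rng=rng)
bias = 0.0   # all biases start at 0 (step 4.4)
```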
In the fifth step, the training stage begins: the parameters of the neural network model are updated by iterating forward propagation and backward propagation. Initialize the iteration counter en = 1.
In the sixth step, initialize the group number bn = 1.
In the seventh step, initialize the variable n = (bn − 1) × in + 1.
In the eighth step, forward-propagate G'_n, i.e., propagate the n-th SAR image forward through the constructed neural network model to obtain the probability prediction of the SAR image category. During propagation, the output of each layer is the feature extracted by the neural network.
8.1 Initialize the layer counters: let the convolutional-layer counter cn = 1, the pooling-layer counter pn = 1, the discard-layer counter dn = 1, and the fully-connected-layer counter fn = 1.
8.2 Calculate the output feature map of the convolutional layer:
8.2.1 If cn = 1, let CG_cn = G'_n, i.e., the input feature map of the n-th image at the first convolutional layer is the image itself; otherwise execute 8.2.2.
8.2.2 Zero-pad CG_cn. The specific operations are:
8.2.2.1 Initialize an all-zero matrix CG~_cn of size (CW_cn + 2χ_cn) × (CH_cn + 2χ_cn) × CD_cn.
8.2.2.2 Initialize cd_cn = 1, the third-dimension coordinate on the feature map.
8.2.2.3 Initialize ch_cn = 1, the second-dimension coordinate on the feature map.
8.2.2.4 Initialize cw_cn = 1, the first-dimension coordinate on the feature map.
8.2.2.5 Let CG~_cn(cw_cn + χ_cn, ch_cn + χ_cn, cd_cn) = CG_cn(cw_cn, ch_cn, cd_cn), where CG~_cn(cw_cn + χ_cn, ch_cn + χ_cn, cd_cn) denotes the value of CG~_cn at the coordinate (cw_cn + χ_cn, ch_cn + χ_cn, cd_cn); the same notation is used below.
8.2.2.6 Let cw_cn = cw_cn + 1 and judge cw_cn ≤ CW_cn; if so, go to 8.2.2.5; if not, go to 8.2.2.7.
8.2.2.7 Let ch_cn = ch_cn + 1 and judge ch_cn ≤ CH_cn; if so, go to 8.2.2.4; if not, go to 8.2.2.8.
8.2.2.8 Let cd_cn = cd_cn + 1 and judge cd_cn ≤ CD_cn; if so, go to 8.2.2.3; if not, the assignment is complete, so execute 8.2.3.
8.2.3 Initialize kn_cn = 1.
8.2.4 Calculate the convolution result CZ_cn^kn of the kn_cn-th kernel with CG~_cn:
CZ_cn^kn(cw_cn, ch_cn) = Σ_{kw_cn=1..KW_cn} Σ_{kh_cn=1..KH_cn} Σ_{cd_cn=1..CD_cn} CK_cn^kn(kw_cn, kh_cn, cd_cn) × CG~_cn((cw_cn − 1) × KS_cn + kw_cn, (ch_cn − 1) × KS_cn + kh_cn, cd_cn) + cb_cn^kn (8)
where CZ_cn^kn(cw_cn, ch_cn) denotes the value of the convolution result at (cw_cn, ch_cn); (kw_cn, kh_cn) is the position coordinate within the convolution kernel and (cw_cn, ch_cn, cd_cn) the position coordinate in the input feature map.
8.2.5 Apply the activation function σ(·) to the values of CZ_cn^kn to obtain the nonlinearly processed convolution result CA_cn^kn = σ(CZ_cn^kn). Common activation functions include the sigmoid function, the hyperbolic tangent function (tanh), and the Rectified Linear Unit (ReLU) proposed in "ImageNet classification with deep convolutional neural networks", published by Alex Krizhevsky et al. at the NIPS conference in 2012.
8.2.6 Map the kernel weight wg_cn^kn to (0, 1) with the sigmoid function S(·), obtaining the mapped kernel weight wg~_cn^kn = S(wg_cn^kn) = 1 / (1 + e^(−wg_cn^kn)).
8.2.7 Use wg~_cn^kn to weight CA_cn^kn and obtain the output CG'_cn of the convolutional layer. The specific operations are:
8.2.7.1 Initialize kn_cn = 1.
8.2.7.2 Initialize ch_cn = 1.
8.2.7.3 Initialize cw_cn = 1.
8.2.7.4 Let CG'_cn(cw_cn, ch_cn, kn_cn) = wg~_cn^kn × CA_cn^kn(cw_cn, ch_cn).
8.2.7.5 Let cw_cn = cw_cn + 1 and judge cw_cn ≤ CW'_cn; if so, go to 8.2.7.4; if not, go to 8.2.7.6.
8.2.7.6 Let ch_cn = ch_cn + 1 and judge ch_cn ≤ CH'_cn; if so, go to 8.2.7.3; if not, go to 8.2.7.7.
8.2.7.7 Let kn_cn = kn_cn + 1 and judge kn_cn ≤ KN_cn; if so, go to 8.2.7.2; if not, the assignment is complete, so execute 8.3.
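The weighted convolutional layer of 8.2, sketched with NumPy. The structure (convolve, activate, then scale the kn-th output map by the sigmoid-mapped kernel weight) follows 8.2.4 through 8.2.7; ReLU stands in for the generic activation σ, and all names are illustrative.

```python
import numpy as np

def weighted_conv_layer(x, kernels, biases, weights, pad, stride=1):
    """x: (H, W, D) input map; kernels: (KN, KH, KW, D); biases, weights:
    (KN,). Returns the weighted convolutional layer output CG'."""
    kn_, kh, kw, _ = kernels.shape
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))            # 8.2.2: zero-pad
    oh = (x.shape[0] - kh + 2 * pad) // stride + 1
    ow = (x.shape[1] - kw + 2 * pad) // stride + 1
    out = np.empty((oh, ow, kn_))
    for kn in range(kn_):
        z = np.empty((oh, ow))
        for i in range(oh):                                      # 8.2.4: convolution
            for j in range(ow):
                patch = xp[i * stride:i * stride + kh,
                           j * stride:j * stride + kw, :]
                z[i, j] = np.sum(patch * kernels[kn]) + biases[kn]
        a = np.maximum(z, 0.0)                                   # 8.2.5: sigma = ReLU
        w_tilde = 1.0 / (1.0 + np.exp(-weights[kn]))             # 8.2.6: sigmoid map
        out[:, :, kn] = w_tilde * a                              # 8.2.7: weighting
    return out
```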
8.3 Calculate the output feature map of the pooling layer:
8.3.1 Let the input feature map of the pn-th pooling layer be PG_pn = CG'_cn.
8.3.2 Slide a window of size PK_pn × PK_pn with stride PS_pn over PG_pn, obtaining PW'_pn × PH'_pn × PD_pn window regions.
8.3.3 Pool each window region separately:
PG'_pn(pw_pn, ph_pn, pd_pn) = pool{ PG_pn over the window region corresponding to (pw_pn, ph_pn, pd_pn) } (9)
where (pw_pn, ph_pn, pd_pn) is the position coordinate of the pooling-layer output feature map, each position corresponding to one window region of the input feature map, and pool{·} is usually either the maximum function or the mean function. If the maximum function is used, record the coordinates of the maximum within each sliding-window region and go to 8.4; if the mean function is used, go directly to 8.4.
8.4 Calculate the output feature map of the discard layer:
8.4.1 Let the input feature map of the dn-th discard layer be DG_dn = PG'_pn.
8.4.2 Use a random-sequence generation function (e.g., the randperm function in MATLAB) to generate a random sequence of 1 to DD_dn and assign it, using the sequence as an index, to a one-dimensional matrix Φ of dimension DD_dn: set the entries of Φ indexed by the first ⌊ρ_dn × DD_dn⌋ elements of the sequence to 0 and the remaining entries to 1, where ⌊·⌋ denotes rounding down.
8.4.3 Multiply DG_dn by Φ depth slice by depth slice. The specific operations are:
8.4.3.1 Initialize dd_dn = 1.
8.4.3.2 Initialize dh_dn = 1.
8.4.3.3 Initialize dw_dn = 1.
8.4.3.4 Let DG'_dn(dw_dn, dh_dn, dd_dn) = Φ(dd_dn) × DG_dn(dw_dn, dh_dn, dd_dn).
8.4.3.5 Let dw_dn = dw_dn + 1 and judge dw_dn ≤ DW_dn; if so, go to 8.4.3.4; if not, go to 8.4.3.6.
8.4.3.6 Let dh_dn = dh_dn + 1 and judge dh_dn ≤ DH_dn; if so, go to 8.4.3.3; if not, go to 8.4.3.7.
8.4.3.7 Let dd_dn = dd_dn + 1 and judge dd_dn ≤ DD_dn; if so, go to 8.4.3.2; if not, the assignment is complete, so execute 8.5.
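A sketch of the discard layer of 8.4, which zeroes whole depth slices chosen by a random permutation. The text applies no 1/(1 − ρ) rescaling, so none is applied here; disabling the layer at inference (step 17.1.2, where ρ is set to 0) corresponds to train=False.

```python
import numpy as np

def drop_layer(x, rho, rng, train=True):
    """x: (H, W, DD) feature map. Zeroes floor(rho * DD) randomly chosen
    depth slices during training; passes x through unchanged otherwise."""
    dd = x.shape[-1]
    phi = np.ones(dd)
    if train:
        off = rng.permutation(dd)[: int(np.floor(rho * dd))]   # 8.4.2: build mask
        phi[off] = 0.0
    return x * phi                                              # 8.4.3: broadcast over H, W
```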
8.5 Let cn = cn + 1, pn = pn + 1, dn = dn + 1 and judge cn ≤ CN; if satisfied, go to 8.2; if not, execute 8.6.
8.6 Calculate the output feature vector of the fully connected layer:
8.6.1 If fn = 1, convert the feature map DG'_{dn−1} output by the (dn − 1)-th discard layer into a one-dimensional feature vector, which serves as the input feature vector FG_fn of the fully connected layer. The specific operations are:
8.6.1.1 Initialize dd = 1.
8.6.1.2 Initialize dh = 1.
8.6.1.3 Initialize dw = 1.
8.6.1.4 Let:
FG_fn((dd − 1) × DW_{dn−1} × DH_{dn−1} + (dh − 1) × DW_{dn−1} + dw) = DG'_{dn−1}(dw, dh, dd) (10)
8.6.1.5 Let dw = dw + 1 and judge dw ≤ DW_{dn−1}; if so, go to 8.6.1.4; if not, go to 8.6.1.6.
8.6.1.6 Let dh = dh + 1 and judge dh ≤ DH_{dn−1}; if so, go to 8.6.1.3; if not, go to 8.6.1.7.
8.6.1.7 Let dd = dd + 1 and judge dd ≤ DD_{dn−1}; if so, go to 8.6.1.2; if not, the assignment is complete, so execute 8.6.2.
8.6.2 Multiply the weight matrix A_fn by FG_fn and add the bias fb_fn to obtain the intermediate feature vector FG'_fn of the fully connected layer:
FG'_fn = A_fn × FG_fn + fb_fn (11)
8.6.3 Apply the activation function σ(·) to the values of FG'_fn to obtain the nonlinearly processed feature vector FG''_fn = σ(FG'_fn).
8.7 Calculate the output feature vector of the discard layer:
8.7.1 Let the input feature vector of the dn-th discard layer be DG_dn = FG''_fn.
8.7.2 Use a random-sequence generation function (e.g., the randperm function in MATLAB) to generate a random sequence of 1 to DD_dn and assign it, using the sequence as an index, to a one-dimensional matrix Φ of dimension DD_dn: set the entries of Φ indexed by the first ⌊ρ_dn × DD_dn⌋ elements of the sequence to 0 and the remaining entries to 1, where ⌊·⌋ denotes rounding down.
8.7.3 Multiply Φ and DG_dn element by element. The specific operations are:
8.7.3.1 Initialize dd_dn = 1.
8.7.3.2 Let DG'_dn(dd_dn) = Φ(dd_dn) × DG_dn(dd_dn).
8.7.3.3 Let dd_dn = dd_dn + 1 and judge dd_dn ≤ DD_dn; if so, go to 8.7.3.2; if not, the assignment is complete, so execute 8.8.
8.8 Let fn = fn + 1 and dn = dn + 1, and judge fn ≤ FN; if satisfied, go to 8.6; if not, execute 8.9.
8.9 Calculate the output feature vector of the final fully connected layer:
8.9.1 Multiply the weight matrix A_fn by FG_fn and add the bias fb_fn to obtain the intermediate feature vector FG'_fn of the fully connected layer:
FG'_fn = A_fn × FG_fn + fb_fn (12)
8.9.2 Apply the softmax function to the values of FG'_fn to obtain the fully connected layer result y_n:
y_n(c) = e^(FG'_fn(c)) / Σ_{cc=1..C} e^(FG'_fn(cc)) (13)
where y_n(c) is the predicted probability that the image belongs to class c.
In the ninth step, the loss function J_n of the n-th SAR image is calculated from y_n and L'_n:
J_n = − Σ_{cc=1..C} L'_n(cc) × log y_n(cc) (14)
where L'_n(cc) denotes the cc-th element of L'_n.
9.1 Let n = n + 1 and judge bn = BN; if satisfied, execute 9.2; if not, execute 9.3.
9.2 Judge n ≤ N; if satisfied, go to the eighth step; if not, execute the tenth step.
9.3 Judge n ≤ bn × in; if satisfied, go to the eighth step; if not, execute the tenth step.
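Step 8.9 and the ninth step amount to a softmax followed by a loss. The sketch below assumes the lost equations (13) and (14) are the standard softmax and cross-entropy, consistent with the one-hot labels of the first step.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())            # shift for numerical stability
    return e / e.sum()

def loss_j(fg_prime, label_onehot):
    """Probability prediction (13) and cross-entropy loss (14) for one
    image; fg_prime is the final fully connected output FG'."""
    y = softmax(fg_prime)
    return -np.sum(label_onehot * np.log(y + 1e-12))
```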
In the tenth step, the loss functions obtained for the SAR images of the bn-th group are averaged to give the loss function JJ_{(en−1)×in+bn} of the bn-th group in the en-th iteration (i.e., the mean of the loss functions of the images within the group). Judge bn = BN; if satisfied, go to 10.1; if not, go to 10.2.
10.1 The bn-th group is the last, possibly incomplete, group, so JJ_{(en−1)×in+bn} = ( Σ_{n=(bn−1)×in+1..N} J_n ) / (N − (BN − 1) × in); go to the eleventh step.
10.2 JJ_{(en−1)×in+bn} = ( Σ_{n=(bn−1)×in+1..bn×in} J_n ) / in; go to the eleventh step.
In the eleventh step, judge whether the loss function JJ_{(en−1)×in+bn} of the bn-th group in the en-th iteration is smaller than JJM, where JJM is a set loss-function threshold (typically less than 0.01). If satisfied, forward propagation is complete, so go to the seventeenth step; if not, go to the twelfth step.
In the twelfth step, J_n is propagated backward through the network so as to adjust the neural network model parameters: the convolution kernels, kernel weights and kernel biases in the convolutional layers, and the weight matrices and biases in the fully connected layers. The specific steps are:
12.1 Initialize the layer counters: let the convolutional-layer counter cn = CN, the pooling-layer counter pn = PN, the discard-layer counter dn = DN, and the fully-connected-layer counter fn = FN + 1.
12.2 Calculate the partial derivatives of J_n with respect to the weight matrices A_1, …, A_fn, …, A_{FN+1} and the biases fb_1, …, fb_fn, …, fb_{FN+1} in the fully connected layers:
12.2.1 If fn = FN + 1, the error term of the layer is obtained through the derivative softmax' of the softmax function; otherwise it is obtained through the derivative S' of the sigmoid activation, where softmax' and S' denote the derivatives of the softmax and sigmoid functions (equations (15) and (16)).
12.2.2 Calculate the partial derivative of J_n with respect to the weight matrix A_fn and of J_n with respect to the bias fb_fn (equation (17)).
12.2.3 Let fn = fn − 1 and judge fn ≥ 1; if so, go to 12.2; if not, go to 12.3.
12.3 If pn = PN, convert the partial derivative ∂J_n/∂FG_{fn+1} of J_n with respect to FG_{fn+1} from a vector into the three-dimensional matrix ∂J_n/∂DG'_dn, inverting the flattening of 8.6.1. The specific operations are as follows; otherwise go directly to 12.4.
12.3.1 Initialize dd = 1.
12.3.2 Initialize dh = 1.
12.3.3 Initialize dw = 1.
12.3.4 Let:
∂J_n/∂DG'_dn(dw, dh, dd) = ∂J_n/∂FG_{fn+1}((dd − 1) × DW_dn × DH_dn + (dh − 1) × DW_dn + dw)
12.3.5 Let dw = dw + 1 and judge dw ≤ DW_dn; if so, go to 12.3.4; if not, go to 12.3.6.
12.3.6 Let dh = dh + 1 and judge dh ≤ DH_dn; if so, go to 12.3.3; if not, go to 12.3.7.
12.3.7 Let dd = dd + 1 and judge dd ≤ DD_dn; if so, go to 12.3.2; if not, the assignment is complete, so go to 12.4.
12.4 Using the partial derivative ∂J_n/∂PG'_pn of J_n with respect to PG'_pn, calculate the partial derivatives of the loss function J_n with respect to the elements within each window region of PG_pn:
12.4.1 If the pooling function in 8.3.2 was the maximum function, go to 12.4.2; if it was the mean function, go to 12.4.3.
12.4.2 Let the partial derivative of J_n with respect to the element of PG_pn at the recorded maximum position of each window region equal the corresponding element of ∂J_n/∂PG'_pn; the partial derivatives of J_n with respect to the remaining elements of PG_pn are 0.
12.4.3 Let the partial derivative of J_n with respect to every element of PG_pn within a window region equal the corresponding element of ∂J_n/∂PG'_pn divided by PK_pn × PK_pn.
12.5 According to 8.3.1, PG_pn = CG'_cn, so ∂J_n/∂CG'_cn = ∂J_n/∂PG_pn.
12.6 Calculate the partial derivatives of J_n with respect to the convolution kernels CK_cn^kn, the kernel weights wg_cn^kn and the kernel biases cb_cn^kn in the convolutional layer. The method is:
12.6.1 Calculate the partial derivative of J_n with respect to the kernel CK_cn^kn (equation (20)).
12.6.2 Calculate the partial derivative of J_n with respect to the kernel weight wg_cn^kn (equation (21)): since CG'_cn(·, ·, kn) = S(wg_cn^kn) × CA_cn^kn by 8.2.7, the chain rule gives ∂J_n/∂wg_cn^kn = Σ_{cw,ch} ∂J_n/∂CG'_cn(cw, ch, kn) × CA_cn^kn(cw, ch) × S'(wg_cn^kn).
12.6.3 Calculate the partial derivative of J_n with respect to the kernel bias cb_cn^kn (equation (22)).
12.6.4 Let cn = cn − 1, pn = pn − 1, dn = dn − 1 and judge cn ≥ 1; if not satisfied, propagation is complete, so go to the thirteenth step; if satisfied, go to 12.6.5.
12.6.5 Calculate the partial derivative ∂J_n/∂CG'_cn of J_n with respect to CG'_cn. The calculation method is:
12.6.5.1 Zero-pad ∂J_n/∂CZ_{cn+1}. The specific operations are:
12.6.5.1.1 Initialize an all-zero matrix JC_{cn+1} of size (CW'_{cn+1} + 2 × (KW_{cn+1} − 1)) × (CH'_{cn+1} + 2 × (KH_{cn+1} − 1)) × KN_{cn+1}.
12.6.5.1.2 Initialize the third-dimension coordinate of JC_{cn+1}.
12.6.5.1.3 Initialize the second-dimension coordinate of JC_{cn+1}.
12.6.5.1.4 Initialize the first-dimension coordinate of JC_{cn+1}.
12.6.5.1.5 Copy the corresponding element of ∂J_n/∂CZ_{cn+1} into JC_{cn+1} at an offset of (KW_{cn+1} − 1, KH_{cn+1} − 1).
12.6.5.1.6 Advance the three coordinates as in 8.2.2.6 through 8.2.2.8; when all elements have been copied, the assignment is complete, so go to 12.6.5.2.
12.6.5.2 Rotate each kernel CK_{cn+1}^kn by 180 degrees to obtain the rotated kernel.
12.6.5.3 Convolve JC_{cn+1} with the rotated kernels to obtain ∂J_n/∂CG'_cn (the full-convolution form of backpropagation through a convolutional layer).
12.6.5.4 Go to 12.4.
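The only non-standard term in the backward pass is 12.6.2, the gradient with respect to the kernel weight, which enables the adaptive adjustment of the weights. Under the forward rule of 8.2.7 it reduces to a single chain-rule product, sketched below with illustrative names.

```python
import numpy as np

def kernel_weight_grad(d_out, act, w):
    """Gradient of the loss w.r.t. the kn-th kernel weight (12.6.2).
    d_out: dJ/d(output map) for that kernel; act: the pre-weighting
    activation map CA; w: the raw weight. Since out = sigmoid(w) * act,
    dJ/dw = sum(d_out * act) * sigmoid(w) * (1 - sigmoid(w))."""
    s = 1.0 / (1.0 + np.exp(-w))
    return np.sum(d_out * act) * s * (1.0 - s)
```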
In the thirteenth step, let n = n + 1 and judge bn = BN; if satisfied, go to 13.1; if not, go to 13.2.
13.1 Judge n ≤ N; if satisfied, go to step 12.1; if not, go to 13.3.
13.2 Judge n ≤ bn × in; if satisfied, go to step 12.1; if not, go to 13.3.
13.3 Average, over the SAR images of the bn-th group, the partial derivatives of the loss function with respect to the convolution kernels CK_cn^kn, the kernel weights wg_cn^kn and the kernel biases cb_cn^kn in the convolutional layers, and with respect to the weight matrices A_1, …, A_fn, …, A_{FN+1} and biases fb_1, …, fb_fn, …, fb_{FN+1} in the fully connected layers. The method is:
13.3.1 Judge bn = BN; if satisfied, go to 13.3.2; if not, go to 13.3.3.
13.3.2 Average each partial derivative over the N − (BN − 1) × in images of the last group;
go to the fourteenth step;
13.3.3 Average each partial derivative over the in images of the group; go to the fourteenth step.
In the fourteenth step, using the output of the thirteenth step, the convolution kernels CK_cn^kn, kernel weights wg_cn^kn and kernel biases cb_cn^kn, and the weight matrices A_1, …, A_fn, …, A_{FN+1} and biases fb_1, …, fb_fn, …, fb_{FN+1} of the fully connected layers are updated with the Adam algorithm proposed in "Adam: A method for stochastic optimization". For each parameter θ with averaged gradient g from the thirteenth step, the specific steps are:
14.1 Let m = β_1 × m + (1 − β_1) × g,
where β_1 is a given first hyperparameter, usually set to β_1 = 0.9.
14.2 Let v = β_2 × v + (1 − β_2) × g²,
where β_2 is a given second hyperparameter, usually set to β_2 = 0.999.
14.3 Let m^ = m / (1 − β_1^t) and v^ = v / (1 − β_2^t), where t is the update count.
14.4 Update θ (i.e., CK_cn^kn, wg_cn^kn, cb_cn^kn, A_fn and fb_fn): let θ = θ − η × m^ / (sqrt(v^) + ε),
where η is the initial learning rate, typically set to η ≤ 0.01, and ε is a small stabilizing constant.
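A sketch of one Adam update as used in the fourteenth step. Since the patent's update equations are only partially preserved, the standard Kingma-Ba rule with the stated β_1, β_2 and η is assumed, with an added stabilizer eps.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a parameter theta with averaged gradient grad;
    m, v are the running first/second moment estimates, t the step count."""
    m = b1 * m + (1 - b1) * grad                           # 14.1
    v = b2 * v + (1 - b2) * grad ** 2                      # 14.2
    m_hat = m / (1 - b1 ** t)                              # 14.3: bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)   # 14.4
    return theta, m, v
```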
In the fifteenth step, let bn = bn + 1 and judge bn ≤ BN; if satisfied, go to the seventh step; if not, go to the sixteenth step.
In the sixteenth step, let en = en + 1 and judge en ≤ EN; if satisfied, go to the sixth step; if not, the training is finished, so the seventeenth step is executed.
In the seventeenth step, the trained network model is used to identify the SAR image TG to be identified.
17.1 Prepare the network for forward propagation of the image:
17.1.1 Initialize the discard-layer counter dn = 1.
17.1.2 Set the drop probability ρ_dn of every discard layer to 0, so that no elements are discarded at inference.
17.2 Forward-propagate TG through the network with the forward propagation method of the eighth step to obtain the class probability vector TG'.
17.3 Find the position cm of the maximum in TG', i.e., cm = argmax_{c ∈ {1,…,C}} TG'(c).
17.4 Output the cm-th of the C categories in the label as the identification result.
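The seventeenth step, as a sketch: given any forward-pass function implementing the eighth step (assumed available here), identification is an argmax over the class probabilities.

```python
import numpy as np

def identify(tg, forward):
    """Run the trained network's forward pass with every drop probability
    set to 0 (train=False) and return the index of the largest class
    probability; `forward` is the eighth-step propagation, assumed given."""
    probs = forward(tg, train=False)   # 17.1-17.2
    return int(np.argmax(probs))       # 17.3: cm = argmax_c probs[c]
```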
Note: in actual use, other layers besides the convolutional layers may be constructed as required. If the feature map dimension becomes too large during propagation, pooling layers can be used to reduce it. If the training set is small or overfitting occurs during training, discard layers can be used. To obtain a better mapping of the features, fully connected layers can be constructed. The invention gives construction and use methods for the convolutional, pooling, discard and fully connected layers, but a user may construct the corresponding layers according to actual requirements; only the convolutional layers are mandatory.
The invention has the following beneficial effects:
(1) The training of the convolutional neural network model (the eighth to sixteenth steps) is completed automatically by machine learning, which avoids the dependence of the recognition process on expert experience as well as the subjectivity and one-sidedness of manual recognition, and improves the accuracy of image recognition.
(2) The method of constructing a neural network model for SAR images provided in the third step can construct an effective network model for SAR images of different sizes. Compared with the traditional, subjective way of constructing neural network models, it provides guidance for model construction.
(3) In the eighth step, a weight is introduced for each convolution kernel in the convolutional layers of the convolutional neural network; weighting a kernel weights the features it extracts. The weight adjusts the contribution of each kernel's features to the output feature map and strengthens the representation of effective features, which influences the recognition result and further improves the accuracy of target classification and identification.
(4) In the twelfth step, a method for back-propagating through a convolutional layer containing kernel weights is provided, so that the kernel weights can be adjusted adaptively. This adaptive adjustment avoids the subjectivity of manual setting, and more appropriate weights are obtained through training.
(5) In the fourteenth step, by introducing the Adam algorithm, the network parameters can be adjusted adaptively according to the gradients obtained in training; this avoids the fixed learning rate of the original algorithm and lets the network model complete training faster and more stably.
Drawings
FIG. 1 is the general flow diagram of the present invention;
FIG. 2 is a flow chart of the third step, constructing the neural network model;
FIG. 3 is a flow chart of the eighth step, the forward propagation method;
FIG. 4 is a flow chart of the twelfth step, the backward propagation method.
Detailed Description
FIG. 1 is the general flow diagram of the present invention. The invention is further described below in connection with an experiment.
In the first step, the SAR image database for training is constructed. The experiment uses the training SAR images of the MSTAR dataset: N = 2747 and W_n × H_n = 128 × 128.
In the second step, the SAR images are preprocessed, giving 2747 SAR images of size W_G × H_G = 96 × 96. Here in is set to 30 and BN = 92; that is, each group contains 30 images and 30 labels, and the images are divided into 92 groups in total.
In the third step, a neural network model is constructed according to the preprocessed SAR images. The construction flow is shown in FIG. 2.
3.1 Initialize the layer counters: let the convolutional-layer counter cn = 1, the pooling-layer counter pn = 1, the discard-layer counter dn = 1, and the fully-connected-layer counter fn = 1.
3.2 Calculate the size of the convolutional layer output feature map.
3.3 Build a pooling layer and calculate the size of its output feature map.
3.4 Build a discard layer and calculate the size of its feature map.
3.5 Judge whether DW'_dn × DH'_dn × DD'_dn > thf; if satisfied, let cn = cn + 1, pn = pn + 1, dn = dn + 1, then let CG_cn = DG'_{dn−1} with CW_cn = DW'_{dn−1}, CH_cn = DH'_{dn−1}, CD_cn = DD'_{dn−1}, and go to step 3.2; if not, execute step 3.6.
3.6 Calculate the size of the fully connected layer output feature vector.
3.7 Calculate the feature vector size of the discard layer.
3.7.1 Let dn = dn + 1.
3.7.2 Let DW_dn = 1, DH_dn = 1, DD_dn = FW'_fn, so that DW_dn × DH_dn × DD_dn = DD_dn is the size of the input feature vector DG_dn of the dn-th discard layer.
3.7.3 Build a discard layer on the DG_dn of dimension DW_dn × DH_dn × DD_dn with drop probability ρ_dn; the output feature vector DG'_dn of the discard layer has the same size as the input.
3.7.4 Judge whether DD'_dn > thd; if satisfied, let fn = fn + 1, dn = dn + 1, FG_fn = DG'_{dn−1}, FW_fn = DD'_{dn−1}, and go to step 3.6; if not, execute step 3.8.
3.8 Build a fully connected layer on the FG_fn of dimension FW_fn: the weight matrix A_fn has dimension C × FW_fn and the bias fb_fn has dimension C.
3.9 Let CN = cn, PN = pn, DN = dn, FN = fn; that is, the neural network model has CN convolutional layers, PN pooling layers, DN discard layers and FN + 1 fully connected layers.
In the experiment, thf is first set to 3000. According to 3.2, the parameters of the first convolutional layer are set to: kernel size 7 × 7, KN_1 = 16, KS_1 = 1, χ_1 = 3; the output feature map size is 96 × 96 × 16. According to 3.3, the first pooling layer uses a 2 × 2 window with stride 2; the output feature map size is 48 × 48 × 16. According to 3.4, the first discard layer leaves the size unchanged at 48 × 48 × 16. The judgment 48 × 48 × 16 > thf is made according to 3.5, so the procedure returns to 3.2 and continues building convolutional, pooling and discard layers as follows:
The parameters of the second convolutional layer are: kernel size 7 × 7, KN_2 = 32, χ_2 = 3; the output feature map size is 48 × 48 × 32. The second pooling layer uses a 2 × 2 window with stride 2; the output feature map size is 24 × 24 × 32. The second discard layer outputs 24 × 24 × 32. Since 24 × 24 × 32 > thf, the procedure returns to 3.2.
The parameters of the third convolutional layer are: kernel size 5 × 5, KN_3 = 64, χ_3 = 2; the output feature map size is 24 × 24 × 64. The third pooling layer uses a 2 × 2 window with stride 2; the output feature map size is 12 × 12 × 64. The third discard layer outputs 12 × 12 × 64. Since 12 × 12 × 64 > thf, the procedure returns to 3.2.
The parameters of the fourth convolutional layer are: kernel size 5 × 5, KN_4 = 128, χ_4 = 2; the output feature map size is 12 × 12 × 128. The fourth pooling layer uses a 2 × 2 window with stride 2; the output feature map size is 6 × 6 × 128. The fourth discard layer outputs 6 × 6 × 128. Since 6 × 6 × 128 > thf, the procedure returns to 3.2.
The parameters of the fifth convolutional layer are: kernel size 3 × 3, KN_5 = 128, χ_5 = 1; the output feature map size is 6 × 6 × 128. The fifth pooling layer uses a 2 × 2 window with stride 2; the output feature map size is 3 × 3 × 128. The fifth discard layer outputs 3 × 3 × 128. Since 3 × 3 × 128 < thf, the procedure continues to the next step.
thd is set to 1000. According to 3.6, the parameters of the first fully connected layer are set so that the output feature vector size is FW'_1 = 512. According to 3.7, the sixth discard layer outputs a vector of size 512. According to 3.7.4, since 512 < thd, a fully connected layer is built according to 3.8; the image data in the experiment contain 10 classes of targets, so C = 10. In the finally constructed network, CN = 5, PN = 5, DN = 6, FN = 1.
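For reference, the experiment's network can be written out as a configuration list. The kernel widths are not stated explicitly in the surviving text; the values below are inferred from the stated paddings and the preserved spatial sizes (stride 1 throughout) and are therefore reconstructions.

```python
# Experiment architecture (every pooling block is followed by a discard layer).
ARCH = [
    ("conv", dict(k=7, kn=16,  pad=3)),   # 96x96x1  -> 96x96x16
    ("pool", dict(win=2, stride=2)),      # -> 48x48x16, then drop
    ("conv", dict(k=7, kn=32,  pad=3)),   # -> 48x48x32
    ("pool", dict(win=2, stride=2)),      # -> 24x24x32, then drop
    ("conv", dict(k=5, kn=64,  pad=2)),   # -> 24x24x64
    ("pool", dict(win=2, stride=2)),      # -> 12x12x64, then drop
    ("conv", dict(k=5, kn=128, pad=2)),   # -> 12x12x128
    ("pool", dict(win=2, stride=2)),      # -> 6x6x128, then drop
    ("conv", dict(k=3, kn=128, pad=1)),   # -> 6x6x128
    ("pool", dict(win=2, stride=2)),      # -> 3x3x128, then drop
    ("fc",   dict(out=512)),              # 1152 -> 512, then drop
    ("fc",   dict(out=10)),               # -> C = 10 classes (softmax)
]
```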
In the fourth step, the model parameters of the neural network are initialized with the Xavier method.
In the fifth step, EN is set to 300 and the iteration counter is initialized to en = 1.
In the sixth step, the group number is initialized to bn = 1.
In the seventh step, the variable n = (bn − 1) × in + 1 is initialized.
In the eighth step, as shown in FIG. 3, G'_n is forward-propagated with the forward propagation method to obtain the probability prediction of the SAR image category.
8.1 Initialize the layer counters: let the convolutional-layer counter cn = 1, the pooling-layer counter pn = 1, the discard-layer counter dn = 1, and the fully-connected-layer counter fn = 1.
8.2 Calculate the output feature map of the convolutional layer.
8.3 Calculate the output feature map of the pooling layer.
8.4 Calculate the output feature map of the discard layer.
8.5 Let cn = cn + 1, pn = pn + 1, dn = dn + 1 and judge cn ≤ CN; if satisfied, go to 8.2; if not, execute 8.6.
8.6 Calculate the output feature vector of the fully connected layer.
8.7 Calculate the output feature vector of the discard layer.
8.8 Let fn = fn + 1 and dn = dn + 1, and judge fn ≤ FN; if satisfied, go to 8.6; if not, execute 8.9.
8.9 Calculate the output feature vector of the final fully connected layer.
In the experiment, the output feature maps of each convolutional, pooling and discard layer are calculated according to the calculation methods of 8.2, 8.3 and 8.4 and the judgment of 8.5; the output feature vectors of the first fully connected layer and the sixth discard layer are calculated according to the calculation methods of 8.6 and 8.7 and the judgment of 8.8. Then the probability prediction y_n is obtained according to the calculation method of 8.9.
In the ninth step, the loss function J_n of the n-th SAR image is calculated from y_n and L'_n according to equation (14).
9.1 Let n = n + 1 and judge bn = BN; if satisfied, execute 9.2; if not, execute 9.3.
9.2 Judge n ≤ N; if satisfied, go to the eighth step; if not, execute the tenth step.
9.3 Judge n ≤ bn × in; if satisfied, go to the eighth step; if not, execute the tenth step.
In the tenth step, the loss functions obtained for the SAR images of the bn-th group are averaged to give the loss function JJ_{(en−1)×in+bn} of the bn-th group in the en-th iteration.
In the eleventh step, judge whether the loss function JJ_{(en−1)×in+bn} of the bn-th group in the en-th iteration is smaller than JJM, where JJM is the set loss-function threshold, here 0.01. If satisfied, forward propagation is complete, so go to the seventeenth step; if not, go to the twelfth step.
In the twelfth step, as shown in FIG. 4, J_n is propagated backward through the network to adjust the neural network model parameters: the convolution kernels, kernel weights and kernel biases in the convolutional layers, and the weight matrices and biases in the fully connected layers.
12.1 Initialize the layer counters: cn = CN, pn = PN, dn = DN, fn = FN + 1.
12.2 Calculate the partial derivatives of J_n with respect to the weight matrices A_1, …, A_fn, …, A_{FN+1} and the biases fb_1, …, fb_fn, …, fb_{FN+1} in the fully connected layers.
12.2.1 If fn = FN + 1, the error term is obtained through softmax'; otherwise through S', where softmax' and S' denote the derivatives of the softmax and sigmoid functions.
12.2.2 Calculate, according to equation (17), the partial derivative of J_n with respect to the weight matrix A_fn and with respect to the bias fb_fn.
12.2.3 Let fn = fn − 1 and judge fn ≥ 1; if so, go to 12.2; if not, go to 12.3.
12.3 If pn = PN, convert the partial derivative ∂J_n/∂FG_{fn+1} from a vector into a three-dimensional matrix and go to 12.4; otherwise go directly to 12.4.
12.4 Using ∂J_n/∂PG'_pn, calculate the partial derivatives of the loss function J_n with respect to the elements within each window region of PG_pn.
12.5 Let ∂J_n/∂CG'_cn = ∂J_n/∂PG_pn.
12.6 Calculate the partial derivatives of J_n with respect to the convolution kernels CK_cn^kn, kernel weights wg_cn^kn and kernel biases cb_cn^kn in the convolutional layer. The method is:
12.6.1 Calculate, according to equation (20), the partial derivative of J_n with respect to the kernel CK_cn^kn.
12.6.2 Calculate, according to equation (21), the partial derivative of J_n with respect to the kernel weight wg_cn^kn.
12.6.3 Calculate, according to equation (22), the partial derivative of J_n with respect to the kernel bias cb_cn^kn.
12.6.4 Let cn = cn − 1, pn = pn − 1, dn = dn − 1 and judge cn ≥ 1; if not satisfied, propagation is complete, so go to the thirteenth step; if satisfied, go to 12.6.5.
12.6.5 Calculate the partial derivative ∂J_n/∂CG'_cn and go to 12.4.
In the experiment, the partial derivatives of the loss function with respect to the weight matrices and biases of the fully connected layers are calculated according to 12.2; according to 12.2.3, fn ≥ 1 is judged, going to 12.2 if satisfied and to 12.3 if not. According to 12.3, pn = PN is judged; if satisfied, the partial derivative of the loss function with respect to the feature map is converted to three dimensions, and if not, 12.4 is executed directly. The partial derivatives of the loss function with respect to the elements in each pooling region of the feature map are calculated according to 12.4. The partial derivatives of the loss function with respect to the convolution kernels, kernel weights and kernel biases in the convolutional layers are calculated according to 12.6.
In the thirteenth step, let n = n + 1 and judge bn = BN; if satisfied, go to 13.1; if not, go to 13.2.
13.1 Judge n ≤ N; if satisfied, go to step 12.1; if not, go to 13.3.
13.2 Judge n ≤ bn × in; if satisfied, go to step 12.1; if not, go to 13.3.
13.3 Average, over the SAR images of the bn-th group, the partial derivatives of the loss function with respect to the convolution kernels CK_cn^kn, kernel weights wg_cn^kn and kernel biases cb_cn^kn in the convolutional layers, and with respect to the weight matrices A_1, …, A_fn, …, A_{FN+1} and biases fb_1, …, fb_fn, …, fb_{FN+1} in the fully connected layers.
In the fourteenth step, the convolution kernels CK_cn^kn, kernel weights wg_cn^kn, kernel biases cb_cn^kn, weight matrices A_1, …, A_fn, …, A_{FN+1} and biases fb_1, …, fb_fn, …, fb_{FN+1} are updated using the Adam algorithm.
In the fifteenth step, let bn = bn + 1 and judge bn ≤ BN; if satisfied, go to the seventh step; if not, go to the sixteenth step.
In the sixteenth step, let en = en + 1 and judge en ≤ EN; if satisfied, go to the sixth step; if not, the training is finished, so the seventeenth step is executed.
The eighth to twelfth steps carry out the forward and backward propagation of the images through the model; the thirteenth and fourteenth steps update the parameters of the model. The eighth to fourteenth steps form the iterative process, and the fifteenth and sixteenth steps judge whether the iteration ends. Finally, when en = 301, the requirement en ≤ 300 is no longer met, so training is complete and the model finally used for recognition is obtained.
In the seventeenth step, the trained neural network model is used to identify the SAR images to be identified. In the experiment, the 3203 SAR images of the MSTAR dataset reserved for testing are used to test the neural network model. If the output category is the same as the actual category, the recognition is judged correct. The final correct recognition rate is 98.39%, demonstrating the effectiveness of the invention.

Claims (13)

1. A synthetic aperture radar image identification method is characterized by comprising the following steps:
In the first step, an SAR image database for training is constructed. The database consists of N SAR images and their category labels, where N is the number of SAR images in the database and each SAR image contains exactly one target. The SAR images are denoted G_1, …, G_n, …, G_N; G_n has size W_n × H_n, where W_n is the width and H_n the height of G_n. The class labels are denoted L_1, …, L_n, …, L_N; L_n is a one-dimensional matrix containing C elements corresponding to the C target classes, in which the element representing the true class is assigned 1 and the remaining elements are assigned 0; C is the number of image categories, a positive integer with C ≤ N;
In the second step, G_1, …, G_n, …, G_N are preprocessed as follows:
2.1 Initialize the variable n = 1;
2.2 If all pixels of G_n are complex-valued data, go to 2.3; if all pixels of G_n are real-valued data, go to 2.4;
2.3 Convert each of the W_n × H_n pixels of G_n into a real number by taking its modulus, then go to 2.4;
2.4 Determine for G_1, …, G_n, …, G_N the position of the target and the target size W_Tn × H_Tn (W_Tn is the width and H_Tn the height of the target in G_n);
2.5 Crop G_1, …, G_n, …, G_N so that all SAR images have a uniform size W_G × H_G, where W_G and H_G are positive integers;
2.6 Use a random-sequence generation function to generate a random sequence rn_1, …, rn_n, …, rn_N of 1 to N; using it as an index, read G_1, …, G_N and L_1, …, L_N so that the image read is G_{rn_n} and the label read is L_{rn_n}, obtaining a random image sequence G'_1, …, G'_n, …, G'_N and random class labels L'_1, …, L'_n, …, L'_N;
2.7 Group G'_1, …, G'_N and L'_1, …, L'_N so that each group contains in images and in labels, with 1 ≤ in ≤ N; G'_1, …, G'_N are divided into BN groups, and L'_1, …, L'_N are likewise divided into BN groups, where BN = ⌈N / in⌉ and ⌈·⌉ denotes rounding up;
third step, according to WG×HGConstructing a neural network model, wherein the method comprises the following steps:
3.1 initializing the layer number variable cn, setting the layer number variable cn of the convolutional layer to 1, setting the layer number variable pn of the pooling layer to 1, setting the layer number variable dn of the discarded layer to 1, and setting the layer number variable fn of the all-connected layer to 1;
3.2 calculating the size of the convolution layer output characteristic diagram, wherein the method comprises the following steps:
3.2.1 if cn is 1, let CWcn=WG,CHcn=HG,CDcn1, otherwise, 3.2.2, where CWcnAn input feature map CG for the cn-th convolution layercnWidth of (C, CH)cnIs CGcnHigh, CDcnIs CGcnDepth of (CW)cn×CHcn×CDcnRepresentation CGcnThe dimension size of (d);
3.2.2 build a convolution layer on CGcn; the cn-th convolution layer has KNcn convolution kernels of a set size, a set sliding step, and zero element filling size χcn; the kncn-th convolution kernel of the cn-th convolution layer (1 ≤ kncn ≤ KNcn) has its own bias and its own scalar weight; the convolution layer output feature map then has width (CWcn + 2×χcn − kernel width)/(sliding step) + 1, height (CHcn + 2×χcn − kernel height)/(sliding step) + 1, and depth KNcn;
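Step 3.2.2's bookkeeping follows the standard convolution output-size formula; a sketch under that assumption, with KW, KH and S as our stand-in symbols for the kernel width, kernel height and sliding step:

def conv_output_size(CW, CH, KW, KH, S, chi, KN):
    # Standard formula: out = (in + 2*padding - kernel) / stride + 1.
    out_w = (CW + 2 * chi - KW) // S + 1
    out_h = (CH + 2 * chi - KH) // S + 1
    return out_w, out_h, KN        # output depth = number of kernels

# Example: 64x64x1 input, 5x5 kernels, stride 1, padding 2, 16 kernels.
print(conv_output_size(64, 64, 5, 5, 1, 2, 16))   # (64, 64, 16)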
3.3 construct the pooling layer and calculate the size of the pooling layer output feature map as follows:
3.3.1 let PGpn be the input feature map of the pn-th pooling layer, where PWpn is the width of PGpn, PHpn the height of PGpn and PDpn the depth of PGpn; PWpn×PHpn×PDpn then denotes the dimensions of PGpn;
3.3.2 build the pooling layer on PGpn with a set sliding window size and a set sliding step for the pn-th pooling layer; the pooling layer output feature map then has width (PWpn − window size)/(sliding step) + 1, height (PHpn − window size)/(sliding step) + 1, and depth PDpn;
3.4 construct the discard layer and calculate the size of the discard layer output feature map as follows:
3.4.1 let DGdn be the input feature map of the dn-th discard layer, where DWdn is the width of DGdn, DHdn the height of DGdn and DDdn the depth of DGdn; DWdn×DHdn×DDdn then denotes the dimensions of DGdn;
3.4.2 build a discard layer on DGdn with a set drop probability for the dn-th discard layer; the discard layer output feature map has the same size DWdn×DHdn×DDdn as its input;
3.5 judge whether the number of elements of the discard layer output feature map exceeds thf, thf being the first threshold of the discard layer output feature map and a positive integer; if so, let cn = cn+1, pn = pn+1 and dn = dn+1, take the discard layer output as the input feature map of the next convolution layer, and turn to step 3.2; if not, execute step 3.6;
3.6 calculate the size of the fully connected layer output feature vector:
3.6.1 let fn = 1 and let FWfn denote the dimension of the input feature vector FGfn of the fn-th fully connected layer, FWfn = DWdn×DHdn×DDdn;
3.6.2 build a fully connected layer on FGfn of dimension FWfn; the weight matrix Afn of the fully connected layer has as many rows as the set output feature vector size and FWfn columns, and the offset fbfn has the set output feature vector size as its dimension;
3.7 calculate the feature vector size of the discard layer:
3.7.1 let dn be dn + 1;
3.7.2 let DWdn = 1 and DHdn = 1, and let DDdn equal the output feature vector size of the preceding fully connected layer; then DWdn×DHdn×DDdn is the size of the input feature vector DGdn of the dn-th discard layer;
3.7.3 build a discard layer on DGdn of size DWdn×DHdn×DDdn with a set drop probability for the dn-th discard layer; the discard layer output feature vector has the same size DWdn×DHdn×DDdn as its input;
3.7.4 judge whether the number of elements of the discard layer output feature vector exceeds thd, thd being the second threshold of the discard layer output feature vector and a positive integer; if so, let fn = fn+1 and dn = dn+1, take the discard layer output as the input feature vector of the next fully connected layer, and turn to step 3.6.2; if not, execute step 3.8;
3.8 build a fully connected layer on the feature vector output by the preceding discard layer; the weight matrix Afn of this fully connected layer has C rows, and the offset fbfn has dimension C;
3.9 let CN = cn, PN = pn, DN = dn and FN = fn; that is, the neural network model has CN convolution layers, PN pooling layers, DN discard layers and FN+1 fully connected layers;
Fourth step, initialize the model parameters of the neural network, obtaining the initialized convolution kernels in the convolution layers, the initialized weights of the convolution kernels, the initialized weight matrices A1,…,Afn,…,AFN+1 in the fully connected layers, the initialized biases in the convolution kernels, and the initialized offsets fb1,…,fbfn,…,fbFN+1 in the fully connected layers;
Fifth step, initialize the iteration counter en = 1;
Sixth step, initialize the group counter bn = 1;
Seventh step, initialize the variable n = (bn−1)×in + 1;
Eighth step, forward-propagate G′n, i.e. propagate the n-th SAR image forward through the constructed neural network model to obtain a probability prediction of the SAR image category, as follows:
8.1 initialize the layer number variables: set the convolution layer variable cn = 1, the pooling layer variable pn = 1, the discard layer variable dn = 1 and the fully connected layer variable fn = 1;
8.2 compute the convolution layer output feature map as follows:
8.2.1 if cn = 1, let the input feature map of the cn-th convolution layer be the n-th input image G′n; otherwise execute 8.2.2;
8.2.2 perform zero element filling on the input feature map of the cn-th convolution layer to obtain the padded feature map;
8.2.3 initialize kncn = 1;
8.2.4 compute the convolution of the kncn-th convolution kernel with the padded feature map: the value of the convolution result at coordinate (cwcn, chcn, kncn) is the sum, over the kernel position coordinates (kwcn, khcn) and the input depth index cdcn, of the kernel values multiplied by the padded feature map values at the corresponding position coordinates (cwcn, chcn, cdcn), plus the kncn-th bias;
8.2.5 apply the activation function σ(·) to the values of the convolution result to obtain the nonlinearly processed convolution result;
8.2.6 map the weight of each convolution kernel into 0–1 with the sigmoid (S-shaped) function, obtaining the mapped convolution kernel weight;
8.2.7 weight the nonlinearly processed convolution result by the mapped convolution kernel weight to obtain the output of the convolution layer;
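A numpy sketch of steps 8.2.2-8.2.7 for one convolution layer, assuming σ(·) is the logistic sigmoid (the claim only names it an activation function) and using our own names throughout:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_forward(CG, kernels, biases, weights, chi=1, stride=1):
    # CG: (CW, CH, CD) input; kernels: (KN, KW, KH, CD);
    # biases, weights: (KN,) per-kernel bias and scalar weight.
    CW, CH, CD = CG.shape
    KN, KW, KH, _ = kernels.shape
    padded = np.pad(CG, ((chi, chi), (chi, chi), (0, 0)))   # 8.2.2
    OW = (CW + 2 * chi - KW) // stride + 1
    OH = (CH + 2 * chi - KH) // stride + 1
    out = np.zeros((OW, OH, KN))
    for kn in range(KN):                                    # 8.2.3-8.2.4
        for i in range(OW):
            for j in range(OH):
                patch = padded[i*stride:i*stride+KW, j*stride:j*stride+KH, :]
                out[i, j, kn] = np.sum(patch * kernels[kn]) + biases[kn]
        # 8.2.5 nonlinearity, then 8.2.6-8.2.7 sigmoid-mapped kernel weight
        out[:, :, kn] = sigmoid(out[:, :, kn]) * sigmoid(weights[kn])
    return out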
8.3 compute the pooling layer output feature map as follows:
8.3.1 take the output of the preceding convolution layer as the input feature map PGpn of the pn-th pooling layer;
8.3.2 perform a sliding-window operation on PGpn according to the sliding window size and the sliding step of the pn-th pooling layer, obtaining the window areas;
8.3.3 pool each window area separately; each position coordinate of the pooling layer output feature map corresponds to one window area of the input feature map, and the pooled value at that position is obtained by applying either the maximum function or the mean function over the coordinates within the window area; if the maximum function is taken, record the coordinates within the sliding window area at which the maximum occurs, then turn to 8.4; if the mean function is taken, turn directly to 8.4;
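A sketch of step 8.3 (names ours): slide a k×k window with stride s over each depth channel and take the maximum, recording its coordinates for the back-propagation of step 12.4.2, or take the mean:

import numpy as np

def pool_forward(PG, k=2, s=2, mode="max"):
    W, H, D = PG.shape
    OW, OH = (W - k) // s + 1, (H - k) // s + 1
    out = np.zeros((OW, OH, D))
    argmax = {}                            # maxima coordinates for 12.4.2
    for d in range(D):
        for i in range(OW):
            for j in range(OH):
                win = PG[i*s:i*s+k, j*s:j*s+k, d]
                if mode == "max":
                    out[i, j, d] = win.max()
                    r, c = np.unravel_index(win.argmax(), win.shape)
                    argmax[(i, j, d)] = (i*s + r, j*s + c)
                else:                      # mean pooling
                    out[i, j, d] = win.mean()
    return out, argmax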
8.4 compute the discard layer output feature map as follows:
8.4.1 take the pooling layer output as the input feature map DGdn of the dn-th discard layer;
8.4.2 use the random sequence generation function to generate a random sequence of 1 to DDdn; using the random sequence as an index, assign the one-dimensional matrix Φ of dimension DDdn: set the elements of Φ indexed by the first ⌊drop probability × DDdn⌋ entries of the random sequence to 0 and the remaining elements to 1, where ⌊·⌋ denotes rounding down;
8.4.3 multiply Φ with DGdn, i.e. multiply every depth channel dddn of DGdn by Φ(dddn);
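A sketch of steps 8.4.2-8.4.3 under the reading above (the drop-probability symbol p is ours):

import numpy as np

def dropout_channels(DG, p, rng):
    DD = DG.shape[-1]
    phi = np.ones(DD)                       # one-dimensional matrix Φ
    phi[rng.permutation(DD)[: int(np.floor(p * DD))]] = 0.0
    return DG * phi                         # broadcasts Φ over width and height

# Example: drop 30% of the 16 depth channels of an 8x8x16 feature map.
rng = np.random.default_rng(0)
out = dropout_channels(np.ones((8, 8, 16)), 0.3, rng)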
8.5 let cn = cn+1, pn = pn+1 and dn = dn+1; judge whether cn ≤ CN; if satisfied, turn to 8.2; if not, execute 8.6;
8.6 compute the fully connected layer output feature vector as follows:
8.6.1 if fn = 1, convert the feature map output by the (dn−1)-th discard layer into a one-dimensional feature vector, which serves as the input feature vector FGfn of the fully connected layer;
8.6.2 multiply the weight matrix Afn by FGfn and add the offset fbfn, obtaining the intermediate feature vector FGfn′ of the fully connected layer, calculated as:
FGfn′ = Afn × FGfn + fbfn (11);
8.6.3 apply the activation function σ(·) to the values of FGfn′ to obtain the nonlinearly processed feature vector;
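Formula (11) plus step 8.6.3 in a few lines of numpy (a sketch; sigmoid as σ is our assumption):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fc_forward(A, FG, fb):
    FGp = A @ FG + fb        # formula (11): FG'_fn = A_fn x FG_fn + fb_fn
    return sigmoid(FGp)      # 8.6.3: σ(·) applied elementwise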
8.7 compute the discard layer output feature vector as follows:
8.7.1 take the fully connected layer output as the input feature vector DGdn of the dn-th discard layer;
8.7.2 use the random sequence generation function to generate a random sequence of 1 to DDdn; using the random sequence as an index, assign the one-dimensional matrix Φ of dimension DDdn: set the elements of Φ indexed by the first ⌊drop probability × DDdn⌋ entries of the random sequence to 0 and the remaining elements to 1;
8.7.3 multiply Φ with DGdn, specifically:
8.7.3.1 initialize dddn=1;
8.7.3.2 let DGdn(dddn) = Φ(dddn) × DGdn(dddn);
8.7.3.3 let dddn = dddn+1; judge whether dddn ≤ DDdn; if satisfied, turn to 8.7.3.2; if not, the assignment is finished; execute 8.8;
8.8 let fn = fn+1 and dn = dn+1; judge whether fn ≤ FN; if satisfied, turn to 8.6; if not, execute 8.9;
8.9 compute the output feature vector of the last fully connected layer as follows:
8.9.1 multiply the weight matrix Afn by FGfn and add the offset fbfn, obtaining the intermediate feature vector (FGfn)′ of the fully connected layer, calculated as:
(FGfn)′ = Afn × FGfn + fbfn (12)
8.9.2 apply the softmax function to the values of (FGfn)′ to obtain the fully connected layer result, whose c-th element is exp((FGfn)′(c)) divided by the sum of exp((FGfn)′(cc)) over all C elements;
this c-th element is the predicted probability that the image belongs to class c;
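Step 8.9.2's softmax as a numpy sketch (subtracting the maximum is our numerical safeguard, not part of the claim):

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()       # element c = predicted probability of class c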
Ninth step, calculate the loss function Jn of the n-th SAR image from the predicted class probabilities and L′n, where
L′n(cc) denotes the cc-th element of L′n;
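Given the one-hot labels and the softmax output, the natural reading of the ninth step is the cross-entropy Jn = −Σcc L′n(cc)·log p(cc); the sketch below assumes that form:

import numpy as np

def cross_entropy(p, L):
    # p: softmax output; L: one-hot label L'_n; epsilon is our safeguard.
    return -np.sum(L * np.log(p + 1e-12))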
9.1 let n = n+1; judge whether bn = BN; if satisfied, execute 9.2; if not, execute 9.3;
9.2 judge whether n ≤ N; if satisfied, turn to the eighth step; if not, execute the tenth step;
9.3 judge whether n ≤ bn×in; if satisfied, turn to the eighth step; if not, execute the tenth step;
Tenth step, average the loss functions obtained for the SAR images in the bn-th group to get the loss function JJ(en−1)×in+bn of the bn-th group in the en-th iteration; judge whether bn = BN; if satisfied, turn to 10.1; if not, turn to 10.2;
10.1 JJ(en−1)×in+bn is the sum of the Jn over the last group divided by N−(BN−1)×in, the number of images in the last group;
10.2 JJ(en−1)×in+bn is the sum of the Jn over the bn-th group divided by in;
Eleventh step, judge whether the loss function JJ(en−1)×in+bn of the bn-th group in the en-th iteration is less than JJM, where JJM is the loss function threshold; if satisfied, the forward propagation is completed and the seventeenth step is carried out; if not, the twelfth step is carried out;
Twelfth step, back-propagate Jn through the network and adjust the neural network model parameters, namely the convolution kernels, the weights and biases of the convolution kernels in the convolution layers, and the weight matrices and offsets in the fully connected layers, specifically:
12.1 initialize the layer number variables: set the convolution layer variable cn = CN, the pooling layer variable pn = PN, the discard layer variable dn = DN and the fully connected layer variable fn = FN+1;
12.2 calculate the partial derivatives of Jn with respect to the weight matrices A1,…,Afn,…,AFN+1 and the offsets fb1,…,fbfn,…,fbFN+1 in the fully connected layers:
12.2.1 if fn = FN+1, compute the error term of the output layer using softmax′; otherwise compute the error term of the fn-th layer from (A(fn+1))T, the transpose of the matrix A(fn+1), and the error term of the (fn+1)-th layer, where softmax′ and S′ denote the derivatives of the softmax function and the sigmoid function;
12.2.2 calculate the partial derivative of Jn with respect to the weight matrix Afn and the partial derivative of Jn with respect to the offset fbfn;
12.2.3 let fn = fn−1; judge whether fn ≥ 1; if satisfied, turn to 12.2; if not, turn to 12.3;
12.3 if pn = PN, convert the partial derivative of Jn with respect to FGfn+1 into three-dimensional form and turn to 12.4; otherwise turn directly to 12.4;
12.4 use the partial derivative of Jn with respect to the pooling layer output to calculate the partial derivatives of the loss function Jn with respect to the elements of PGpn;
12.4.1 if the maximum function was taken in 8.3.3, turn to 12.4.2; if the mean function was taken, turn to 12.4.3;
12.4.2 let the partial derivative of Jn with respect to the element of PGpn at the recorded maximum coordinates of each sliding window area equal the partial derivative of Jn with respect to the corresponding element of the pooling layer output; the partial derivatives of Jn with respect to all other elements of PGpn are 0;
12.4.3 let the partial derivative of Jn with respect to every element of PGpn within a sliding window area equal the partial derivative of Jn with respect to the corresponding output element divided by the number of elements in the window;
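A sketch of steps 12.4.2-12.4.3, paired with the pool_forward sketch above (names ours): max pooling routes each incoming partial derivative to the recorded maximum coordinate, mean pooling spreads it uniformly over the k×k window:

import numpy as np

def pool_backward(dOut, in_shape, k, s, mode="max", argmax=None):
    dPG = np.zeros(in_shape)
    OW, OH, D = dOut.shape
    for d in range(D):
        for i in range(OW):
            for j in range(OH):
                if mode == "max":          # 12.4.2: all other elements stay 0
                    r, c = argmax[(i, j, d)]
                    dPG[r, c, d] += dOut[i, j, d]
                else:                      # 12.4.3: uniform spread
                    dPG[i*s:i*s+k, j*s:j*s+k, d] += dOut[i, j, d] / (k * k)
    return dPG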
12.5 take the partial derivative of Jn with respect to PGpn as the partial derivative of Jn with respect to the output of the cn-th convolution layer;
12.6 calculate the partial derivatives of Jn with respect to the convolution kernels, the weights of the convolution kernels and the biases in the convolution kernels, as follows:
12.6.1 calculate the partial derivative of Jn with respect to the convolution kernel, where σ′(·) is the derivative of the activation function σ(·);
12.6.2 calculate the partial derivative of Jn with respect to the weight of the convolution kernel;
12.6.3 calculate the partial derivative of Jn with respect to the bias in the convolution kernel;
12.6.4 let cn = cn−1, pn = pn−1 and dn = dn−1; judge whether cn ≥ 1; if not satisfied, back-propagation is completed; turn to the thirteenth step; if satisfied, turn to 12.6.5;
12.6.5 calculate the partial derivative of Jn with respect to the input feature map of the current convolution layer, as follows:
12.6.5.1 perform zero element filling on the partial-derivative feature map;
12.6.5.2 rotate the convolution kernel clockwise by 180 degrees to obtain the rotated matrix;
12.6.5.3 convolve the zero-padded partial-derivative feature map with the rotated matrix to obtain the required partial derivatives;
12.6.5.4 turn to 12.4;
Thirteenth step, let n = n+1; judge whether bn = BN; if satisfied, turn to 13.1; if not, turn to 13.2;
13.1 judge whether n ≤ N; if satisfied, turn to the twelfth step; if not, turn to 13.3;
13.2 judge whether n ≤ bn×in; if satisfied, turn to the twelfth step; if not, turn to 13.3;
13.3 average, over the SAR images in the bn-th group, the partial derivatives of the loss function with respect to the convolution kernels in the convolution layers, the weights of the convolution kernels in the convolution layers, the biases in the convolution kernels, the weight matrices A1,…,Afn,…,AFN+1 in the fully connected layers and the offsets fb1,…,fbfn,…,fbFN+1 in the fully connected layers, as follows:
13.3.1 judge whether bn = BN; if satisfied, turn to 13.3.2; if not, turn to 13.3.3;
13.3.2 divide the summed partial derivatives of the last group by N−(BN−1)×in, the number of images in the last group;
turn to the fourteenth step;
13.3.3 divide the summed partial derivatives of the bn-th group by in;
Fourteenth step, update the convolution kernels, the weights of the convolution kernels, the biases in the convolution kernels, the weight matrices A1,…,Afn,…,AFN+1 in the fully connected layers and the offsets fb1,…,fbfn,…,fbFN+1 in the fully connected layers using the output of the thirteenth step, specifically:
14.1 update the first-moment estimate of each averaged partial derivative,
where β1 is a given first hyper-parameter;
14.2 update the second-moment estimate of each averaged partial derivative,
where β2 is a given second hyper-parameter;
14.3 form the bias-corrected first- and second-moment estimates;
14.4 update the convolution kernels, the kernel weights, the kernel biases, Afn and fbfn from the bias-corrected moment estimates,
where η is the initial learning rate;
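Steps 14.1-14.4, with β1 = 0.9 and β2 = 0.999 as in claim 6, match the Adam update rule, so this sketch assumes Adam; m, v, t and eps are our names, and θ stands for any parameter (kernel, kernel weight, bias, Afn or fbfn) with averaged gradient g:

import numpy as np

def adam_update(theta, g, m, v, t, eta=0.001, beta1=0.9, beta2=0.999,
                eps=1e-8):
    m = beta1 * m + (1 - beta1) * g          # 14.1 first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # 14.2 second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # 14.3 bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)   # 14.4 update
    return theta, m, v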
Fifteenth step, let bn = bn+1; judge whether bn ≤ BN; if satisfied, turn to the seventh step; if not, turn to the sixteenth step;
Sixteenth step, let en = en+1; judge whether en ≤ EN; if satisfied, turn to the sixth step; if not, the training is finished; execute the seventeenth step;
Seventeenth step, use the trained neural network model to identify the SAR image TG to be identified, as follows:
17.1 prepare to forward-propagate the image in the network:
17.1.1 initializing the layer number variable dn of the discarded layer to 1;
17.1.2 set the drop probabilities of all discard layers to 0;
17.2 forward-propagate TG through the network by the forward propagation method of the eighth step, obtaining the class probability vector;
17.3 find the position cm of the maximum in the class probability vector, i.e. the index of the most probable class;
17.4 output the cm-th of the C categories in the label as the identification result.
2. The method of claim 1, wherein the method in step 2.3 of converting each of the Wn×Hn pixels of Gn into a real number comprises the following steps:
2.3.1 let the row variable p be 1;
2.3.2 let column variable q be 1;
2.3.3 let Gn(p,q) = √(a² + b²), where Gn(p,q) is the value of Gn at point (p,q), a is the real part of the complex datum Gn(p,q), and b is its imaginary part;
2.3.4 q = q + 1;
2.3.5 determine whether q ≤ Wn; if yes, turn to 2.3.3; if not, turn to 2.3.6;
2.3.6 p = p + 1;
2.3.7 determine whether p ≤ Hn; if yes, turn to 2.3.2; if not, Gn has been converted into a real image.
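Claim 2's conversion, assuming from the real part a and imaginary part b in 2.3.3 that each pixel becomes the magnitude √(a² + b²); numpy applies it to the whole image at once instead of the claim's double loop:

import numpy as np

def complex_to_real(G):
    return np.sqrt(G.real ** 2 + G.imag ** 2)   # equivalent to np.abs(G)

# Example: a 2x2 complex SAR patch.
G = np.array([[3 + 4j, 1 + 0j], [0 + 2j, 6 - 8j]])
print(complex_to_real(G))   # [[ 5.  1.] [ 2. 10.]]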
3. The method of claim 1, wherein the method in step 2.5 of cropping G1,…,Gn,…,GN is: take 1.1 to 2 times the maximum target width over all images as WG, take 1.1 to 2 times the maximum target height over all images as HG, and crop each image to WG×HG centered on the target.
4. The method according to claim 1, wherein the random sequence generation function refers to the randperm function in MATLAB, and the random function refers to the rand function in MATLAB.
5. The synthetic aperture radar image recognition method of claim 1, wherein, in the convolution layers, the convolution kernel size of each convolution layer takes values in a set range; when cn = 1, the value range of KNcn is KNcn ∈ [10,20], and when cn ≠ 1, the value range of KNcn is KNcn ∈ [KNcn−1, 2×KNcn−1]; the sliding step of the convolution kernel is set to 1 or 2, with zero element filling size χcn; in the pooling layers, the sliding window size is set to 2 or 3, with a corresponding sliding step.
6. The synthetic aperture radar image recognition method of claim 1, wherein in ∈ [1,64]; the first threshold thf of the discard layer output feature map is set to 3000, and the second threshold thd of the discard layer output feature vector is set to 1000; the loss function threshold JJM is less than 0.01; β1 = 0.9; β2 = 0.999; η ≤ 0.01.
7. The synthetic aperture radar image recognition method of claim 1, wherein the fourth step initializes model parameters of the neural network using an Xavier method by:
4.1 initialize the convolution kernels in the convolution layers:
4.1.1 initialize the convolution layer variable cn = 1;
4.1.2 initialize the convolution kernel number variable kncn = 1 for the cn-th convolution layer;
4.1.3 use the random function to generate a random matrix whose elements each take a value in (0,1) and whose dimensions equal those of the convolution kernel;
4.1.4 initialize the convolution kernel so that, as formula (5) expresses, the element at each position of the random matrix has 0.5 subtracted from it and is then multiplied by the Xavier scale factor; all matrix operations are elementwise in this sense;
4.1.5 let kncn = kncn+1; judge whether kncn ≤ KNcn; if yes, turn to 4.1.3; if not, turn to 4.1.6;
4.1.6 let cn = cn+1; judge whether cn ≤ CN; if yes, turn to 4.1.2; if not, the convolution kernels are initialized; turn to 4.2;
4.2 initialize the weights of the convolution kernels in the convolution layers:
4.2.1 initialize the convolution layer variable cn = 1;
4.2.2 initialize kncn = 1;
4.2.3 use the random function to generate a random number matrix whose elements each take a value in (0,1);
4.2.4 initialize the weight of the kncn-th convolution kernel from the random number matrix in the same manner as 4.1.4;
4.2.5 let kncn = kncn+1; judge whether kncn ≤ KNcn; if yes, turn to 4.2.3; if not, execute 4.2.6;
4.2.6 let cn = cn+1; judge whether cn ≤ CN; if yes, turn to 4.2.2; if not, initialization of the convolution kernel weights is completed; execute 4.3;
4.3 initialize the weight matrices A1,…,Afn,…,AFN+1 in the fully connected layers:
4.3.1 initialize fn to 1;
4.3.2 use the random function to generate a random number matrix RAfn whose elements each take a value in (0,1);
4.3.3 initialize Afn from RAfn in the same manner as 4.1.4;
4.3.4 let fn = fn+1; judge whether fn ≤ FN+1; if yes, turn to 4.3.2; if not, turn to 4.4;
4.4 initialize the biases in the convolution kernels and the offsets fb1,…,fbfn,…,fbFN+1 in the fully connected layers by assigning all biases the value 0.
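Claim 7's pattern, uniform (0,1) draws shifted by 0.5 and scaled, matches the Xavier uniform initialization; a sketch assuming the usual Xavier factor 2·√(6/(fan_in+fan_out)), which is our assumption for the lost scale in formula (5):

import numpy as np

def xavier_init(shape, fan_in, fan_out, rng):
    scale = 2.0 * np.sqrt(6.0 / (fan_in + fan_out))   # assumed factor
    return (rng.random(shape) - 0.5) * scale          # (U(0,1) - 0.5) * scale

rng = np.random.default_rng(0)
A1 = xavier_init((128, 256), 256, 128, rng)   # e.g. a weight matrix A_fn
fb1 = np.zeros(128)                           # 4.4: all biases assigned 0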
8. The synthetic aperture radar image recognition method according to claim 1, wherein the method of zero element filling in step 8.2.2 comprises the following steps:
8.2.2.1 initialize an all-zero matrix of size (CWcn+2×χcn)×(CHcn+2×χcn)×CDcn;
8.2.2.2 initialize cdcn = 1, cdcn being the coordinate of the third dimension on the feature map;
8.2.2.3 initialize chcn = 1, chcn being the coordinate of the second dimension on the feature map;
8.2.2.4 initialize cwcn = 1, cwcn being the coordinate of the first dimension on the feature map;
8.2.2.5 let the value of the all-zero matrix at coordinate (cwcn+χcn, chcn+χcn, cdcn) equal the value of the input feature map at coordinate (cwcn, chcn, cdcn);
8.2.2.6 let cwcn = cwcn+1; judge whether cwcn ≤ CWcn; if yes, turn to 8.2.2.5; if not, turn to 8.2.2.7;
8.2.2.7 let chcn = chcn+1; judge whether chcn ≤ CHcn; if yes, turn to 8.2.2.4; if not, turn to 8.2.2.8;
8.2.2.8 let cdcn = cdcn+1; judge whether cdcn ≤ CDcn; if yes, turn to 8.2.2.3; if not, end.
9. The method of claim 1, wherein the method in step 8.2.7 of weighting the nonlinearly processed convolution results to obtain the output of the convolution layer comprises the following steps:
8.2.7.1 initializing kncn=1;
8.2.7.2 initialize chcn=1;
8.2.7.3 initialize cwcn = 1;
8.2.7.4 let the output value of the convolution layer at coordinate (cwcn, chcn, kncn) equal the mapped weight of the kncn-th convolution kernel multiplied by the nonlinearly processed convolution result at that coordinate;
8.2.7.5 let cwcn = cwcn+1; judge whether cwcn ≤ CWcn+2×χcn; if yes, turn to 8.2.7.4; if not, turn to 8.2.7.6;
8.2.7.6 let chcn = chcn+1; judge whether chcn ≤ CHcn+2×χcn; if yes, turn to 8.2.7.3; if not, turn to 8.2.7.7;
8.2.7.7 let kncn = kncn+1; judge whether kncn ≤ KNcn; if yes, turn to 8.2.7.2; if not, end.
10. The method of claim 1, wherein the method in step 8.4.3 of multiplying Φ with DGdn comprises the following steps:
8.4.3.1 initialize dddn=1;
8.4.3.2 initialize dhdn=1;
8.4.3.3 initialize dwdn=1;
8.4.3.4 let DGdn(dwdn, dhdn, dddn) = Φ(dddn) × DGdn(dwdn, dhdn, dddn);
8.4.3.5 let dwdn = dwdn+1; judge whether dwdn ≤ DWdn; if yes, turn to 8.4.3.4; if not, turn to 8.4.3.6;
8.4.3.6 let dhdn = dhdn+1; judge whether dhdn ≤ DHdn; if yes, turn to 8.4.3.3; if not, turn to 8.4.3.7;
8.4.3.7 let dddn = dddn+1; judge whether dddn ≤ DDdn; if yes, turn to 8.4.3.2; if not, end.
11. The method of claim 1, wherein the method in step 8.6.1 of converting the feature map output by the (dn−1)-th discard layer into a one-dimensional feature vector is:
8.6.1.1 initialize the coordinate of the third dimension of the feature map to 1;
8.6.1.2 initialize the coordinate of the second dimension of the feature map to 1;
8.6.1.3 initialize the coordinate of the first dimension of the feature map to 1;
8.6.1.4 copy the feature map element at the current coordinates into the next position of the one-dimensional feature vector;
8.6.1.5 increment the first-dimension coordinate; judge whether it is still within range; if yes, turn to 8.6.1.4; if not, execute 8.6.1.6;
8.6.1.6 increment the second-dimension coordinate; judge whether it is still within range; if yes, turn to 8.6.1.3; if not, execute 8.6.1.7;
8.6.1.7 increment the third-dimension coordinate; judge whether it is still within range; if yes, turn to 8.6.1.2; if not, end.
12. The method of claim 1, wherein the method in step 12.3 of converting the partial derivative of Jn with respect to FGfn+1 into three dimensions comprises the following steps:
12.3.1 initialize the coordinate of the third dimension of the target three-dimensional array to 1;
12.3.2 initialize the coordinate of the second dimension to 1;
12.3.3 initialize the coordinate of the first dimension to 1;
12.3.4 copy the current element of the one-dimensional partial-derivative vector into the corresponding position of the three-dimensional array;
12.3.5 increment the first-dimension coordinate; judge whether it is still within range; if so, turn to 12.3.4; if not, turn to 12.3.6;
12.3.6 increment the second-dimension coordinate; judge whether it is still within range; if so, turn to 12.3.3; if not, turn to 12.3.7;
12.3.7 increment the third-dimension coordinate; judge whether it is still within range; if so, turn to 12.3.2; if not, end.
13. The method of claim 1, wherein the method of zero element filling in step 12.6.5.1 comprises the following steps:
12.6.5.1.1 initialize an all-zero matrix JCcn+1 of the required padded size;
12.6.5.1.2 initialize the coordinate of the third dimension of the matrix to 1;
12.6.5.1.3 initialize the coordinate of the second dimension to 1;
12.6.5.1.4 initialize the coordinate of the first dimension to 1;
12.6.5.1.5 copy the partial-derivative value at the current coordinates into JCcn+1 at the position offset by the filling size;
12.6.5.1.6 increment the first-dimension coordinate; judge whether it is still within range; if yes, turn to 12.6.5.1.5; if not, turn to 12.6.5.1.7;
12.6.5.1.7 increment the second-dimension coordinate; judge whether it is still within range; if yes, turn to 12.6.5.1.4; if not, turn to 12.6.5.1.8;
12.6.5.1.8 increment the third-dimension coordinate; judge whether it is still within range; if yes, turn to 12.6.5.1.3; if not, end.
CN201811430191.XA 2018-11-28 2018-11-28 Synthetic aperture radar image identification method Active CN109993050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811430191.XA CN109993050B (en) 2018-11-28 2018-11-28 Synthetic aperture radar image identification method


Publications (2)

Publication Number Publication Date
CN109993050A CN109993050A (en) 2019-07-09
CN109993050B true CN109993050B (en) 2019-12-27

Family

ID=67128672


Country Status (1)

Country Link
CN (1) CN109993050B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640066B (en) * 2020-06-02 2022-09-27 中国人民解放军国防科技大学 Image matrix column conversion acceleration method for target detection
CN112233042B (en) * 2020-11-05 2021-05-11 中国人民解放军国防科技大学 Method for rapidly generating large-scene SAR image containing non-cooperative target

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732243A (en) * 2015-04-09 2015-06-24 西安电子科技大学 SAR target identification method based on CNN
CN105512680A (en) * 2015-12-02 2016-04-20 北京航空航天大学 Multi-view SAR image target recognition method based on depth neural network
CN108280412A (en) * 2018-01-12 2018-07-13 西安电子科技大学 High Resolution SAR image object detection method based on structure changes CNN

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139395B (en) * 2015-08-19 2018-03-06 西安电子科技大学 SAR image segmentation method based on small echo pond convolutional neural networks
CN105913076B (en) * 2016-04-07 2019-01-08 西安电子科技大学 Classification of Polarimetric SAR Image method based on depth direction wave network
CN107967484B (en) * 2017-11-14 2021-03-16 中国计量大学 Image classification method based on multi-resolution


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Adaptive Learning Rate CNN for SAR ATR; Tian Zhuangzhuang et al.; 2016 CIE International Conference on Radar (RADAR); 2016-10-13; pp. 1-5 *
Research on SAR Image Target Recognition Based on Convolutional Neural Networks; Tian Zhuangzhuang et al.; Journal of Radars; 2016-06-30; Vol. 5, No. 3; pp. 320-325 *
Research on SAR Image Target Recognition Algorithms Based on Convolutional Neural Networks; Zhang Xiao et al.; Electronic Measurement Technology; 2018-07-31; Vol. 41, No. 14; pp. 92-96 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant