CN106407986B - Synthetic aperture radar image target recognition method based on a depth model - Google Patents

Synthetic aperture radar image target recognition method based on a depth model

Info

Publication number
CN106407986B
Authority
CN
China
Prior art keywords
convolution
depth model
training sample
filter
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610756338.9A
Other languages
Chinese (zh)
Other versions
CN106407986A (en)
Inventor
曹宗杰
肖蒙
崔宗勇
皮亦鸣
闵锐
李晋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610756338.9A
Publication of CN106407986A
Application granted
Publication of CN106407986B
Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a synthetic aperture radar (SAR) image target recognition method based on a depth model, comprising: image cropping; design of the depth model hierarchy, design of the feature-extraction filters, control of the number of parameters and prevention of over-fitting, activation functions and non-linearity enhancement, recognition/classification, and autonomous parameter correction and updating; depth model training; and target recognition. In the training process of the invention, the filter parameters are updated autonomously and iteratively, greatly reducing the overhead of feature selection and extraction. At the same time, the depth model extracts features of the target at different levels; because these features are obtained through training matched to the data, they characterize the target at a high level and improve the accuracy of SAR image target recognition.

Description

Synthetic aperture radar image target recognition method based on a depth model
Technical field
The present invention relates to machine learning and applied deep learning, and in particular to the application of deep learning methods to synthetic aperture radar image target recognition.
Background technique
Synthetic aperture radar (Synthetic Aperture Radar, hereinafter SAR) can provide high-resolution images around the clock and under all weather conditions. The mainstream approach in current SAR image target recognition systems is to train a classifier on features of the image target and realize SAR image target recognition with that classifier, so the performance of the classifier determines the recognition capability of the system.
The selection and extraction of features strongly influence classifier design and performance. Pattern recognition is the process of assigning a specific object to a particular class: a classifier is first designed from a number of samples according to the similarity between them, and the designed classifier then makes a classification decision on the sample to be recognized. Classification can be carried out either in the original data space or after transforming the raw data and mapping it into a feature space. The latter makes the design of the decision machine much easier: a more stable feature representation improves its performance, eliminates redundant or irrelevant information, and makes it easier to find the intrinsic connections between the objects under study. Features therefore determine both the similarity between samples and the success of classifier design. Once the purpose of classification is fixed, finding suitable features is the key problem of recognition.
SAR imaging is very particular. Compared with optical imagery, a SAR image appears as a sparse distribution of scattering centers, is highly sensitive to the imaging azimuth, and has complex background noise, so varying degrees of distortion arise easily. Explicit feature extraction from SAR images is therefore difficult and, in some applications, not always reliable. Meanwhile, the number of features affects the complexity of the recognition system: a large number of features does not guarantee better recognition, because features are not mutually independent and their correlations mean that bad features can significantly harm the system. Using fewer features reduces computation time, which is extremely important in real-time applications, but may leave the feature-based classifier under-trained and immature, sharply lowering recognition performance. For feature selection that can only be carried out through a training stage, selecting too many features amounts to fitting the training samples with an overly complex model. Since the training samples may contain various kinds of noise, a complex model may over-fit them, become sensitive to noise, lose its ability to generalize, and fail to recognize targets well. For all these reasons, feature selection and extraction is the hardest problem in SAR target recognition systems.
Summary of the invention
The object of the invention is as follows: in order to overcome the shortcomings of prior-art SAR image target recognition and achieve higher recognition accuracy, a SAR image target recognition method based on a depth model is provided.
A depth model is a multilayer perceptron with multiple hierarchical levels. The features selected and extracted at each level are obtained through the model's own matched training, so the extracted features can be regarded as a high-level characterization of the target. This high-level expressive power rests on the model's strong learning ability: image samples are taken directly as input data, which greatly reduces the overhead of feature selection and extraction during image preprocessing. Meanwhile, the depth model has good parallel processing and learning capabilities, can handle the complex environmental information of SAR images, and even tolerates considerable displacement, stretching and rotation of the samples.
Step 1: input the SAR images serving as training samples (let the number of training samples be M). The training samples should cover the different classes of recognition targets; the number of classes is denoted T, and each training sample carries a class label.
The SAR images used as training samples may be obtained by cropping the original synthetic aperture radar images around the target, so that each cropped SAR image contains a complete recognition target. Ensuring diversity of target azimuth and pitch angle in the cropped SAR images helps obtain a more mature depth model during training.
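As a concrete illustration of this cropping step, the following minimal Python sketch (not from the patent; the target center and output size are assumed inputs) crops a fixed-size patch around a target:

```python
import numpy as np

def center_crop(image: np.ndarray, center: tuple, size: int) -> np.ndarray:
    """Crop a size x size patch of `image` centered on `center` = (row, col)."""
    r, c = center
    half = size // 2
    # Clamp so the crop window stays entirely inside the image.
    r0 = min(max(r - half, 0), image.shape[0] - size)
    c0 = min(max(c - half, 0), image.shape[1] - size)
    return image[r0:r0 + size, c0:c0 + size]
```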
Step 2: build the depth model:
Step 201: construct the convolutional neural network module:
Construct a convolutional neural network module in which a convolution filter is cascaded with a pooling filter. The convolution filter performs sliding-window convolution on the input data to produce the convolution output; the pooling filter reduces the dimensionality of the input data (i.e. the convolution output) to produce the convolutional-layer output of the module. The dimensionality reduction is max filtering of the convolution output: the local maximum of each filter region replaces the whole region;
Step 202: set up a depth model with H layers. Layers 1 to H-1 of the depth model are H-1 cascaded convolutional neural network modules: the input of layer 1 is a training sample, the input of layers 2 to H-1 is the convolutional-layer output of the previous layer, and the size of the convolution filters decreases gradually from layer 1 to layer H-1. The pooling filter size is a preset value that can be set according to the application; the pooling filter sizes of the layers may be identical or different.
Layer H of the depth model contains a convolution filter that performs (non-sliding-window) convolutional filtering on the input data. The input of the layer-H convolution filter is the convolutional-layer output of layer H-1, and the size of the convolution filter equals the output feature map size of the layer H-1 convolutional neural network module.
The depth model hierarchy of the invention can extract image target features at different depths, mainly by convolving the image with convolution filters of size ω and taking the extracted features of the input image as output.
The convolution operation can be formulated as $S'_{i'j'} = \sum_{n=1}^{\omega}\sum_{m=1}^{\omega} w_{nm}\, S_{i+n-1,\, j+m-1}$, i.e. the input $S_{ij}$ (i, j are image coordinates) is convolved in sliding-window fashion with a preset stride $s_1$ (here $s_1 = 1$) to obtain the output $S'_{i'j'}$ at the corresponding position, where $w_{nm}$ is the parameter in row n, column m of the convolution filter. Adjusting ω controls the size of the convolution filter; as the depth structure deepens, the feature image size shrinks gradually, ω is reduced accordingly, and the number of convolution filters is reasonably increased.
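A minimal numpy sketch of this sliding-window convolution, assuming the reconstructed indexing above (valid convolution with stride s1):

```python
import numpy as np

def conv2d_valid(S: np.ndarray, w: np.ndarray, s1: int = 1) -> np.ndarray:
    """Each output pixel is the weighted sum of an omega x omega window of S."""
    omega = w.shape[0]
    h = (S.shape[0] - omega) // s1 + 1
    v = (S.shape[1] - omega) // s1 + 1
    out = np.zeros((h, v))
    for i in range(h):
        for j in range(v):
            window = S[i * s1:i * s1 + omega, j * s1:j * s1 + omega]
            out[i, j] = np.sum(w * window)  # sum_n sum_m w_nm * S_{i+n-1, j+m-1}
    return out
```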
The convolution filters act directly on the image to extract image features. Since the depth model hierarchy learns and updates autonomously, the system output feeds back into the network parameters (convolution filter parameters) of each level: under this feedback, the Gaussian-randomly-initialized convolution filters correct their parameters autonomously and eventually become feature-extraction filters capable of extracting features that characterize the target at a high level. This feature selection and extraction is completed autonomously by the system, eliminating preprocessing steps such as the image feature extraction of traditional target recognition and greatly reducing the cost of target recognition. Moreover, all feature-extraction filters are obtained through matched training, and the resulting deep features are more conducive to target recognition by the system.
For each convolutional neural network module, the feature map output by the convolutional layer (convolution filter) has size $h_{o1} = (h_i - \omega)/s_1 + 1$, where $h_{o1}$, $h_i$ and ω are the output feature map size, the input feature map size and the convolution filter size, respectively. The stride $s_1$ effectively reduces convolution over repeated regions and improves the operating speed of the depth model. As the depth structure deepens, the feature image size shrinks gradually, and the number of convolution filters is reasonably increased to guarantee the diversity of the features extracted by the hierarchy.
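A quick worked check of this size formula (the 128 x 128 input and 9 x 9 filter are illustrative values only):

```python
def conv_output_size(h_i: int, omega: int, s1: int = 1) -> int:
    """h_o1 = (h_i - omega) / s1 + 1 for a valid convolution."""
    return (h_i - omega) // s1 + 1

assert conv_output_size(128, 9, 1) == 120  # 128x128 input, 9x9 filter, stride 1
```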
To extract the deep features of the target, the depth model has a complex hierarchy and a fairly large number of convolution filters, so it generates a large number of parameters, which burdens the recognition processing. Moreover, each convolution filter is sensitive only to a specific feature, which means that after feature extraction the feature maps behind the convolutional layer contain a great deal of redundant information, on which subsequent levels would expend vast resources. To control the number of depth model parameters and reject redundant information, each convolutional neural network module applies pooling in sliding-window fashion (with stride $s_2$), replacing the whole current filter region (current window) of the convolution output with its local maximum as output: $e_o = \max_{0 \le n,\, m < \omega_d} e_{i+n,\, j+m}$, where $e_{ij}$ is the pixel value in row i, column j of the image, $e_{i+n,\,j+m}$ has the same meaning as $e_{ij}$, and $e_o$ is the output pixel value.
The feature map output by the pooling filter has size $h_{o2} = (h_i - \omega_d)/s_2 + 1$, where $\omega_d$ is the pooling filter size and the stride $s_2$ sets the interval between adjacent pooling windows. For each convolutional neural network module, the size of the output feature map is the size $h_{o2}$ of the pooling filter output.
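The pooling operation and its output size can be sketched together, again as an illustration under the stated assumptions rather than the patent's implementation:

```python
import numpy as np

def maxpool2d(x: np.ndarray, omega_d: int, s2: int) -> np.ndarray:
    """Replace each omega_d x omega_d window (stride s2) with its local maximum."""
    h = (x.shape[0] - omega_d) // s2 + 1  # h_o2 = (h_i - omega_d)/s2 + 1
    w = (x.shape[1] - omega_d) // s2 + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i * s2:i * s2 + omega_d, j * s2:j * s2 + omega_d].max()
    return out
```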
One specific aim of deep learning is to extract the key factors from the raw data. Raw data is usually entangled in highly dense feature vectors, and these feature vectors are interrelated: one key factor may be wrapped up in several or even a great many feature vectors. To further reject redundant information while retaining the data characteristics to the greatest approximate degree, sparse matrices in which most elements are 0 can be used, for example through correction functions such as the sigmoid activation function $f(x) = (1+e^{-x})^{-1}$, the hyperbolic tangent functions $f(x) = \tanh(x)$ and $f(x) = |\tanh(x)|$, and the rectified linear unit (Rectified Linear Unit) $f(x) = \max(0, x)$, where x denotes an individual element of the convolution output.
The preferred correction function of the invention is $f(x) = \max(0, x)$: for each element of the convolution output, the larger of the element and 0 is taken as the correction result. A rectified linear unit is therefore introduced at every layer of the depth model to correct the output of the convolution filter. In the convolutional neural network modules, correction is performed first and pooling filtering afterwards; at layer H of the depth model, the corrected convolution output is taken as the final output of the depth model.
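For concreteness, the four candidate correction functions can be written directly; only the Python names are invented here:

```python
import numpy as np

def sigmoid(x):  return 1.0 / (1.0 + np.exp(-x))  # f(x) = (1 + e^-x)^-1
def tanh_f(x):   return np.tanh(x)                # f(x) = tanh(x)
def abs_tanh(x): return np.abs(np.tanh(x))        # f(x) = |tanh(x)|
def relu(x):     return np.maximum(0.0, x)        # f(x) = max(0, x), preferred
```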
The bottom layer (layer H) of the depth model uses a fully connected output: each feature map output by layer H-1 (of size L × W) is summed with weights to obtain the value $x_i = \sum_{n}\sum_{m} k_{nm} e_{nm}$, where the subscript of $x_i$ identifies the different feature maps of the same training sample (correspondingly, the different convolution filters of layer H), $k_{nm}$ is the parameter in row n, column m of the layer-H convolution filter, and $e_{nm}$ is the element in row n, column m of the feature map. The sums $x_i$ of all feature maps of the same training sample at layer H are combined into the output feature matrix, i.e. the feature vector matrix of the training sample $X = [x_1\, x_2\, x_3 \ldots x_p]^{\mathrm{T}}$, where p is the number of feature maps of each training sample at layer H.
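A sketch of this fully connected layer, with shapes assumed as (p, L, W):

```python
import numpy as np

def fully_connected(feature_maps: np.ndarray, filters: np.ndarray) -> np.ndarray:
    # feature_maps, filters: shape (p, L, W); each map is weighted elementwise
    # by its own L x W filter and summed to a scalar x_i. Returns X, shape (p,).
    return np.einsum('pnm,pnm->p', filters, feature_maps)

maps = np.random.randn(16, 11, 11)  # p = 16 feature maps of size 11 x 11 (assumed)
k = np.random.randn(16, 11, 11)     # one layer-H filter per feature map
X = fully_connected(maps, k)        # X = [x_1 x_2 ... x_p]^T
```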
Step 3: depth model training:
Step 301: initialize the iteration count d = 0; initialize the learning rate α to a preset value;
Step 302: randomly select N images from the training sample set as a sub training sample set, input them to layer 1 of the depth model, and obtain the feature vector matrix X of each training sample from the layer-H output of the depth model;
Compute the error values δ of the convolution filters level by level: the error value of the layer-H convolution filter is F - X, where the expected output F is a preset value; the error values of the convolution filters of layers 1 to H-1 are obtained as the product of the error value of the next-higher layer and the convolution filter parameters $w_{nm}$, with subscripts n = 1, 2, ..., ω and m = 1, 2, ..., ω, where ω is the size of the convolution filter. Update the parameters according to the error values of the convolution filters of each level: $w_{nm} = w_{nm} - \Delta w_{nm}$, where the correction $\Delta w_{nm}$ is computed from the learning rate α and the layer's error value δ.
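Since the patent's formula for $\Delta w_{nm}$ is not reproduced in the text, the following sketch assumes the usual gradient-descent form, learning rate times error value times filter input:

```python
import numpy as np

def update_filter(w: np.ndarray, delta: float, filter_input: np.ndarray,
                  alpha: float) -> np.ndarray:
    """One parameter update w_nm <- w_nm - delta_w_nm for a single filter."""
    delta_w = alpha * delta * filter_input  # assumed form of the correction
    return w - delta_w
```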
If, during recognition, a Softmax regression model is used to compute the probabilities that the feature vector matrix of the image to be recognized belongs to each of the T target classes, then in step 302 the Softmax regression model parameters $\theta_j$ (j = 1, 2, ..., T) must also be iteratively updated based on the feature vector matrix X:
The class probability matrix $h_\theta(X)$ of each feature vector matrix X is obtained from the Softmax regression model:
$h_\theta(X) = \dfrac{1}{\sum_{j=1}^{T} e^{\theta_j^{\mathrm{T}} X}} \left[ e^{\theta_1^{\mathrm{T}} X},\; e^{\theta_2^{\mathrm{T}} X},\; \ldots,\; e^{\theta_T^{\mathrm{T}} X} \right]^{\mathrm{T}}$
where the vector θ = (θ_1, θ_2, ..., θ_T) is randomly initialized, y denotes the class recognition result, e is the natural base, and $\theta_j^{\mathrm{T}}$ denotes the transpose of $\theta_j$.
The N training samples of the current iteration are denoted $(X^{(1)}, y^{(1)}), (X^{(2)}, y^{(2)}), \ldots, (X^{(N)}, y^{(N)})$, where $X^{(i)}$ is the feature vector matrix of the i-th training sample (obtained from the final output of the depth model) and $y^{(i)} \in \{1, 2, \ldots, T\}$ is the class label of $X^{(i)}$. From the N pairs $(X^{(i)}, y^{(i)})$, the likelihood and log-likelihood functions are computed:
Likelihood function $L(\theta) = \prod_{i=1}^{N} P(y^{(i)} \mid X^{(i)}, \theta)$, where $P(y^{(i)} \mid X^{(i)}, \theta)$ is the probability that $X^{(i)}$ is classified as $y^{(i)}$; log-likelihood function $l(\theta) = \sum_{i=1}^{N} \sum_{j=1}^{T} I\{y^{(i)} = j\} \log \dfrac{e^{\theta_j^{\mathrm{T}} X^{(i)}}}{\sum_{l=1}^{T} e^{\theta_l^{\mathrm{T}} X^{(i)}}}$, where I{·} is the indicator function: I{·} = 1 if the argument is true, I{·} = 0 if it is false;
The cost function of the log-likelihood l(θ) is $J(\theta) = -\frac{1}{N}\, l(\theta)$; its minimization is carried out by the gradient descent algorithm:
The product of the gradient $\nabla_{\theta_j} J(\theta)$ and the learning rate α is taken as the regression model parameter correction: $\theta_j = \theta_j - \alpha\,\nabla_{\theta_j} J(\theta)$. At the next iteration, the last correction is used as the regression model parameters of the current iteration;
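The Softmax update of this step can be sketched as follows; the shapes, the 0-based labels (the patent numbers classes 1..T), and the function names are assumptions of this illustration:

```python
import numpy as np

def softmax_probs(theta: np.ndarray, X: np.ndarray) -> np.ndarray:
    """h_theta(X): the T class probabilities. theta: (T, p), X: (p,)."""
    scores = theta @ X
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

def gradient_step(theta, samples, labels, alpha):
    """One gradient-descent step on J(theta) = -l(theta)/N."""
    N, T = len(samples), theta.shape[0]
    grad = np.zeros_like(theta)
    for X, y in zip(samples, labels):          # samples: (N, p); labels in 0..T-1
        p = softmax_probs(theta, X)
        grad += np.outer(np.eye(T)[y] - p, X)  # (I{y = j} - p_j) X, per class j
    return theta + alpha * grad / N            # theta_j <- theta_j - alpha * dJ/dtheta_j
```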
Finally, update the iteration count: d = d + 1.
Step 303: determine whether the iteration count has reached the termination threshold; if so, execute step 304. Otherwise, determine whether the iteration count has reached the adjustment threshold: if so, reduce the learning rate α and execute step 302 with the updated convolution filter parameters $w_{nm}$; if not, execute step 302 directly with the updated convolution filter parameters $w_{nm}$;
Step 304: obtain the trained depth model from the current parameters $w_{nm}$ of the convolution filters of all levels, and take the correction of the last iteration as the final regression model parameters for the recognition of images to be recognized.
Step 4: input the SAR image to be recognized and crop it around the target to be recognized, obtaining an image to be recognized of the same size as the training samples;
input the image to be recognized into the trained depth model, which outputs the feature vector matrix of the image to be recognized;
Step 5: compute the probabilities that the feature vector matrix of the image to be recognized belongs to each of the T target classes and take the class with the highest probability as the target recognition result. Under the Softmax regression model, these probabilities are computed with the final regression model parameters obtained in step 3, and the class with the highest probability is taken as the target recognition result.
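A sketch of this decision step (theta is the trained regression parameter matrix, X the feature vector produced by the depth model; both shapes are assumed):

```python
import numpy as np

def predict(theta: np.ndarray, X: np.ndarray) -> int:
    """Return the class (1..T) with the largest Softmax probability."""
    scores = theta @ X                 # theta: (T, p), X: (p,)
    e = np.exp(scores - scores.max())  # stable softmax
    probs = e / e.sum()                # h_theta(X)
    return int(np.argmax(probs)) + 1   # classes labelled 1..T
```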
In conclusion the beneficial effects of the present invention are: be capable of directly processing target identification SAR image, efficiently reduce The workload needed for the preprocessing process of character selection and abstraction in object recognition task, the depth of extraction height match cognization target Layer feature, promotes the accuracy of target identification.
Detailed description of the invention
Fig. 1 is a schematic diagram of the depth model structure.
Fig. 2 is a schematic diagram of convolutional filtering.
Fig. 3 is a schematic diagram of max pooling.
Fig. 4 is an original MSTAR tank image.
Fig. 5 shows the filters of the depth model and the output feature maps of each layer.
Specific embodiment
To make the object, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The present invention is implemented with the 4-layer depth model structure shown in Fig. 1, in which layers 1-3 each comprise a convolution filter, a rectified linear unit and a pooling filter, and layer 4 is a fully connected layer comprising a convolution filter and a rectified linear unit. The input of the depth model is the training samples or test samples. The convolution filters of layers 1-3 perform sliding-window convolution on the input data with a preset stride s = 1 (as shown in Fig. 2-a) to produce the convolution output, as shown in Fig. 2-b; the pooling filter reduces the dimensionality of the convolution output by replacing each filter region with its local maximum, as shown in Fig. 3.
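The 4-layer forward pass of this embodiment can be sketched end to end; the filter sizes (9, 7, 5), the 2 x 2 pooling windows, and the random initialization below are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.signal import correlate2d  # sliding-window sum w_nm * S_{i+n, j+m}

def relu(x):
    return np.maximum(0.0, x)  # correction linear unit f(x) = max(0, x)

def maxpool(x, k, s):
    h = (x.shape[0] - k) // s + 1
    w = (x.shape[1] - k) // s + 1
    return np.array([[x[i*s:i*s+k, j*s:j*s+k].max() for j in range(w)]
                     for i in range(h)])

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 128))        # one 128 x 128 SAR sample
for size in (9, 7, 5):                     # filter size shrinks with depth (assumed)
    w = rng.standard_normal((size, size)) * 0.01
    x = maxpool(relu(correlate2d(x, w, mode='valid')), k=2, s=2)
k = rng.standard_normal(x.shape) * 0.01    # layer-4 filter equals the map size
feature = relu((k * x).sum())              # one entry x_i of the feature vector X
```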
In this embodiment, the training samples are extracted from the MSTAR image data (which can provide, for each vehicle, 72 samples at different view angles and different orientations). Fig. 4 shows a 128*128 SAR image containing three regions (tank, shadow and background), with fairly severe coherent speckle noise in the image.
Based on the acquired training sample set and the depth model of Fig. 1, the filters of each layer of the depth model and the output feature maps of each layer during iterative model training can be obtained, as shown in Fig. 5: 5-a shows the layer-1 convolutional-layer filters of the depth model; 5-b shows the feature maps output by the original SAR image after layer-1 convolution filtering, correction and dimensionality reduction, defined as the layer-1 feature maps; Fig. 5-c shows the feature maps output by the layer-1 feature maps after layer-2 convolution filtering, correction and dimensionality reduction, defined as the layer-2 feature maps; and Fig. 5-d shows the feature maps output after the layer-2 pooling output passes through layer-3 convolutional filtering and correction.
Based on the trained depth model, target recognition is carried out on different test samples (see Table 1). In this embodiment, the Softmax regression model computes, from the feature vector matrix output by the depth model, the probabilities that the image to be recognized belongs to each of the T target classes, and the class with the highest probability is taken as the recognition result. For the ten classes of vehicle targets in the MSTAR data set (2S1, BMP-2, BRDM_2, BTR60, BTR70, D7, T62, T72, ZIL_131 and ZSU_23_4), a recognition rate of 93.99% is achieved.
Table 1
Table 2 compares the average accuracy of the method of the present invention with the existing methods IGT (iterative graph thickening on discriminative graphical models), EMACH (extended maximum average correlation height filter), SVM (support vector machine), Cond Gauss (conditional Gaussian model) and AdaBoost (feature fusion via boosting of individual neural-net classifiers):
Table 2
The existing methods IGT, EMACH, SVM, Cond Gauss and AdaBoost use a pose-correction algorithm to improve performance; without pose correction, their accuracy rates drop to 88.60%, 83.90% (SVM), 86.10% (Cond Gauss) and 87.20% (AdaBoost). Although the convolutional neural network of the present invention is trained without any image preprocessing, it still performs outstandingly compared with most of the pose-corrected methods above; only the pose-corrected IGT method has a recognition accuracy about 1% higher than the present invention, which needs no pose correction. Compared with the other methods, the present invention saves the considerable resources and time otherwise spent on image preprocessing, and its operating cost is low.
The above description is only a specific embodiment. Unless specifically stated otherwise, any feature disclosed in this specification may be replaced by alternative features that are equivalent or serve a similar purpose; and all of the disclosed features, or all of the steps of the disclosed methods or processes, may be combined in any manner, except for mutually exclusive features and/or steps.

Claims (3)

1. A synthetic aperture radar image target recognition method based on a depth model, characterized in that it comprises the following steps:
Step 1: training sample acquisition:
Step 101: input original SAR images of the different classes of recognition targets, the number of classes being T;
Step 102: crop each original SAR image around its target to obtain a training sample set of images of identical size, and set a class identifier for each training sample;
Step 2: build the depth model:
Step 201: construct a convolutional neural network module in which a convolution filter is cascaded with a pooling filter, wherein the convolution filter performs sliding-window convolution on the input data to obtain the convolution output, and the pooling filter performs sliding-window dimensionality reduction on the input data to obtain the convolutional-layer output of the module, the dimensionality reduction being: max filtering of the convolution output, taking the local maximum of the current window as the filtering output of the current window;
Step 202: set up a depth model with H layers, layers 1 to H-1 of the depth model being H-1 cascaded convolutional neural network modules, the input of layer 1 being a training sample, the input of layers 2 to H-1 being the convolutional-layer output of the previous layer, and the size of the convolution filters of layers 1 to H-1 decreasing gradually;
layer H of the depth model comprises a convolution filter for convolutional filtering of the input data, the input of the layer-H convolution filter being the convolutional-layer output of layer H-1, and the size of the convolution filter being equal to the output feature map size of the layer H-1 convolutional neural network module;
Step 3: depth model training:
Step 301: initialize the iteration count d = 0; initialize the learning rate α to a preset value;
Step 302: randomly select N images from the training sample set as a sub training sample set and input them to layer 1 of the depth model; obtain the feature vector matrix X of each training sample from the layer-H output of the depth model;
compute the error value δ of the convolution filter of each layer: the error value of the layer-H convolution filter is F - X, the expected output F being a preset value; the error values of the convolution filters of layers 1 to H-1 are obtained as the product of the error value of the next-higher layer and the convolution filter parameters $w_{nm}$, with subscripts n = 1, 2, ..., ω and m = 1, 2, ..., ω, where ω is the size of the convolution filter;
update the parameters according to the error values of the convolution filters of all levels: $w_{nm} = w_{nm} - \Delta w_{nm}$, where the correction $\Delta w_{nm}$ is computed from the learning rate α and the layer's error value δ;
update the iteration count: d = d + 1;
Step 303: determine whether the iteration count has reached the termination threshold; if so, execute step 304; otherwise, determine whether the iteration count has reached the adjustment threshold: if so, reduce the learning rate α and execute step 302 with the updated convolution filter parameters $w_{nm}$; if not, execute step 302 directly with the updated convolution filter parameters $w_{nm}$;
Step 304: obtain the trained depth model from the current parameters $w_{nm}$ of the convolution filters of all levels;
Step 4: input the SAR image to be recognized and crop it around the target to be recognized to obtain an image to be recognized of the same size as the training samples;
input the image to be recognized into the trained depth model, which outputs the feature vector matrix of the image to be recognized;
Step 5: compute the probabilities that the feature vector matrix of the image to be recognized belongs to each of the T target classes, and take the class with the highest probability as the target recognition result.
2. The method of claim 1, characterized in that step 302 further comprises iteratively updating the Softmax regression model parameter vectors $\theta_j$, where $\theta_j$ is randomly initialized and j = 1, 2, ..., T:
compute the cost function J(θ) according to the formula $J(\theta) = -\frac{1}{N}\left[\sum_{i=1}^{N}\sum_{j=1}^{T} I\{y^{(i)}=j\}\log\frac{e^{\theta_j^{\mathrm{T}}X^{(i)}}}{\sum_{l=1}^{T}e^{\theta_l^{\mathrm{T}}X^{(i)}}}\right]$, where I{·} is the indicator function, with I{·} = 1 if the argument is true and I{·} = 0 if it is false; the expression $y^{(i)} = j$ means that the recognition result $y^{(i)}$ of the i-th training sample in the sub training sample set is class j; e is the natural base; $\theta_j^{\mathrm{T}}$ denotes the transpose of $\theta_j$; and $X^{(i)}$ is the feature vector of the i-th training sample in the sub training sample set;
minimize the cost function J(θ) by the gradient descent algorithm to obtain $\nabla_{\theta_j} J(\theta)$, and obtain the updated regression model parameters according to $\theta_j = \theta_j - \alpha\,\nabla_{\theta_j} J(\theta)$;
in step 5, compute the probabilities that the feature vector of the image to be recognized belongs to each of the T target classes based on the current regression model parameter vectors $\theta_j$.
3. The method of claim 1 or 2, characterized in that each layer of the depth model further comprises a correction unit for correcting the convolution output of the convolution filter according to a correction function, the correction function being $f(x) = (1+e^{-x})^{-1}$, $f(x) = \tanh(x)$, $f(x) = |\tanh(x)|$ or $f(x) = \max(0, x)$, where x denotes an individual element of the convolution output.
CN201610756338.9A 2016-08-29 2016-08-29 Synthetic aperture radar image target recognition method based on a depth model Expired - Fee Related CN106407986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610756338.9A CN106407986B (en) Synthetic aperture radar image target recognition method based on a depth model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610756338.9A CN106407986B (en) Synthetic aperture radar image target recognition method based on a depth model

Publications (2)

Publication Number Publication Date
CN106407986A CN106407986A (en) 2017-02-15
CN106407986B 2019-07-19

Family

ID=58002606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610756338.9A Expired - Fee Related CN106407986B (en) 2016-08-29 2016-08-29 Synthetic aperture radar image target recognition method based on a depth model

Country Status (1)

Country Link
CN (1) CN106407986B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184309A (en) * 2015-08-12 2015-12-23 西安电子科技大学 Polarization SAR image classification based on CNN and SVM
CN105139028A (en) * 2015-08-13 2015-12-09 西安电子科技大学 SAR image classification method based on hierarchical sparse filtering convolutional neural network
CN105139395A (en) * 2015-08-19 2015-12-09 西安电子科技大学 SAR image segmentation method based on wavelet pooling convolutional neural networks
CN105718957A (en) * 2016-01-26 2016-06-29 西安电子科技大学 Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network
CN105868793A (en) * 2016-04-18 2016-08-17 西安电子科技大学 Polarization SAR image classification method based on multi-scale depth filter

Also Published As

Publication number Publication date
CN106407986A (en) 2017-02-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190719