CN109508655A - SAR target recognition method for incomplete training sets based on a Siamese network - Google Patents

SAR target recognition method for incomplete training sets based on a Siamese network

Info

Publication number
CN109508655A
CN109508655A (application number CN201811263248.1A)
Authority
CN
China
Prior art keywords
sample
input
network
classification
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811263248.1A
Other languages
Chinese (zh)
Other versions
CN109508655B (en)
Inventor
张帆
唐嘉昕
赵鹏
尹嫱
胡伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Chemical Technology
Priority to CN201811263248.1A
Publication of CN109508655A
Application granted
Publication of CN109508655B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/32 Normalisation of the pattern dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a SAR target recognition method for incomplete training sets based on a Siamese (twin) network. Borrowing from the k-NN algorithm of traditional machine learning, n samples are extracted from each class of the training set to serve as representatives of that class, forming a support set; if a classification task contains m classes, the support set holds m*n samples in total. During classification, the sample to be classified is input into the network together with the support samples: each support sample forms an input pair with the sample to be classified, the pair is fed into the two inputs of the Siamese network to extract features, the two extracted feature vectors are subtracted, and the difference is evaluated to obtain the similarity between the sample to be classified and that class of support samples. The sample is finally assigned to the class of the support sample with the highest similarity.

Description

SAR target recognition method for incomplete training sets based on a Siamese network
Technical field
The present invention relates to a SAR target recognition method for incomplete training sets based on a Siamese network, and belongs to the field of computer vision.
Background art
Synthetic aperture radar (SAR) is an all-weather, day-and-night means of acquiring ground data with high resolution and strong penetration, and it has high civilian and commercial value. Interpreting SAR images yields much useful information, so SAR image interpretation is an important part of practical SAR applications. Traditional machine learning and deep learning are the two main approaches to SAR image interpretation. SAR imaging is more stable than other sensors and is not easily affected by weather, illumination and other conditions. A further advantage of SAR is that it can generate large volumes of terrestrial data, but processing such volumes manually is very difficult.
Computer-vision image processing based on traditional machine learning and on deep learning can handle this data volume well. Traditional machine learning methods rest on a rigorous mathematical foundation, demand fewer computing resources than neural networks, and their classification and recognition accuracy can meet requirements to a certain extent. With the growth of computing power, neural-network-based methods have flourished, and they often far exceed traditional machine learning in classification and recognition accuracy. However, neural-network classification and recognition depend on large amounts of training data, which under practical, realistic conditions often cannot be obtained, since collecting and labeling them costs enormous human effort. Too few training samples ultimately cause the network to overfit: it attains very high classification or recognition accuracy on the training samples, but performs poorly in testing and actual use.
In addition, neural networks suffer from poor model interpretability, which makes it hard to find guiding directions for optimization. The Siamese network effectively combines the advantages of traditional machine learning and deep learning: a neural network replaces the hand-designed feature extractor, and classification is then performed with a traditional machine-learning decision strategy. Such a combination both exploits the encoding capacity that neural networks draw from computing resources and partly sidesteps the hard-to-explain modeling results of earlier networks, so that subsequent improvement and optimization have clearer guidance. Under real conditions, labeling samples not only requires much manpower; samples of some classes may also be missing altogether. The pairing-based training method of the Siamese network, tailored to small samples, implicitly multiplies the number of training samples, which improves classification accuracy and mitigates overfitting. At the same time, a classification policy that distinguishes classes by the "distance" between sample features makes the optimization objective clearer.
Summary of the invention
The main object of the present invention is to provide a SAR target recognition method for incomplete training sets based on a Siamese network.
After a thorough survey of related work on small-sample target recognition, the present invention proposes small-sample recognition of real SAR data under realistic conditions. It differs from conventional deep-learning classification, in which the sample to be classified is fed in directly and its predicted class is output. The present invention borrows the k-Nearest Neighbor (k-NN) algorithm from traditional machine learning: n samples are extracted from each class of the training set as representatives of that class, forming a support set; if a classification task contains m classes, the support set holds m*n samples in total. During classification, the sample to be classified is input into the network together with the support samples: each support sample forms an input pair with the sample to be classified, the pair is fed into the two inputs of the Siamese network to extract features, the two extracted feature vectors are subtracted, and the difference is evaluated to obtain the similarity between the sample to be classified and that class of support samples. Finally, the sample is assigned to the class of the support sample with the highest similarity.
The technical solution of the present invention mainly comprises the following contents:
1. SAR target feature extraction with convolutional neural networks. Multiple layers of convolution kernels of different sizes extract the SAR target features: high-level features are obtained by the weighted convolutions, pooling layers reduce dimensionality and strengthen the robustness of the network, and the ReLU activation function introduces the non-linearity that lets the network solve tasks that are not linearly separable.
2. The k-nearest-neighbor algorithm. Classification follows the principle of k-NN, which is simple and effective on small-sample problems and makes full use of the training set; a suitable k also strengthens the robustness of the model.
3. Data augmentation. To avoid overfitting under small-sample conditions, single samples are not input directly during training; instead, each sample forms input pairs with other samples of the training set. This combination greatly enlarges the effective training data and avoids overfitting, as shown in Figure 2.
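The pairing scheme of point 3 can be sketched in a few lines of Python. This is a minimal illustration, not code from the patent; function and variable names are hypothetical. Half of the generated pairs are same-class (label 1) and half cross-class (label 0), mirroring the half same / half different batches used during training:

```python
import random

def make_training_pairs(samples_by_class, n_pairs):
    """Build (a, b, label) pairs: label 1 for same-class, 0 for cross-class.

    samples_by_class: dict mapping class id -> list of samples.
    Half of n_pairs are same-class pairs, half are cross-class pairs.
    """
    classes = list(samples_by_class)
    pairs = []
    for _ in range(n_pairs // 2):                      # same-class half
        c = random.choice(classes)
        a, b = random.sample(samples_by_class[c], 2)   # two distinct samples
        pairs.append((a, b, 1))
    for _ in range(n_pairs - n_pairs // 2):            # cross-class half
        c1, c2 = random.sample(classes, 2)             # two distinct classes
        pairs.append((random.choice(samples_by_class[c1]),
                      random.choice(samples_by_class[c2]), 0))
    return pairs
```

With m classes of s samples each, the number of distinct pairs grows roughly quadratically in s, which is the "covert increase" in training data described above.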
4. The back-propagation (BP) algorithm. In the present invention, the multi-layer network updates the weights and biases of its convolution kernels and fully connected layers with the BP algorithm, which is based on gradient descent and consists of two parts, forward excitation and weight update. A combined image pair is forward-propagated through the network to obtain a prediction, which is compared with the label to obtain the error. The output error is then back-propagated to obtain the error of every hidden-layer node, and the chain rule together with gradient descent updates the convolution-kernel and fully connected weights W and biases b.
The SAR target recognition method for incomplete training sets based on a Siamese network is implemented as follows:
Step 1, dataset standardization: unify image sizes, split the SAR target images into a training set and a test set, and generate the support set.
The dataset must be standardized before the Siamese network is trained.
1) The dataset is first cropped to a uniform size; the SAR target images are cut to a unified 128*128 size. Because the imaging principle of SAR differs from that of natural images, pooling cannot be used to unify the image size directly.
2) The data are then divided into a training set and a test set. Because the method targets incomplete training sets, each class of the training set holds at most 50 samples.
3) A small number of samples is then extracted from each class of the training set at equal intervals to form the support set. Since the training and test sets both essentially contain SAR images of the same targets at different aspect angles, equal-interval sampling ensures the angular diversity of the support samples. This yields a size-unified training set, test set and support set.
4) Finally the training, test and support sets are serialized into .pickle files for convenient reading.
The whole dataset standardization process is shown in Figure 1.
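The equal-interval support-set extraction of step 1.3 can be sketched as follows. This is a hypothetical helper under the assumption that each class's samples are ordered by aspect angle; the patent gives no code:

```python
def build_support_set(train_by_class, n_support):
    """Pick n_support samples per class at equal intervals.

    With samples ordered by aspect angle, equal-interval sampling
    preserves the angular diversity of each class in the support set.
    """
    support = {}
    for cls, samples in train_by_class.items():
        step = max(1, len(samples) // n_support)   # interval between picks
        support[cls] = samples[::step][:n_support]
    return support
```

For a class of 50 training samples and n_support = 5, this selects every 10th sample, spanning the full range of angles.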
Step 2, building and initializing the Siamese network.
The Siamese network consists of two parts, a feature extractor and a discriminator. The feature extractor is a two-branch convolutional network with shared weights; its two identical inputs each take a single-channel 128*128 grayscale image. The first convolutional layer has 64 6*6 kernels with ReLU activation, followed by 2*2 max pooling; the second convolutional layer is identical to the first, including activation and pooling. The third convolutional layer has 128 3*3 kernels, again with ReLU activation and 2*2 max pooling, and the fourth convolutional layer is identical to the third, including activation and pooling. The feature map extracted by the convolutional layers is then flattened into a 1-D tensor, which a fully connected layer further abstracts into a 1-D feature tensor of length 4096; this is the tensor the feature extractor finally produces. After the two branches have extracted the features of an input pair, the two feature tensors are passed to the discriminator, which takes the element-wise absolute difference of the two feature vectors and feeds it into a fully connected layer activated by a Sigmoid function, outputting the probability that the two inputs belong to the same class. The structure of the Siamese network is shown in the table below:
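The spatial dimensions implied by this branch description can be checked with a short sketch. It assumes valid (unpadded) convolutions with stride 1 and 2*2 pooling with stride 2, details the text does not state explicitly; under these assumptions the flattened input to the 4096-unit fully connected layer is 128*5*5 = 3200 values:

```python
def conv_out(size, kernel, stride=1):
    # valid convolution: no padding
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

def siamese_branch(size=128):
    """Trace the spatial size through the four conv+pool stages
    (64@6x6, 64@6x6, 128@3x3, 128@3x3, each followed by 2x2 max pooling)
    and return the flattened length fed to the 4096-unit FC layer."""
    for k in (6, 6, 3, 3):
        size = pool_out(conv_out(size, k))
    return 128 * size * size     # 128 channels after the last stage
```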
The loss function is the cross entropy, and the optimizer is Adam with a learning rate of 6e-5. Cross entropy is a common concept in deep learning, generally used to measure the gap between a prediction and its label. Used as the loss function, it measures how similar prediction and label are; the optimizer then keeps minimizing it to update the weights W and biases b. The cross-entropy loss is given by formula (1), where y is the label, ŷ the prediction, n the total number of samples in a training batch, and i the sample index from 1 to n:
loss = -(1/n) Σ_{i=1}^{n} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]   (1)
Compared with the mean squared error (MSE), cross entropy is a convex function and is less likely to fall into local extrema during optimization. With the Sigmoid activation, the slope near the upper and lower bounds decays severely, but cross entropy is logarithmic and, as a loss function, still has a large gradient near the bounds. This makes the model update faster when the error is large and avoids excessively long training times.
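Formula (1) can be written directly in plain Python. This is a minimal sketch; the clipping epsilon that guards log(0) is an implementation detail assumed here, not from the patent:

```python
import math

def cross_entropy(y, y_hat, eps=1e-12):
    """Binary cross-entropy of formula (1): y are the labels, y_hat the
    predictions for one batch of n samples; eps avoids log(0)."""
    total = 0.0
    for yi, pi in zip(y, y_hat):
        pi = min(max(pi, eps), 1.0 - eps)       # clip into (0, 1)
        total += yi * math.log(pi) + (1.0 - yi) * math.log(1.0 - pi)
    return -total / len(y)
```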
The Adam optimizer is an optimization method based on stochastic gradient descent (SGD) that combines the advantages of the AdaGrad and RMSProp algorithms: it considers both a first-moment and a second-moment estimate of the gradient to compute the update step. Adam adjusts the learning rate automatically, and its performance is quite good even under default parameters. The pseudocode of the Adam optimizer is shown in the table below.
Here α is the learning rate, or step size, the ratio by which weights are updated; β1 and β2 are the decay rates of the first-moment and second-moment estimates; ε prevents division by zero in the computation; f(θ) is the stochastic objective function; and t is the time step.
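One Adam update step, as characterized above, can be sketched for a scalar parameter. α matches the 6e-5 learning rate given earlier; β1, β2 and ε use the common defaults, which the text does not specify:

```python
import math

def adam_step(theta, grad, m, v, t,
              alpha=6e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of scalar parameter theta at time step t (t >= 1)."""
    m = beta1 * m + (1.0 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias corrections
    v_hat = v / (1.0 - beta2 ** t)
    theta -= alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias corrections make the effective step roughly α times the sign of the gradient, which is why Adam behaves well even before the moment estimates have warmed up.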
Merely building the structure of the Siamese network does not allow data to be passed in; the built network must also be initialized. The weights W and biases b are initialized with Gaussian random functions: W with mean 0 and standard deviation 1e-2, and b with mean 0.5 and standard deviation 1e-2.
Step 3, training the Siamese network.
After the Siamese network has been built and initialized, training of the model begins. The serialized SAR-image training, test and support sets are first loaded into GPU memory. Before each iteration, 32 SAR image pairs are randomly selected from the training set as one batch; the first 16 pairs are SAR targets of the same class, and the last 16 pairs are combinations of different classes. Once a batch has been obtained, it is fed into the initialized network and forward propagation begins.
During forward propagation, each image pair is input into the Siamese network and the convolutional layers extract features, turning the input SAR target images into SAR target feature maps. During convolution each input neuron is first multiplied by the weights W, the bias b is added, and the result is then passed through the activation function, as shown in formula (2):
out_{m,n}^k = f( W_{m,n}^k · x_{m,n}^k + b_{m,n}^k )   (2)
Here out_{m,n}^k denotes the output at row m, column n of layer k of the convolutional network; W_{m,n}^k is the corresponding weight matrix; x_{m,n}^k is the corresponding input portion; b_{m,n}^k is the corresponding bias; and f(x) is the activation function, typically the ReLU or Sigmoid function.
The activation function usually used after a convolutional layer is ReLU, shown in formula (3). Compared with other activation functions, ReLU sets all negative values to 0 and leaves positive values unchanged. This operation, called one-sided suppression, simplifies computation and gives the network sparse activity. ReLU also has a wide excitation boundary, which accelerates training, and it does not suffer from vanishing gradients. However, if the learning rate is set too high, neurons may die irreversibly during training, so an appropriate learning rate must be chosen to avoid this problem.
f(x) = max(0, x)   (3)
The two fully connected layers at the end use the Sigmoid function as activation; the Sigmoid function is given by formula (4):
σ(x) = 1 / (1 + e^(-x))   (4)
With Sigmoid activation, the error decays sharply during back-propagation as the number of layers grows, eventually causing vanishing gradients and stalled weight updates, so the use of Sigmoid is kept to a minimum. But the last two fully connected layers represent, respectively, the absolute feature difference and the similarity probability; here ReLU, which merely keeps the positive part after weighting, is less suitable than Sigmoid, which maps values into the range (0, 1). And although the result of a Sigmoid cannot strictly be equated with a probability, the single value output by the last layer can still be understood and compared fairly intuitively as one.
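Formulas (3) and (4) in plain Python, for reference:

```python
import math

def relu(x):
    # formula (3): one-sided suppression, negatives become 0
    return x if x > 0 else 0.0

def sigmoid(x):
    # formula (4): squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))
```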
Forward propagation finally outputs a prediction; the loss function defined on this prediction and the ground truth gives the error, which is then back-propagated: partial derivatives with respect to the weights are obtained via the chain rule, and each weight is updated. The chain-rule and weight-update formulas are shown in (5) and (6):
∂E_total/∂w_ij = (∂E_total/∂out) · (∂out/∂net) · (∂net/∂w_ij)   (5)
w_ij' = w_ij - η · ∂E_total/∂w_ij   (6)
During back-propagation, the chain rule yields the partial derivative of each weight with respect to the final overall error; this derivative is needed because it determines the size of the subsequent update.
After the partial derivative of a weight w_ij with respect to the overall error E_total has been found with the chain rule, it is multiplied by the learning rate η; the result is the amount by which the weight must change. As shown in formula (6), subtracting this update amount from w_ij yields the new weight value.
An iteration count and an accuracy threshold for saving the model are set for the training task; the Siamese network then trains iteratively and updates its weights. Every iteration computes the loss function, and every 50 iterations the current iteration number and loss are printed. Every 200 iterations the model is validated once on the test set; if the accuracy exceeds the threshold, the model is saved and the threshold is updated, otherwise iteration continues. Training runs until the stopping condition is met, and the best model is saved.
Step 4, SAR target recognition.
After training completes, the best model is obtained and loaded for testing. SAR target recognition tests use the support set. To identify a sample of the test set, the sample is first paired with every support sample to form image pairs. These image pairs are fed into the trained Siamese network, which computes the similarity between the sample and each support sample. The k-nearest-neighbor algorithm then picks the 5 support samples with the highest similarity probability, and their classes vote to decide the class of the test sample; the class with the most votes wins. If votes are tied, the class of the single support sample with the highest similarity probability is chosen as the class of the sample to be identified.
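The top-5 voting with the similarity tie-break described above can be sketched as follows (function and variable names are illustrative, not from the patent):

```python
from collections import Counter

def knn_vote(similarities, labels, k=5):
    """Rank support samples by similarity, let the top-k vote, and on a
    tied vote fall back to the class of the single most similar sample."""
    order = sorted(range(len(similarities)),
                   key=lambda i: similarities[i], reverse=True)
    votes = Counter(labels[i] for i in order[:k]).most_common()
    if len(votes) > 1 and votes[0][1] == votes[1][1]:
        return labels[order[0]]          # tie: most similar sample decides
    return votes[0][0]
```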
After all samples of the test set have gone through the identification steps above, the recognition accuracy is counted and displayed on the command line. The flow is shown in Figure 4.
1) Load the trained model.
2) Pair the sample under test with the support-set samples to form input pairs.
3) Feed the input pairs into the network to obtain similarity results.
4) The class of the support sample most similar to the sample under test becomes its class, completing the recognition.
Brief description of the drawings
Fig. 1: flowchart of dataset standardization.
Fig. 2: schematic diagram of data augmentation.
Fig. 3: flowchart of Siamese network training.
Fig. 4: flowchart of SAR target recognition.
Specific embodiment
The basic flow of the incomplete-training-set SAR target recognition of the invention is shown in Figure 4 and specifically comprises the following steps:
1) The SAR target data are sorted into two folders, training set and test set, with one subfolder per class under each. The data are then preprocessed: the SAR target images are first cropped to a uniform size. The dataset is then distributed; since this is recognition under small-sample conditions, each class of the training set receives at most 50 samples, and support samples of every class are extracted from the training set to form the support set. The remaining samples are placed under the corresponding class folders of the test set. Finally the training, test and support data are uniformly serialized into .pickle files for convenient reading.
2) Building and initializing the Siamese network.
The structure of the Siamese network is built first, with the input image size fixed at 128*128*1. The Siamese network is a two-branch convolutional network with left and right branches that share weights, so the two branches are structurally identical. The first layer is a convolutional layer with 64 kernels of size 6*6 and ReLU activation; its weights are initialized, it has no bias term, and L2 regularization is applied to the weights. The second layer is a 2*2 max-pooling layer with stride 2*2. The third layer is a convolutional layer, likewise with 64 6*6 kernels and ReLU activation, with initialized weights and bias term and L2 regularization on the weights. The fourth layer is a max-pooling layer with stride 2*2 and pooling size 2*2, like the second. The fifth layer is a convolutional layer with 128 kernels of size 3*3 and ReLU activation, with initialized weights and bias term and L2 regularization on the weights. The sixth layer is another max-pooling layer with the same parameters and structure as the earlier pooling layers. The seventh layer is identical in structure and parameters to the fifth convolutional layer. The eighth layer flattens the extracted feature map into a 1-D tensor. The ninth layer is a fully connected layer with Sigmoid activation, initialized weights and bias term, and L2 regularization on the weights, finally outputting a feature vector of size 4096.
The structure above is that of the left and right convolutional branches; they function as feature extractors for the SAR target images.
The L1 distance of the feature vectors extracted by the two convolutional branches is computed, and a further fully connected layer with Sigmoid activation and output size 1 is added.
The L1-distance function plus this fully connected layer is equivalent to a discriminator: it judges, from the features of the two SAR target images extracted by the two branches, the degree of similarity between them.
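The discriminator just described (L1 distance plus a Sigmoid-activated fully connected layer) reduces to a few lines. A minimal sketch with illustrative names, using plain Python lists in place of the 4096-dimensional feature tensors:

```python
import math

def discriminate(f1, f2, w, b):
    """Probability that two feature vectors come from the same class:
    Sigmoid of a weighted sum of their element-wise absolute differences."""
    z = b + sum(wi * abs(a, ) if False else wi * abs(a - c)
                for wi, a, c in zip(w, f1, f2))
    return 1.0 / (1.0 + math.exp(-z))
```

Identical features give zero L1 distance, so with zero bias the output is exactly 0.5; very different features push the output toward 0 or 1 depending on the learned weights.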
This is the overall structure of the Siamese network. Weights and bias terms are initialized with a Gaussian random function of mean 0.5 and standard deviation 1e-2.
3) Training the Siamese network.
The batch size parameter batch_size is set to 32, the maximum number of iterations n_iter is set, and the minimum accuracy best for saving the model is set.
Before each iteration, 32 input pairs are randomly extracted from the training set as one batch; the first 16 pairs are SAR target images of different classes, and the last 16 pairs are of the same class.
During each iteration, the batch data are input into the Siamese network to obtain predictions, the loss is computed, and the weights w and biases b are updated.
The accuracy of the current model is verified on the test set; if it exceeds best, the model is saved and best is updated before iteration continues, otherwise iteration simply continues.
Iteration stops once the number of iterations reaches n_iter, and training is complete.
The training process pseudocode of twin network is as follows:
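Since the pseudocode listing itself is not reproduced in this text, a Python sketch of the loop described in 3) follows. The model.train_step and model.snapshot interfaces, and the 50/200-iteration logging and validation intervals taken from step 3 of the description, are assumptions for illustration:

```python
import random

def train(model, train_pairs, test_eval, n_iter, batch_size=32, best=0.9):
    """Siamese-network training loop: sample a batch of input pairs, run one
    forward/backward step, log the loss every 50 iterations, validate every
    200 iterations, and keep the model whenever accuracy beats `best`."""
    saved = None
    for it in range(1, n_iter + 1):
        batch = random.sample(train_pairs, batch_size)   # 32 pairs per batch
        loss = model.train_step(batch)                   # forward + BP + Adam
        if it % 50 == 0:
            print(f"iter {it}: loss {loss:.4f}")
        if it % 200 == 0:
            acc = test_eval(model)
            if acc > best:                               # save, raise the bar
                best, saved = acc, model.snapshot()
    return saved, best
```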
4) SAR target recognition.
The saved model is loaded and the SAR target recognition performance of the model is tested on the test set: each test sample is paired with the support-set samples to form input pairs, the similarity of the sample to each class of support samples is obtained, and the k-nearest-neighbor algorithm votes for the class most similar to the sample to be identified. If votes are tied, the class with the highest similarity is chosen as the class of the sample.
The recognition results are compared with the ground truth, the accuracy is counted, and it is displayed on the command line.

Claims (2)

1. A SAR target identification method based on a twin network for incomplete training sets, characterized in that the method is implemented as follows:
Step 1, standardization of the data set: unify the image size, divide the SAR target images into a training set and a test set, and generate a support set;
Before training the twin network, the data set needs to be standardized;
1) The data set is first cropped to a uniform size: each SAR target image is cropped to a uniform size of 128*128;
2) The data are then divided into a training set and a test set; the data set is first split into two parts, a test set and a training set; for SAR target identification with an incomplete training set, the number of samples in the training set is at most 50;
3) A small number of samples are then extracted from the training set at equal intervals for each class to form the support set; since the training set and the test set both essentially contain SAR images of the same targets at different aspect angles, equal-interval sampling ensures the diversity of aspect angles in the support set; the uniformly sized training set, test set and support set are thus generated;
4) Finally, the training set, test set and support set are serialized into .pickle files for convenient reading;
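Step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper names (`center_crop`, `make_support_set`, `save_pickle`) and the assumption that training samples are stored in aspect-angle order are the author's own.

```python
import pickle
import numpy as np

def center_crop(img, size=128):
    """Crop a 2-D SAR image to a uniform size*size patch around its center."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def make_support_set(train_imgs, train_labels, per_class=5):
    """Draw `per_class` samples per class from the training set at equal
    intervals, so the support set covers the aspect-angle range evenly."""
    support_imgs, support_labels = [], []
    for c in np.unique(train_labels):
        idx = np.where(train_labels == c)[0]     # samples of class c, in angle order
        step = max(len(idx) // per_class, 1)
        picked = idx[::step][:per_class]         # equally spaced indices
        support_imgs.extend(train_imgs[i] for i in picked)
        support_labels.extend([c] * len(picked))
    return np.array(support_imgs), np.array(support_labels)

def save_pickle(path, arrays):
    """Serialize the prepared arrays into a .pickle file for convenient reading."""
    with open(path, "wb") as f:
        pickle.dump(arrays, f)
```

The equal-interval slice `idx[::step]` is what spreads the support samples across the angular range, matching step 1.3.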
Step 2, building and initializing the twin network;
The twin network consists of two parts: a feature extractor and a discriminator. The feature extractor is a two-branch convolutional network with shared weights; the two branches have identical structures and each takes a single-channel grayscale image of size 128*128 as input. The first convolutional layer has 64 convolution kernels of size 6*6, with a ReLU activation function followed by 2*2 max pooling. The second convolutional layer is identical in structure to the first, with the same activation function and pooling layer. The third convolutional layer has 128 convolution kernels of size 3*3, again with a ReLU activation function and 2*2 max pooling. The fourth convolutional layer has the same structure as the third, including the activation function and pooling layer. The features extracted by the convolutional layers are then flattened into a one-dimensional tensor, which a fully connected layer further abstracts into a one-dimensional feature tensor of length 4096; this tensor of length 4096 is the final feature extracted by the feature extractor. After the two branches have extracted the features of an input pair, the two feature tensors are fed into the discriminator, which first computes the element-wise absolute difference of the two feature vectors and then feeds this difference into a fully connected layer activated by a Sigmoid function, whose output is the probability that the two inputs belong to the same class. The structure of the twin network is shown in the table below:
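The layer sizes implied by the description above can be checked with a short shape trace. This is a sketch under assumptions not stated in the text (the structure table is not reproduced here): "valid" convolutions with stride 1 and non-overlapping 2*2 pooling with floor division.

```python
def conv_out(size, kernel):
    """Output size of a 'valid' convolution with stride 1 (assumption)."""
    return size - kernel + 1

def pool_out(size, window=2):
    """Output size of non-overlapping max pooling (floor division)."""
    return size // window

# One branch of the shared-weight feature extractor, as described:
# 64@6*6 conv -> 2*2 pool -> 64@6*6 conv -> 2*2 pool
# -> 128@3*3 conv -> 2*2 pool -> 128@3*3 conv -> 2*2 pool
s = 128
for kernel in (6, 6, 3, 3):
    s = pool_out(conv_out(s, kernel))
flat = s * s * 128      # length of the flattened 1-D tensor fed to the FC layer
print(s, flat)          # the FC layer then maps this tensor to length 4096
```

Under these assumptions the spatial size shrinks 128 → 61 → 28 → 13 → 5, and the flattened tensor of length 3200 is abstracted to the 4096-length feature by the fully connected layer.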
The loss function is the cross entropy, and the optimizer is the Adam optimizer with a learning rate of 6e-5. Cross entropy is a common concept in deep learning, generally used to measure the gap between predicted values and labels. The cross entropy serves as the loss function measuring the similarity between the predictions and the labels, which the optimizer then continuously optimizes to update the weights W and biases b. The expression of the cross-entropy loss is given in formula (1), where y is the label, ŷ is the predicted value, n is the total number of samples in a training batch, and i is the sample index from 1 to n;
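Formula (1) is not reproduced in this text; from the surrounding definitions (label y, prediction ŷ, batch size n, index i), the binary cross entropy described is conventionally written as:

```latex
loss = -\frac{1}{n}\sum_{i=1}^{n}\left[\, y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right) \right] \qquad (1)
```

Here y_i = 1 for a same-class input pair and y_i = 0 otherwise, and ŷ_i is the Sigmoid output of the discriminator.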
The Adam optimizer adjusts the learning rate automatically, and its performance under default parameters is quite outstanding; the pseudocode of the Adam optimizer is shown in the table below;
where α is the learning rate, or step size, the ratio by which the weights are updated; β1, β2 are the decay rates for the first-moment and second-moment estimates; ε prevents division by zero in the computation; f(θ) is the stochastic objective function; and t is the time step;
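The Adam pseudocode table is not reproduced in this text. The following is a minimal sketch of the standard Adam update matching the parameter names above (α, β1, β2, ε, t); the default hyperparameter values are the usual ones, assumed rather than taken from the patent.

```python
import math

def adam_step(theta, grad, m, v, t, alpha=6e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter: exponential moving averages of the
    gradient (m) and squared gradient (v), with bias correction by time step t."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage: minimize f(theta) = theta**2, whose gradient is 2*theta
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 20001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, alpha=1e-2)
```

Because the step is normalized by √v̂, the effective step size stays near α regardless of the raw gradient magnitude, which is what "adjusts the learning rate automatically" refers to.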
Merely building the structure of the twin network is not enough to feed data through it; the constructed network must also be initialized. The weights W and biases b are initialized with Gaussian-distributed random functions, where W is initialized with mean 0 and standard deviation 1e-2, and b with mean 0.5 and standard deviation 1e-2;
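The initialization step can be sketched directly from the stated statistics; the helper names and example layer shape below are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(shape):
    """Weights W: Gaussian with mean 0 and standard deviation 1e-2."""
    return rng.normal(loc=0.0, scale=1e-2, size=shape)

def init_biases(shape):
    """Biases b: Gaussian with mean 0.5 and standard deviation 1e-2."""
    return rng.normal(loc=0.5, scale=1e-2, size=shape)

# e.g. the first convolutional layer: 64 kernels of 6*6 on a 1-channel input
W = init_weights((6, 6, 1, 64))
b = init_biases((64,))
```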
Step 3, training of the twin network;
After the building and initialization of the twin network are complete, training of the twin-network model begins. First, the serialized training set, test set and support set of SAR images are loaded into video memory. Then, before each iteration, 32 pairs of SAR images are randomly selected from the training set as a batch; the first 16 of these 32 input pairs are SAR targets of the same class, and the last 16 are combinations of different classes. Once the input pairs of a batch are obtained, the batch is fed into the initialized network and forward propagation begins;
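The batch construction described above (16 same-class pairs followed by 16 different-class pairs) can be sketched as follows; the function name and the use of a NumPy random generator are the author's assumptions.

```python
import numpy as np

def make_batch(images, labels, rng, batch_pairs=32):
    """Sample a training batch of `batch_pairs` image pairs: the first half are
    same-class pairs (target 1.0), the second half different-class pairs (0.0)."""
    half = batch_pairs // 2
    left, right, targets = [], [], []
    classes = np.unique(labels)
    for i in range(batch_pairs):
        c1 = rng.choice(classes)
        if i < half:
            c2 = c1                                   # same-class pair
        else:
            c2 = rng.choice(classes[classes != c1])   # different-class pair
        left.append(images[rng.choice(np.where(labels == c1)[0])])
        right.append(images[rng.choice(np.where(labels == c2)[0])])
        targets.append(1.0 if c1 == c2 else 0.0)
    return np.array(left), np.array(right), np.array(targets)
```

The fixed half-and-half layout keeps every batch balanced between positive and negative pairs, which matches the cross-entropy target of the discriminator.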
During forward propagation, each image pair is fed into the twin network and features are extracted by the convolutional layers, turning the input SAR target images into SAR target feature maps. During convolution, each input neuron is first multiplied by the weight W, the bias b is added, and the result is then activated with the activation function, as shown in formula (2);
where y^k_{m,n} denotes the output at row m, column n of the k-th convolutional layer; W^k_{m,n} is the weight matrix corresponding to the output at row m, column n of the k-th convolutional layer; x^k_{m,n} is the input portion corresponding to that output; b^k is the bias matrix corresponding to the output at row m, column n of the k-th convolutional layer; and f(x) is the activation function, which is either the ReLU function or the Sigmoid function;
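Formula (2) is not reproduced in this text; from the surrounding definitions (multiply by the weight, add the bias, then activate), the convolution forward pass can be written as, with the symbol names reconstructed from the description:

```latex
y^{k}_{m,n} = f\!\left( W^{k}_{m,n} \cdot x^{k}_{m,n} + b^{k} \right) \qquad (2)
```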
The activation function used after the convolutional layers is the ReLU activation function, shown in formula (3); the ReLU activation function sets all negative values to 0 and leaves positive values unchanged;
f(x) = max(0, x)    (3)
In the two subsequent fully connected layers, the Sigmoid function is used as the activation function; the formula of the Sigmoid function is given in (4);
Using the Sigmoid function as the activation function throughout would cause the error to decay sharply during backpropagation as the number of network layers increases, eventually leading to vanishing gradients and stalled weight updates. The last two fully connected layers, however, represent the feature absolute difference and the similarity probability respectively; here, a ReLU activation, which merely retains the positive part after weighting, is less suitable than the Sigmoid function, which remaps values into the new range (0, 1). Moreover, the last layer outputs a probability; although the result of the Sigmoid function cannot strictly be equated with a probability, it can be understood and compared more intuitively;
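Formula (4) is not reproduced in this text; the Sigmoid function referred to has the standard form:

```latex
f(x) = \frac{1}{1 + e^{-x}} \qquad (4)
```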
Forward propagation finally outputs a predicted value, and the error is computed from this predicted value and the ground truth via the defined loss function. The error is then backpropagated: partial derivatives with respect to the weights are obtained using the chain rule, and each weight is then updated. The formulas for the chain rule and the weight update are shown in (5) and (6);
In backpropagation, the partial derivative of the final overall error with respect to a given weight is obtained by the chain rule; this partial derivative is needed because it determines the size of the subsequent weight update;
After the partial derivative of the overall error E_total with respect to a weight w_ij that needs updating has been found using the chain rule, this partial derivative is multiplied by the learning rate η; the result is the amount by which that weight must change. As shown in formula (6), subtracting the update amount from w_ij yields the new weight value;
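Formulas (5) and (6) are not reproduced in this text; from the description, they take the standard forms below. The intermediate variables out and net (a neuron's activated output and its pre-activation sum) are assumed notation for the chain-rule factorization.

```latex
\frac{\partial E_{total}}{\partial w_{ij}}
  = \frac{\partial E_{total}}{\partial out}\cdot
    \frac{\partial out}{\partial net}\cdot
    \frac{\partial net}{\partial w_{ij}} \qquad (5)

w_{ij}^{+} = w_{ij} - \eta\,\frac{\partial E_{total}}{\partial w_{ij}} \qquad (6)
```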
The number of iterations of the training task and the accuracy threshold for saving the model are set; the twin network then iterates continuously, updating the weights. The loss function is evaluated at every iteration, and the current iteration number and loss are printed every 50 iterations. Every 200 iterations, one round of validation is performed on the test set; if the accuracy exceeds the threshold, the model is saved and the threshold is updated, otherwise iteration continues. Training proceeds until the iteration stopping condition is met, and the optimal model is saved;
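The training schedule above can be sketched as a loop skeleton. The callback parameters (`train_step`, `evaluate`, `save`) are placeholders for the real batch update, validation and model-saving routines, which the patent does not specify at this level.

```python
def train(model, n_iter, acc_threshold, train_step, evaluate, save):
    """Training loop sketch: log the loss every 50 iterations, validate every
    200 iterations, and save the model whenever validation accuracy exceeds
    the current threshold (the threshold is then raised to that accuracy)."""
    for it in range(1, n_iter + 1):
        loss = train_step(model)              # one batch forward + backward pass
        if it % 50 == 0:
            print(f"iter {it}: loss {loss:.4f}")
        if it % 200 == 0:
            acc = evaluate(model)             # one round of validation on the test set
            if acc > acc_threshold:
                save(model)                   # keep the best model so far
                acc_threshold = acc           # update the saving threshold
    return acc_threshold
```

Raising the threshold after each save means only strictly better models are kept, so the final saved model is the best one seen during training.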
Step 4, SAR target identification;
After training of the twin network is completed, the optimal trained model is loaded for testing. SAR target identification testing requires the samples in the support set: when a sample in the test set is to be identified, it is first paired with every sample in the support set to form image pairs. These image pairs are fed into the trained twin network to compute the similarity between the sample and every sample in the support set. The 5 support samples with the largest similarity probabilities are then selected using the k-nearest-neighbor algorithm, and the class of the test sample is chosen by voting over the classes of these 5 support samples. The class with the most votes is the class of the sample; in the event of a tie, the class of the support sample with the largest similarity probability is directly selected as the class of the sample to be identified.
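The voting rule in step 4, including the tie-break in favor of the single most similar support sample, can be sketched as follows (the function name is illustrative; the similarity scores are assumed to be the twin network's outputs for the test sample paired with each support sample).

```python
import numpy as np

def classify(similarities, support_labels, k=5):
    """Vote among the k support samples most similar to the test sample.
    Ties are broken by the class of the single most similar support sample."""
    order = np.argsort(similarities)[::-1][:k]   # indices of the top-k similarities
    votes = {}
    for i in order:
        lab = support_labels[i]
        votes[lab] = votes.get(lab, 0) + 1
    best = max(votes.values())
    tied = [lab for lab, v in votes.items() if v == best]
    if len(tied) == 1:
        return tied[0]
    # tie: walk the top-k in descending similarity and return the first tied class
    for i in order:
        if support_labels[i] in tied:
            return support_labels[i]
```

For example, if the top-5 classes vote 2-2-1, the winner is whichever of the two tied classes contains the single highest-similarity support sample.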
2. The SAR target identification method based on a twin network for incomplete training sets according to claim 1, characterized in that: after all samples in the test set have completed the above identification steps in sequence, the identification accuracy is counted and displayed on the command line;
1) The trained model is loaded;
2) The sample to be tested is paired with the samples in the support set to form input pairs;
3) The input pairs are fed into the network to obtain similarity results;
4) The support-set class with the highest similarity to the sample to be tested is taken as the class of the sample to be tested, and identification is complete.
CN201811263248.1A 2018-10-28 2018-10-28 SAR target recognition method based on incomplete training set of twin network Active CN109508655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811263248.1A CN109508655B (en) 2018-10-28 2018-10-28 SAR target recognition method based on incomplete training set of twin network


Publications (2)

Publication Number Publication Date
CN109508655A true CN109508655A (en) 2019-03-22
CN109508655B CN109508655B (en) 2023-04-25

Family

ID=65746885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811263248.1A Active CN109508655B (en) 2018-10-28 2018-10-28 SAR target recognition method based on incomplete training set of twin network

Country Status (1)

Country Link
CN (1) CN109508655B (en)

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993236A (en) * 2019-04-10 2019-07-09 大连民族大学 Few sample language of the Manchus matching process based on one-shot Siamese convolutional neural networks
CN110033785A (en) * 2019-03-27 2019-07-19 深圳市中电数通智慧安全科技股份有限公司 A kind of calling for help recognition methods, device, readable storage medium storing program for executing and terminal device
CN110147788A (en) * 2019-05-27 2019-08-20 东北大学 A kind of metal plate and belt Product labelling character recognition method based on feature enhancing CRNN
CN110222792A (en) * 2019-06-20 2019-09-10 杭州电子科技大学 A kind of label defects detection algorithm based on twin network
CN110263863A (en) * 2019-06-24 2019-09-20 南京农业大学 Fine granularity mushroom phenotype recognition methods based on transfer learning Yu bilinearity InceptionResNetV2
CN110298397A (en) * 2019-06-25 2019-10-01 东北大学 The multi-tag classification method of heating metal image based on compression convolutional neural networks
CN110298391A (en) * 2019-06-12 2019-10-01 同济大学 A kind of iterative increment dialogue intention classification recognition methods based on small sample
CN110309729A (en) * 2019-06-12 2019-10-08 武汉科技大学 Tracking and re-detection method based on anomaly peak detection and twin network
CN110472667A (en) * 2019-07-19 2019-11-19 广东工业大学 Small object classification method based on deconvolution neural network
CN110490227A (en) * 2019-07-09 2019-11-22 武汉理工大学 A kind of few sample image classification method based on Feature Conversion
CN110503537A (en) * 2019-08-16 2019-11-26 南京云帐房网络科技有限公司 A kind of financial accounting data intelligence matching process and system
CN110516745A (en) * 2019-08-28 2019-11-29 北京达佳互联信息技术有限公司 Training method, device and the electronic equipment of image recognition model
CN110516735A (en) * 2019-08-27 2019-11-29 天津科技大学 A kind of natural gas line event category method based on LSTM network and Adam algorithm
CN110610191A (en) * 2019-08-05 2019-12-24 深圳优地科技有限公司 Elevator floor identification method and device and terminal equipment
CN110648320A (en) * 2019-09-19 2020-01-03 京东方科技集团股份有限公司 Bone age acquisition method and system, server, computer device and medium
CN110659591A (en) * 2019-09-07 2020-01-07 中国海洋大学 SAR image change detection method based on twin network
CN110728217A (en) * 2019-09-29 2020-01-24 五邑大学 SAR image recognition method, device, equipment and storage medium
CN110781928A (en) * 2019-10-11 2020-02-11 西安工程大学 Image similarity learning method for extracting multi-resolution features of image
CN110909814A (en) * 2019-11-29 2020-03-24 华南理工大学 Classification method based on feature separation
CN111091144A (en) * 2019-11-27 2020-05-01 云南电网有限责任公司电力科学研究院 Image feature point matching method and device based on depth pseudo-twin network
CN111160268A (en) * 2019-12-30 2020-05-15 北京化工大学 Multi-angle SAR target recognition method based on multi-task learning
CN111208759A (en) * 2019-12-30 2020-05-29 中国矿业大学(北京) Digital twin intelligent monitoring system for unmanned fully mechanized coal mining face of mine
CN111368909A (en) * 2020-03-03 2020-07-03 温州大学 Vehicle logo identification method based on convolutional neural network depth features
CN111382791A (en) * 2020-03-07 2020-07-07 北京迈格威科技有限公司 Deep learning task processing method, image recognition task processing method and device
CN111462817A (en) * 2020-03-25 2020-07-28 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Classification model construction method and device, classification model and classification method
CN111626197A (en) * 2020-05-27 2020-09-04 陕西理工大学 Human behavior recognition network model and recognition method
CN111814813A (en) * 2019-04-10 2020-10-23 北京市商汤科技开发有限公司 Neural network training and image classification method and device
CN111856578A (en) * 2020-07-31 2020-10-30 电子科技大学 Wide-azimuth prestack seismic reflection mode analysis method of tensor depth self-coding network
CN111858642A (en) * 2020-07-31 2020-10-30 科大讯飞股份有限公司 Data set updating method, related device and readable storage medium
CN111950596A (en) * 2020-07-15 2020-11-17 华为技术有限公司 Training method for neural network and related equipment
CN112016679A (en) * 2020-09-09 2020-12-01 平安科技(深圳)有限公司 Method and device for determining test sample class of twin network and terminal equipment
CN112308148A (en) * 2020-11-02 2021-02-02 创新奇智(青岛)科技有限公司 Defect category identification and twin neural network training method, device and storage medium
CN112465045A (en) * 2020-12-02 2021-03-09 东莞理工学院 Supply chain exception event detection method based on twin neural network
CN112631216A (en) * 2020-12-11 2021-04-09 江苏晶度半导体科技有限公司 Semiconductor test packaging production line performance prediction control system based on DQN and DNN twin neural network algorithm
CN112633104A (en) * 2020-12-15 2021-04-09 西安理工大学 Multi-subject motor imagery identification model and method of twin cascade flexible maximum network
CN112801037A (en) * 2021-03-01 2021-05-14 山东政法学院 Face tampering detection method based on continuous inter-frame difference
CN113030902A (en) * 2021-05-08 2021-06-25 电子科技大学 Twin complex network-based few-sample radar vehicle target identification method
CN113052295A (en) * 2021-02-27 2021-06-29 华为技术有限公司 Neural network training method, object detection method, device and equipment
TWI732467B (en) * 2019-05-23 2021-07-01 耐能智慧股份有限公司 Method of training sparse connected neural network
CN113177521A (en) * 2021-05-26 2021-07-27 电子科技大学 Intelligent radiation source identification method based on combined twin network
CN113361654A (en) * 2021-07-12 2021-09-07 广州天鹏计算机科技有限公司 Image identification method and system based on machine learning
CN113361645A (en) * 2021-07-03 2021-09-07 上海理想信息产业(集团)有限公司 Target detection model construction method and system based on meta-learning and knowledge memory
CN113589937A (en) * 2021-08-04 2021-11-02 浙江大学 Invasive brain-computer interface decoding method based on twin network kernel regression
CN113612733A (en) * 2021-07-07 2021-11-05 浙江工业大学 Twin network-based few-sample false data injection attack detection method
CN113673553A (en) * 2021-07-05 2021-11-19 浙江工业大学 Method and system for rapidly detecting and identifying few-sample target
CN114049507A (en) * 2021-11-19 2022-02-15 国网湖南省电力有限公司 Distribution network line insulator defect identification method, equipment and medium based on twin network
CN114399763A (en) * 2021-12-17 2022-04-26 西北大学 Single-sample and small-sample micro-body ancient biogenetic fossil image identification method and system
CN114550840A (en) * 2022-02-25 2022-05-27 杭州电子科技大学 Fentanyl substance detection method and device based on twin network
CN114900406A (en) * 2022-04-22 2022-08-12 深圳市人工智能与机器人研究院 Blind modulation signal identification method based on twin network
CN115294381A (en) * 2022-05-06 2022-11-04 兰州理工大学 Small sample image classification method and device based on feature migration and orthogonal prior
CN115345259A (en) * 2022-10-14 2022-11-15 北京睿企信息科技有限公司 Optimization method, equipment and storage medium for training named entity recognition model
CN116524282A (en) * 2023-06-26 2023-08-01 贵州大学 Discrete similarity matching classification method based on feature vectors

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186794A (en) * 2013-03-27 2013-07-03 西安电子科技大学 Polarized SAT (synthetic aperture radar) image classification method based on improved affinity propagation clustering
CN107358203A (en) * 2017-07-13 2017-11-17 西安电子科技大学 A kind of High Resolution SAR image classification method based on depth convolution ladder network
CN108388927A (en) * 2018-03-26 2018-08-10 西安电子科技大学 Small sample polarization SAR terrain classification method based on the twin network of depth convolution
CN108447057A (en) * 2018-04-02 2018-08-24 西安电子科技大学 SAR image change detection based on conspicuousness and depth convolutional network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEI GAO et al.: "Visual Saliency Modeling for River Detection in High-Resolution SAR Imagery", IEEE Access *
ZHANG Lamei et al.: "Fine classification of PolSAR images based on 3D convolutional neural networks", Infrared and Laser Engineering *
WANG Run et al.: "DeepRD: Android repackaged application detection method based on Siamese LSTM network", Journal on Communications *


Also Published As

Publication number Publication date
CN109508655B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN109508655A (en) The SAR target identification method of incomplete training set based on twin network
Jahanbakhshi et al. Classification of sour lemons based on apparent defects using stochastic pooling mechanism in deep convolutional neural networks
CN112308158B (en) Multi-source field self-adaptive model and method based on partial feature alignment
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN108717568B Image feature extraction and training method based on three-dimensional convolutional neural network
Suryawati et al. Deep structured convolutional neural network for tomato diseases detection
CN107169956B (en) Color woven fabric defect detection method based on convolutional neural network
Akshai et al. Plant disease classification using deep learning
CN106845401B (en) Pest image identification method based on multi-space convolution neural network
CN108596327B (en) Seismic velocity spectrum artificial intelligence picking method based on deep learning
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN106651830A (en) Image quality test method based on parallel convolutional neural network
CN110363253A Hot-rolled strip surface defect classification method based on convolutional neural networks
CN111695466B (en) Semi-supervised polarization SAR terrain classification method based on feature mixup
CN111582397B (en) CNN-RNN image emotion analysis method based on attention mechanism
CN111914728B (en) Hyperspectral remote sensing image semi-supervised classification method and device and storage medium
CN111160268A (en) Multi-angle SAR target recognition method based on multi-task learning
CN111783841A (en) Garbage classification method, system and medium based on transfer learning and model fusion
Alimboyong et al. An improved deep neural network for classification of plant seedling images
CN109741341A Image segmentation method based on superpixels and long short-term memory network
CN111695640B (en) Foundation cloud picture identification model training method and foundation cloud picture identification method
CN111639719A (en) Footprint image retrieval method based on space-time motion and feature fusion
CN113344045B (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN117152503A (en) Remote sensing image cross-domain small sample classification method based on false tag uncertainty perception
Rethik et al. Attention Based Mapping for Plants Leaf to Classify Diseases using Vision Transformer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant