CN107330405A - Remote-sensing image aircraft target recognition method based on convolutional neural networks - Google Patents

Remote-sensing image aircraft target recognition method based on convolutional neural networks

Info

Publication number
CN107330405A
Authority
CN
China
Prior art keywords
neural networks
convolutional neural
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710526744.0A
Other languages
Chinese (zh)
Inventor
刘坤
晁安娜
任蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN201710526744.0A priority Critical patent/CN107330405A/en
Publication of CN107330405A publication Critical patent/CN107330405A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a remote-sensing image aircraft target recognition method based on convolutional neural networks, comprising: S1, building an aircraft image library, including training images and test images; S2, initializing a convolutional neural network and setting the training process of the convolutional neural network; S3, initializing the parameters of the convolutional neural network; S4, reading the training image data and performing the convolution and pooling training operations on it according to the training process of the convolutional neural network, to obtain the actual output of the training images; S5, adjusting the parameters of the convolutional neural network so that the error between the specified target output and the actual output of the training image data meets the precision requirement; S6, reading the test images and, using the test network, outputting the remote-sensing image aircraft target recognition results. The present invention is stable with respect to rotation, scaling and other deformations, improves generality, and raises recognition accuracy and noise immunity.

Description

Remote-sensing image aircraft target recognition method based on convolutional neural networks
Technical field
The present invention relates to a remote-sensing image aircraft target recognition method, and in particular to a remote-sensing image aircraft target recognition method based on convolutional neural networks, belonging to the field of deep learning technology.
Background technology
Aircraft recognition in remote-sensing images is of great significance in both civilian and military fields. Nowadays, as data volumes grow and targets become more similar, the collected remote-sensing aircraft images suffer from many sources of interference, such as occlusion, noise, viewpoint changes and complex backgrounds. How to accurately identify each aircraft type in a complex environment while reducing computational complexity has therefore become a key research focus in computer vision.
Traditional aircraft target recognition generally uses template matching, which has the advantages of a simple algorithm and a small amount of computation. However, precisely because the computation is simple, extracting the overall shape of an aircraft from an image is extremely difficult in real environments, and the method cannot cope with scale changes of the aircraft target.
At present, the most widely used approach in the field of aircraft target recognition is based on invariant moments; representative invariant-feature extraction methods include Hu moments, Zernike moments and wavelet moments. The mainstream practice is to recognize aircraft targets with an optimal combination of moments, taking the extracted multi-dimensional invariant moments as recognition features and then classifying the aircraft target with an SVM (Support Vector Machine) or a BP (Back Propagation) neural network. Although this approach overcomes the weak descriptive power of a single feature, fusing multiple features is difficult and the noise immunity is poor. In addition, although BP neural networks have good learning and generalization ability, the fixed learning rate and the difficulty of choosing the learning step and momentum factor make the network converge slowly, limit its error-correction capability, and may even cause the algorithm to converge to a local minimum.
In summary, existing aircraft target recognition methods all rely on complicated feature extraction combined with an SVM or a shallow neural network. Such methods are extremely difficult to apply to large-scale data, and their recognition accuracy is relatively low. Therefore, aiming at the large data volume and complex background environments of collected remote-sensing aircraft images, the present invention proposes a remote-sensing image aircraft target recognition method based on convolutional neural networks.
Summary of the invention
An object of the present invention is to provide a remote-sensing image aircraft target recognition method based on convolutional neural networks that is stable with respect to rotation, scaling and other deformations, and that, for the large data volume and complex background environments of collected remote-sensing aircraft images, improves the generality of the algorithm as well as its recognition accuracy and noise immunity.
To achieve the above object, the present invention provides a remote-sensing image aircraft target recognition method based on convolutional neural networks, comprising the following steps:
S1, building an aircraft image library, composed of an experimental aircraft model library and a real remote-sensing aircraft model library, including training images and test images;
S2, initializing a convolutional neural network and setting the training process of the convolutional neural network;
S3, initializing the parameters of the convolutional neural network;
S4, reading the training image data and performing the convolution and pooling training operations on it according to the training process of the convolutional neural network, to obtain the actual output of the training images;
S5, adjusting the parameters of the convolutional neural network so that the error between the specified target output and the actual output of the training image data meets the precision requirement, completing the training of the convolutional neural network;
S6, reading the test images and, using the trained convolutional neural network as the test network, outputting the remote-sensing image aircraft target recognition results.
In said S1, the experimental aircraft model library contains experimental aircraft images of n aircraft types; these experimental aircraft images are normalized and binarized.
In said S1, the real remote-sensing aircraft model library contains remote-sensing aircraft images of n aircraft types taken at an aircraft boneyard; these remote-sensing aircraft images are converted to grayscale.
In said S1, the actually collected experimental aircraft images and remote-sensing aircraft images are pre-processed, including: scaling, rotation, affine transformation, noise addition, motion blur, brightness change and occlusion at arbitrary positions; the pre-processed images are divided into test images and training images.
In said S2, the step of initializing the convolutional neural network is specifically: the convolutional neural network is set up with 5 network layers, namely 2 convolutional layers, 2 fully connected layers and 1 Softmax classification layer; moreover, each convolutional layer includes a pooling layer, using 2 × 2 max pooling.
In said S2, the step of setting the training process of the convolutional neural network specifically includes:
S21, inputting a training image into the convolutional neural network and convolving the input image with convolution kernels, to obtain the feature maps of the first convolutional layer;
S22, pooling the feature maps of the first convolutional layer with 2 × 2 max pooling, to obtain the feature maps of the first pooling layer;
S23, convolving the feature maps of the first pooling layer with convolution kernels, to obtain the feature maps of the second convolutional layer;
S24, pooling the feature maps of the second convolutional layer with 2 × 2 max pooling, to obtain the feature maps of the second pooling layer;
S25, setting a first fully connected layer connected to the second pooling layer, and a second fully connected layer connected to the first fully connected layer;
S26, setting a Softmax classification layer connected to the second fully connected layer, with the number of output neurons set to n, corresponding to the classification results of the n aircraft types.
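The feature-map sizes implied by the steps above — a valid 5 × 5 convolution followed by 2 × 2 max pooling, twice, on the 64 × 64 input given in the embodiment — can be traced in a few lines of plain Python. The helper names are illustrative, not part of the patent text:

```python
def conv_out(size, kernel):
    # "valid" convolution: no padding, stride 1
    return size - kernel + 1

def pool_out(size, window):
    # non-overlapping max pooling
    return size // window

size = 64                      # input image is 64 x 64
trace = [size]
for layer in ("conv5", "pool2", "conv5", "pool2"):
    size = conv_out(size, 5) if layer == "conv5" else pool_out(size, 2)
    trace.append(size)

print(trace)  # [64, 60, 30, 26, 13]
```

The result matches the sizes stated later for layers C1, S1, C2 and S2 (60, 30, 26, 13).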
In said S3, the step of initializing the parameters of the convolutional neural network is specifically: on the training set composed of the training images, setting the weights V_ij from input unit i to hidden unit j under each pattern; setting the weights W_jk from hidden unit j to output unit k; setting the threshold θ_k of output unit k; setting the threshold φ_j of hidden unit j; setting the precision control parameter ε; setting the learning rate α; setting the weights to be adjusted once for every batchsize training samples; and setting the number of iteration epochs.
In said S4, the following steps are specifically included:
S41, forward propagation stage: reading arbitrary training image data X_k from the training set, inputting it into the convolutional neural network, and specifying the target output O_k;
S42, convolution process: layer by layer, the training image input into the convolutional neural network, the feature maps of the training image in the first convolutional layer and the feature maps of the training image in the second convolutional layer are each convolved with trainable filter kernels k_j, and a bias b_j is added, to obtain each convolutional layer; the convolution specifically takes the form:
x_j^(l) = f( Σ_{i∈M_j} x_i^(l−1) * k_j + b_j )
wherein j denotes the j-th feature map; l denotes the index of the convolutional layer; M_j denotes the set of input feature maps; x_i denotes an input feature map selected in convolutional layer l−1 and belonging to the input feature map combination M_j; f(x) denotes the randomized rectified linear unit activation function RReLU:
f(x_ji) = x_ji, if x_ji ≥ 0;  f(x_ji) = a_ji · x_ji, if x_ji < 0
wherein a_ji ~ U(l, u), l < u and l, u ∈ [0, 1); U(l, u) is the uniform distribution, and a_ji is a random number sampled from the uniform distribution U(l, u);
S43, pooling process: using the max-pooling model, the maximum value within the pooling region is taken as the feature map after sub-sampling pooling, i.e.:
S_ij = max(F_ij) + b;
wherein F_ij is the input feature map matrix, and i, j denote the row and column indices of the matrix respectively; the sub-sampling pooling region is a 2 × 2 matrix, b is the bias, and S_ij is the feature map after sub-sampling pooling; max(F_ij) denotes the maximum value taken from a 2 × 2 pooling region of the input feature map matrix F_ij;
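The pooling rule of S43 amounts to the following sketch in plain Python; the helper name is hypothetical, and a feature map is represented simply as a list of equal-length rows:

```python
def max_pool_2x2(fmap, bias=0.0):
    """2 x 2 non-overlapping max pooling over a 2D feature map,
    adding a shared bias b to each pooled value, as in S43."""
    rows, cols = len(fmap), len(fmap[0])
    pooled = []
    for r in range(0, rows - 1, 2):
        out_row = []
        for c in range(0, cols - 1, 2):
            window = (fmap[r][c], fmap[r][c + 1],
                      fmap[r + 1][c], fmap[r + 1][c + 1])
            out_row.append(max(window) + bias)
        pooled.append(out_row)
    return pooled

fmap = [[1, 3, 2, 0],
        [5, 2, 1, 4],
        [0, 1, 7, 6],
        [2, 8, 3, 3]]
print(max_pool_2x2(fmap))  # [[5, 4], [8, 7]]
```

Each 2 × 2 window keeps only its largest value, which is what makes the representation tolerant of small translations of the target.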
S44, computing the dot product of each of the n input feature maps of each layer with the corresponding weight matrix W, to obtain the actual output Y_k of the training image:
Y_k = F_n(…(F_2(F_1(x_k W^(1)) W^(2))…) W^(n)).
In said S5, the following steps are specifically included:
S51, back-propagation stage: according to the error between the actual output Y_k obtained after the training image has passed through the convolutional neural network and the specified target output O_k, computing the output error value E_k of the k-th training image, i.e.:
E_k = (1/2) Σ_{k=1}^{M} (o_k − y_k)², with y_k = f( Σ_{j=1}^{L} W_jk · h_j − θ_k ) and h_j = f( Σ_{i=1}^{N} V_ij · x_i − φ_j );
wherein M denotes the number of output-layer units; y_k denotes the output of each output-layer unit; h_j denotes the output of each intermediate-layer unit; L denotes the number of intermediate-layer units; N denotes the number of input-layer units; f(x) is the activation function RReLU;
S52, according to the error value E_k, feeding back to the convolutional neural network by the method of error minimization, and computing the weight adjustment:
δ_k = (o_k − y_k) · y_k · (1 − y_k);
wherein δ_k denotes the error term of each output-layer unit;
S53, adjusting the weights according to the weight adjustment:
W_jk(n+1) = W_jk(n) + ΔW_jk(n);
V_ij(n+1) = V_ij(n) + ΔV_ij(n);
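Steps S51–S53 correspond to the textbook delta rule for a sigmoid-like output layer. A minimal numeric sketch, assuming the standard form ΔW_jk = α · δ_k · h_j (the patent does not spell out ΔW itself):

```python
def output_delta(o, y):
    # delta_k = (o_k - y_k) * y_k * (1 - y_k), as in S52
    return (o - y) * y * (1 - y)

def update_weight(w, alpha, delta, h):
    # W_jk(n+1) = W_jk(n) + Delta W_jk(n), with Delta W = alpha * delta * h
    return w + alpha * delta * h

# One output unit and one hidden unit: target o = 1.0, actual y = 0.8,
# hidden activation h = 0.5, learning rate alpha = 0.1.
d = output_delta(1.0, 0.8)           # 0.2 * 0.8 * 0.2 = 0.032
w = update_weight(0.4, 0.1, d, 0.5)  # 0.4 + 0.1 * 0.032 * 0.5 = 0.4016
print(d, w)
```

Because the update is proportional to y_k(1 − y_k), corrections shrink as the unit saturates near 0 or 1, which is one source of the slow convergence the background section attributes to plain BP training.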
S54, according to the error value E_k, feeding back to the convolutional neural network by the method of error minimization, and computing the threshold adjustment:
δ_j = h_j · (1 − h_j) · Σ_{k=1}^{M} δ_k · W_jk;
wherein δ_j denotes the error term of each hidden unit of the intermediate layer;
S55, adjusting the thresholds according to the threshold adjustment:
θ_k(n+1) = θ_k(n) + Δθ_k(n);
S56, computing the total output error E:
E = Σ_k E_k;
wherein k = 1, 2, …, M;
S57, judging whether the total output error E satisfies E ≤ ε; if so, continuing to S6; if not, returning to S3.
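Steps S3–S57 form an outer loop that keeps adjusting parameters until the total error drops below ε. A sketch of that control flow only, with a stand-in error function rather than the actual network:

```python
def train_until_converged(step, epsilon=0.01, max_epochs=20):
    """Repeat the training step until total error E <= epsilon (S57),
    or until the epoch budget is exhausted (epoch = 20 in S3)."""
    E = float("inf")
    for epoch in range(max_epochs):
        E = step(epoch)
        if E <= epsilon:        # precision requirement met: proceed to S6
            return epoch, E
    return max_epochs, E        # budget exhausted without converging

# Stand-in "training step": total error halves each epoch.
epoch, E = train_until_converged(lambda e: 1.0 / (2 ** (e + 1)))
print(epoch, E)  # 6 0.0078125
```

With ε = 0.01 and an error that halves each epoch, the loop exits on epoch 6, well inside the 20-epoch budget given in S3.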
In said S6, the output layer of the test network uses the Softmax classification layer for the classification results of the n aircraft types, and the number of output neurons is set to n.
In summary, the remote-sensing image aircraft target recognition method based on convolutional neural networks provided by the present invention is stable with respect to rotation, scaling and other deformations, and, for the large data volume and complex background environments of collected remote-sensing aircraft images, improves the generality of the algorithm as well as its recognition accuracy and noise immunity.
Brief description of the drawings
Fig. 1 is the flow chart of the remote-sensing image aircraft target recognition method based on convolutional neural networks in the present invention;
Fig. 2 is the schematic diagram of training-image learning based on the convolutional neural network in the present invention.
Embodiment
A preferred embodiment of the present invention is described in detail below with reference to Fig. 1 and Fig. 2.
As shown in Fig. 1, the remote-sensing image aircraft target recognition method based on convolutional neural networks provided by the present invention comprises the following steps:
S1, building an aircraft image library, composed of an experimental aircraft model library and a real remote-sensing aircraft model library, including training images and test images;
S2, initializing a convolutional neural network and setting the training process of the convolutional neural network;
S3, initializing the parameters of the convolutional neural network;
S4, reading the training image data and performing the convolution and pooling training operations on it according to the training process of the convolutional neural network, to obtain the actual output of the training images;
S5, adjusting the parameters of the convolutional neural network so that the error between the specified target output and the actual output of the training image data meets the precision requirement, completing the training of the convolutional neural network;
S6, reading the test images and, using the trained convolutional neural network as the test network, outputting the remote-sensing image aircraft target recognition results.
Here, the convolutional neural network (Convolutional Neural Network, CNN) is a combination of artificial neural networks and deep learning. A traditional multilayer neural network contains only an input layer, hidden layers and an output layer, and the hidden layers are difficult to determine. A convolutional neural network adds locally connected convolutional layers and pooling layers to the original multilayer neural network, imitating the way the human brain classifies signals. Through local receptive fields, weight sharing and pooling operations, the convolutional neural network reduces the number of training parameters and the computational complexity. This network structure is stable with respect to rotation, scaling and other deformations of the aircraft. When facing large data volumes, the convolutional neural network can take images directly as input, avoiding complicated feature extraction and data reconstruction and improving the recognition rate.
In said S1, the experimental aircraft model library contains experimental aircraft images of n aircraft types such as A-10 attack aircraft, B-2 bombers, B-52 bombers, E-A early-warning aircraft, F-15 fighters and F-16 fighters; these experimental aircraft images are normalized and binarized.
In said S1, the real remote-sensing aircraft model library contains remote-sensing aircraft images, taken at an aircraft boneyard, of n aircraft types such as A-10 attack aircraft, B-2 bombers, B-52 bombers, E-A early-warning aircraft, F-15 fighters and F-16 fighters; these remote-sensing aircraft images are converted to grayscale.
In said S1, in order to enlarge the data volume, the actually collected experimental aircraft images and remote-sensing aircraft images under complex background environments need to be pre-processed, including scaling, rotation, affine transformation, noise addition, motion blur, brightness change, occlusion at arbitrary positions and similar operations, so that for every aircraft type in the experimental aircraft model library and the real remote-sensing aircraft model library the images covering each viewing angle reach 11000 per type; among them, 10000 images covering the different angles of each type are chosen as training images (sample images), each of size 64 × 64, and 1000 images covering the different angles of each type are chosen as test images, each of size 64 × 64. The training images and test images therefore do not overlap.
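The split described above (11000 augmented images per type, 10000 for training and 1000 for testing) can be checked with a small sketch; the type list reuses the examples in the text, and taking n = 6 is an illustrative assumption:

```python
types = ["A-10", "B-2", "B-52", "E-A", "F-15", "F-16"]  # n = 6 example types
per_type, train_per_type, test_per_type = 11000, 10000, 1000

# The train/test split is disjoint and covers all augmented images.
assert train_per_type + test_per_type == per_type

total_train = len(types) * train_per_type
total_test = len(types) * test_per_type
print(total_train, total_test)  # 60000 6000
```

With these numbers, each epoch sees 60000 training images of size 64 × 64, which is the scale at which the background section argues hand-crafted feature extraction becomes impractical.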
In said S2, the step of initializing the convolutional neural network is specifically: the convolutional neural network is set up with 5 network layers, namely 2 convolutional layers, 2 fully connected layers and 1 Softmax classification layer; the convolution kernel size of each convolutional layer is 5 × 5, and the numbers of convolution kernels are 4 and 8 respectively; moreover, each convolutional layer includes a pooling layer, using 2 × 2 max pooling.
As shown in Fig. 2, in said S2, the step of setting the training process of the convolutional neural network specifically includes:
S21, inputting the training image, of size 64 × 64, into the convolutional neural network and convolving the input image with 4 convolution kernels of size 5 × 5, to obtain the first convolutional layer C1 with 4 feature maps of size 60 × 60;
S22, pooling the feature maps in the first convolutional layer C1 with 2 × 2 max pooling, to obtain the first pooling layer S1 with 4 feature maps of size 30 × 30;
S23, convolving the feature maps in the first pooling layer S1 with 8 convolution kernels of size 5 × 5, to obtain the second convolutional layer C2 with 8 feature maps of size 26 × 26;
S24, pooling the feature maps in the second convolutional layer C2 with 2 × 2 max pooling, to obtain the second pooling layer S2 with 8 feature maps of size 13 × 13;
S25, in order to obtain a better fitting effect, setting a first fully connected layer F5 connected to the second pooling layer S2, and a second fully connected layer F6 connected to the first fully connected layer F5; a DropConnect function is set in each fully connected layer to randomly set weights to 0, with the probability set to 0.5;
wherein said DropConnect function is an optimization of the regularization function Dropout, randomly removing connections before the activation function, thereby reducing the amount of computation;
S26, setting a Softmax classification layer connected to the second fully connected layer F6, with the number of output neurons set to n, corresponding to the classification results of the n aircraft types.
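The DropConnect regularization used in S25 zeroes individual weights (rather than whole activations, as Dropout does) with probability 0.5 before the activation is applied. A minimal sketch in plain Python, with hypothetical names:

```python
import random

def dropconnect(weights, p=0.5, rng=random):
    """Randomly zero each weight with probability p (training mode).
    Unlike Dropout, the mask applies to weights, not to activations."""
    return [0.0 if rng.random() < p else w for w in weights]

rng = random.Random(0)  # seeded for reproducibility in this sketch
masked = dropconnect([0.2, -0.5, 0.7, 0.1], p=0.5, rng=rng)
# Every surviving entry equals the original weight; the rest become 0.
print(masked)
```

A fresh mask is drawn for each training sample (or mini-batch); at test time the full weight matrix is used, typically rescaled, although the patent text does not describe the inference-time behavior.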
In said S3, the step of initializing the parameters of the convolutional neural network is specifically: on the training set composed of the training images, setting the weights V_ij from input unit i to hidden unit j under each pattern, the weights W_jk from hidden unit j to output unit k, and the threshold θ_k of output unit k; setting the threshold φ_j of hidden unit j to a random value close to 0; setting the initial value of the precision control parameter ε to 0.01 and the initial value of the learning rate α to 0.1; setting batchsize to 50, i.e. the weights are adjusted once for every 50 training samples; and setting the number of iteration epochs to 20.
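The hyperparameters of S3 can be collected into a configuration sketch; the dictionary keys are illustrative, while the values are those given in the text:

```python
config = {
    "epsilon": 0.01,     # precision control parameter
    "alpha": 0.1,        # learning rate
    "batchsize": 50,     # weights adjusted once per 50 training samples
    "epochs": 20,        # iteration cycles
    "input_size": (64, 64),
    "conv_kernels": [(4, 5, 5), (8, 5, 5)],  # (count, h, w) per conv layer
    "pool_window": (2, 2),
    "dropconnect_p": 0.5,
}

# Assuming 6 aircraft types x 10000 training images each (the counts
# from S1), one epoch performs this many weight updates:
updates_per_epoch = 6 * 10000 // config["batchsize"]
print(updates_per_epoch)  # 1200
```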
In said S4, the following steps are specifically included:
S41, forward propagation stage: reading arbitrary training image data X_k from the training set, inputting it into the convolutional neural network, and specifying the target output O_k, i.e. the aircraft type to which the training image data X_k belongs, which is the training result obtained under supervision;
S42, convolution process: layer by layer, the training image input into the convolutional neural network, the feature maps of the training image in the first convolutional layer and the feature maps of the training image in the second convolutional layer are each convolved with trainable filter kernels k_j, and a bias b_j is added, to obtain each convolutional layer C_j; the convolution specifically takes the form:
x_j^(l) = f( Σ_{i∈M_j} x_i^(l−1) * k_j + b_j )
wherein j denotes the j-th feature map; l denotes the index of the convolutional layer; M_j denotes the set of input feature maps; x_i denotes an input feature map selected in convolutional layer l−1 and belonging to the input feature map combination M_j; f(x) denotes the randomized rectified linear unit activation function RReLU:
f(x_ji) = x_ji, if x_ji ≥ 0;  f(x_ji) = a_ji · x_ji, if x_ji < 0
wherein a_ji ~ U(l, u), l < u and l, u ∈ [0, 1); U(l, u) is the uniform distribution, and a_ji is a random number sampled from the uniform distribution U(l, u);
S43, pooling process: using the max-pooling model, the maximum value within the pooling region is taken as the feature map after sub-sampling pooling, i.e.:
S_ij = max(F_ij) + b;
wherein F_ij is the input feature map matrix, and i, j denote the row and column indices of the matrix respectively; the sub-sampling pooling region is a 2 × 2 matrix, b is the bias, and S_ij is the feature map after sub-sampling pooling; max(F_ij) denotes the maximum value taken from a 2 × 2 pooling region of the input feature map matrix F_ij;
S44, computing the dot product of each of the n input feature maps of each layer with the corresponding weight matrix W, to obtain the actual output Y_k of the training image:
Y_k = F_n(…(F_2(F_1(x_k W^(1)) W^(2))…) W^(n)).
In said S5, the following steps are specifically included:
S51, back-propagation stage: according to the error between the actual output Y_k obtained after the training image has passed through the convolutional neural network and the specified target output O_k, computing the output error value E_k of the k-th training image, i.e.:
E_k = (1/2) Σ_{k=1}^{M} (o_k − y_k)², with y_k = f( Σ_{j=1}^{L} W_jk · h_j − θ_k ) and h_j = f( Σ_{i=1}^{N} V_ij · x_i − φ_j );
wherein M denotes the number of output-layer units; y_k denotes the output of each output-layer unit; h_j denotes the output of each intermediate-layer unit; L denotes the number of intermediate-layer units; N denotes the number of input-layer units; f(x) is the activation function RReLU;
S52, according to the error value E_k, feeding back to the convolutional neural network by the method of error minimization, and computing the weight adjustment:
δ_k = (o_k − y_k) · y_k · (1 − y_k);
wherein δ_k denotes the error term of each output-layer unit;
S53, adjusting the weights according to the weight adjustment:
W_jk(n+1) = W_jk(n) + ΔW_jk(n);
V_ij(n+1) = V_ij(n) + ΔV_ij(n);
S54, according to the error value E_k, feeding back to the convolutional neural network by the method of error minimization, and computing the threshold adjustment:
δ_j = h_j · (1 − h_j) · Σ_{k=1}^{M} δ_k · W_jk;
wherein δ_j denotes the error term of each hidden unit of the intermediate layer;
S55, adjusting the thresholds according to the threshold adjustment:
θ_k(n+1) = θ_k(n) + Δθ_k(n);
S56, computing the total output error E:
E = Σ_k E_k;
wherein k = 1, 2, …, M;
S57, judging whether the total output error E satisfies E ≤ ε; if so, continuing to S6; if not, returning to S3.
In said S6, the output layer of the test network uses the Softmax classification layer for the classification results of the n aircraft types, and the number of output neurons is set to n.
In summary, in the prior art, aircraft target recognition is mostly performed on aircraft images taken from a single viewing angle in specific scenes. The remote-sensing aircraft images that are actually collected are more complex; interfering factors such as viewpoint and scene changes, noise and cloud cover lead to higher misclassification rates, and for large-scale data sets feature extraction is more difficult.
Therefore, the present invention uses convolutional neural networks for remote-sensing image aircraft target recognition. Since the convolutional neural network reduces the number of training parameters and the computational complexity through local receptive fields and weight sharing, this network structure is stable with respect to rotation, scaling and other deformations; for the large data volume and complex background environments of collected remote-sensing aircraft images, it improves the generality of the algorithm as well as its recognition accuracy and noise immunity.
Although the content of the present invention has been described in detail through the above preferred embodiments, it should be recognized that the above description should not be considered a limitation of the present invention. After those skilled in the art have read the above, various modifications and substitutions of the present invention will be apparent. Therefore, the protection scope of the present invention should be limited by the appended claims.

Claims (9)

1. A remote-sensing image aircraft target recognition method based on convolutional neural networks, characterized by comprising the following steps:
S1, building an aircraft image library, composed of an experimental aircraft model library and a real remote-sensing aircraft model library, including training images and test images;
S2, initializing a convolutional neural network and setting the training process of the convolutional neural network;
S3, initializing the parameters of the convolutional neural network;
S4, reading the training image data and performing the convolution and pooling training operations on it according to the training process of the convolutional neural network, to obtain the actual output of the training images;
S5, adjusting the parameters of the convolutional neural network so that the error between the specified target output and the actual output of the training image data meets the precision requirement, completing the training of the convolutional neural network;
S6, reading the test images and, using the trained convolutional neural network as the test network, outputting the remote-sensing image aircraft target recognition results.
2. The remote-sensing image aircraft target recognition method based on convolutional neural networks as claimed in claim 1, characterized in that in said S1, the experimental aircraft model library contains experimental aircraft images of n aircraft types, which are normalized and binarized; the real remote-sensing aircraft model library contains remote-sensing aircraft images of n aircraft types taken at an aircraft boneyard, which are converted to grayscale.
3. The remote-sensing image aircraft target recognition method based on convolutional neural networks as claimed in claim 2, characterized in that in said S1, the actually collected experimental aircraft images and remote-sensing aircraft images are pre-processed, including: scaling, rotation, affine transformation, noise addition, motion blur, brightness change and occlusion at arbitrary positions; the pre-processed images are divided into test images and training images.
4. The remote-sensing image aircraft target recognition method based on convolutional neural networks as claimed in claim 3, characterized in that in said S2, the step of initializing the convolutional neural network is specifically: the convolutional neural network is set up with 5 network layers, namely 2 convolutional layers, 2 fully connected layers and 1 Softmax classification layer; moreover, each convolutional layer includes a pooling layer, using 2 × 2 max pooling.
5. The convolutional-neural-network-based method for aircraft target recognition in remote sensing images as claimed in claim 4, characterized in that in said S2, the step of configuring the training process of the convolutional neural network specifically comprises:
S21, inputting the training images into the convolutional neural network and convolving each input image with convolution kernels to obtain the feature maps of the first convolutional layer;
S22, pooling the feature maps of the first convolutional layer by 2 × 2 max pooling to obtain the feature maps of the first pooling layer;
S23, convolving the feature maps of the first pooling layer with convolution kernels to obtain the feature maps of the second convolutional layer;
S24, pooling the feature maps of the second convolutional layer by 2 × 2 max pooling to obtain the feature maps of the second pooling layer;
S25, configuring a first fully connected layer connected to the second pooling layer, and a second fully connected layer connected to the first fully connected layer;
S26, configuring a Softmax classification layer connected to the second fully connected layer, with the number of output neurons set to n, corresponding to the classification results of the n aircraft types.
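The layer sequence of S21–S26 can be sketched as follows. This is an illustrative NumPy implementation, not part of the claims: plain ReLU is used in place of the claimed RReLU so the output is deterministic, and all shapes, kernels and weight matrices are hypothetical placeholders.

```python
import numpy as np

def conv2d(x, k, b):
    """'Valid' 2-D convolution of a single-channel feature map x with kernel k, plus bias b."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k) + b
    return out

def max_pool_2x2(x):
    """2 x 2 max pooling with stride 2, as in S22/S24 (trailing odd rows/columns dropped)."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:h * 2, :w * 2].reshape(h, 2, w, 2).max(axis=(1, 3))

def softmax(z):
    """Softmax over the n output neurons of S26."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(img, k1, b1, k2, b2, W1, W2):
    """conv -> pool -> conv -> pool -> two fully connected layers -> Softmax (S21-S26)."""
    f = np.maximum(conv2d(img, k1, b1), 0)   # first convolutional layer (S21)
    f = max_pool_2x2(f)                      # first pooling layer (S22)
    f = np.maximum(conv2d(f, k2, b2), 0)     # second convolutional layer (S23)
    f = max_pool_2x2(f).ravel()              # second pooling layer, flattened (S24)
    h = np.maximum(W1 @ f, 0)                # first fully connected layer (S25)
    return softmax(W2 @ h)                   # second FC layer feeding Softmax (S26)
```

For a hypothetical 12 × 12 input with a 3 × 3 and a 2 × 2 kernel, the feature maps shrink as 12 → 10 → 5 → 4 → 2, leaving 4 features for the fully connected layers; the output is a probability vector over n classes.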
6. The convolutional-neural-network-based method for aircraft target recognition in remote sensing images as claimed in claim 5, characterized in that in said S3, the step of initializing the parameters of the convolutional neural network specifically comprises: on the training set composed of the training images, setting the weight V_ij from input unit i to hidden unit j under each pattern; setting the weight W_jk from hidden unit j to output unit k; setting the threshold θ_k of output unit k; setting the threshold of hidden unit j; setting the accuracy control parameter ε; setting the learning rate α; setting the weights to be adjusted once per batchsize training samples; and setting the number of iteration cycles (epochs).
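A minimal sketch of this initialization step follows. The parameter names track the claim, but every numeric value and layer size below is a hypothetical placeholder, since the patent specifies none:

```python
import random

# Hyper-parameters named in S3; the values here are illustrative placeholders.
epsilon = 1e-3       # accuracy control parameter
alpha = 0.1          # learning rate
batch_size = 32      # weights adjusted once per batch_size training samples
epochs = 50          # iteration cycles

random.seed(0)
n_in, n_hidden, n_out = 64, 32, 10   # hypothetical layer sizes

# Weights V_ij (input unit i -> hidden unit j) and W_jk (hidden unit j ->
# output unit k), drawn from a small uniform range, plus per-unit thresholds.
V = [[random.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_in)]
W = [[random.uniform(-0.1, 0.1) for _ in range(n_out)] for _ in range(n_hidden)]
theta_out = [0.0] * n_out        # thresholds theta_k of the output units
theta_hidden = [0.0] * n_hidden  # thresholds of the hidden units
```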
7. The convolutional-neural-network-based method for aircraft target recognition in remote sensing images as claimed in claim 6, characterized in that said S4 specifically comprises the following steps:
S41, forward propagation stage: reading arbitrary training image data X_k from the training set, inputting it into the convolutional neural network, and specifying the target output O_k;
S42, convolution process: layer by layer, convolving the training image input to the convolutional neural network, its feature maps in the first convolutional layer and its feature maps in the second convolutional layer with the trainable filters k_j, and adding the bias b_j, to obtain each convolutional layer; specifically:
x_j^l = f( Σ_{i∈M_j} x_i^{l-1} × k_j^l + b_j^l );
wherein j denotes the j-th feature map; l denotes the index of the convolutional layer; M_j denotes the set of input feature maps; x_i denotes an input feature map selected from convolutional layer l−1 and belonging to the input feature map combination M_j; f(x) denotes the randomized leaky rectified linear unit activation function RReLU, with:
f(x_ji) = a_ji · x_ji, if x_ji < 0;   f(x_ji) = x_ji, if x_ji ≥ 0;
wherein a_ji ~ U(l, u), with l < u and l, u ∈ [0, 1); U(l, u) denotes the uniform distribution on (l, u), and a_ji is a random number sampled from U(l, u);
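The RReLU activation of S42 can be sketched in a few lines. The bounds l < u in [0, 1) follow the claim; the particular defaults (1/8 and 1/3) and the use of the midpoint slope outside training are illustrative assumptions, not taken from the patent:

```python
import random

random.seed(42)

def rrelu(x, l=0.125, u=1.0 / 3.0, training=True):
    """S42 activation: f(x) = x for x >= 0, and a*x for x < 0 with a ~ U(l, u).
    Defaults l = 1/8, u = 1/3 are hypothetical; when not training, the
    deterministic midpoint slope (l + u) / 2 is used instead of sampling."""
    if x >= 0:
        return x
    a = random.uniform(l, u) if training else (l + u) / 2.0
    return a * x
```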
S43, pooling process: using the max pooling model, taking the maximum value within each pooling region as the sub-sampled (pooled) feature map, i.e.:
S_ij = max(F_ij) + b;
wherein F_ij is the input feature map matrix, with i and j denoting its row and column indices; the sub-sampling pooling region is a 2 × 2 matrix; b is a bias; S_ij is the pooled feature map; and max(F_ij) denotes the maximum value taken over a 2 × 2 pooling region of the input feature map matrix F_ij;
S44, computing the dot product of the n input feature maps of each layer with the corresponding weight matrices W to obtain the actual output Y_k of the training image:
Y_k = F_n(…(F_2(F_1(X_k W^(1)) W^(2))…) W^(n)).
8. The convolutional-neural-network-based method for aircraft target recognition in remote sensing images as claimed in claim 7, characterized in that said S5 specifically comprises the following steps:
S51, back propagation stage: from the error between the actual output Y_k obtained after the training image passes through the convolutional neural network and the specified target output O_k, computing the output error E_k of the k-th training image, i.e.:
E_k = (1/2) Σ_{j=0}^{M-1} (o_k − y_k)²;
y_k = f( Σ_{j=0}^{L-1} W_jk h_j + θ_k );
wherein M denotes the number of units in the output layer; y_k denotes the output of each output-layer unit; h_j denotes the output of each intermediate-layer unit; L denotes the number of units in the intermediate layer; N denotes the number of units in the input layer; f(x) is the RReLU activation function;
S52, according to the error E_k, feeding back to the convolutional neural network by the method of error minimization and computing the weight adjustments:
ΔW_jk(n) = (α / (1 + L)) × (ΔW_jk(n−1) + 1) × δ_k × h_j;
ΔV_ij(n) = (α / (1 + N)) × (ΔV_ij(n−1) + 1) × δ_k × h_j;
δ_k = (o_k − y_k) · y_k · (1 − y_k);
wherein δ_k denotes the error term of each output-layer unit;
S53, adjusting the weights according to the weight adjustments:
W_jk(n+1) = W_jk(n) + ΔW_jk(n);
V_ij(n+1) = V_ij(n) + ΔV_ij(n);
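As a hedged illustration of S52–S53, the hidden-to-output update can be written out directly from the claimed formulas (the rule is implemented literally as stated; α, L and the sample values in the usage note are arbitrary):

```python
def delta_k(o_k, y_k):
    """S52 output-layer error term: delta_k = (o_k - y_k) * y_k * (1 - y_k)."""
    return (o_k - y_k) * y_k * (1 - y_k)

def weight_update(w_prev, dw_prev, alpha, L, d_k, h_j):
    """One hidden-to-output weight step, following the claimed rule literally:
        dW(n)  = (alpha / (1 + L)) * (dW(n-1) + 1) * delta_k * h_j
        W(n+1) = W(n) + dW(n)
    Returns the new weight and the adjustment (kept for the next iteration)."""
    dw = (alpha / (1.0 + L)) * (dw_prev + 1.0) * d_k * h_j
    return w_prev + dw, dw
```

For example, with α = 0.1, L = 4 intermediate units, target o_k = 1 and output y_k = 0.5, the error term is δ_k = 0.125 and a zero-initialized weight with h_j = 1 moves by (0.1/5) × 1 × 0.125 × 1 = 0.0025.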
S54, according to the error E_k, feeding back to the convolutional neural network by the method of error minimization and computing the threshold adjustment:
Δθ_k(n) = (α / (1 + L)) × (Δθ_k(n−1) + 1) × δ_k;
δ_j = h_j (1 − h_j) Σ_{k=0}^{M-1} δ_k W_jk;
wherein δ_j denotes the error term of each hidden unit in the intermediate layer;
S55, adjusting the threshold according to the threshold adjustment:
θ_k(n+1) = θ_k(n) + Δθ_k(n);
S56, computing the total output error E:
E = Σ_k E_k;
wherein k = 1, 2, …, M;
S57, judging whether the total output error E satisfies E ≤ ε; if so, continuing with S6; if not, returning to S3.
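The stopping test of S56–S57 reduces to a few lines (a sketch only; the per-sample error uses the squared-error form of S51, and ε is whatever accuracy control parameter was chosen in S3):

```python
def sample_error(o, y):
    """S51: output error of one training image, E_k = 1/2 * sum_j (o_j - y_j)^2,
    computed over the target vector o and the actual output vector y."""
    return 0.5 * sum((oj - yj) ** 2 for oj, yj in zip(o, y))

def should_stop(per_sample_errors, epsilon):
    """S56-S57: total error E = sum(E_k); training stops once E <= epsilon,
    otherwise the method returns to S3 for another pass."""
    return sum(per_sample_errors) <= epsilon
```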
9. The convolutional-neural-network-based method for aircraft target recognition in remote sensing images as claimed in claim 8, characterized in that in said S6, the output layer of the test network uses the Softmax classification layer to produce the classification results for the n aircraft types, with the number of output neurons set to n.
CN201710526744.0A 2017-06-30 2017-06-30 Remote sensing images Aircraft Target Recognition based on convolutional neural networks Pending CN107330405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526744.0A CN107330405A (en) 2017-06-30 2017-06-30 Remote sensing images Aircraft Target Recognition based on convolutional neural networks

Publications (1)

Publication Number Publication Date
CN107330405A true CN107330405A (en) 2017-11-07

Family

ID=60198602

Country Status (1)

Country Link
CN (1) CN107330405A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096655A (en) * 2016-06-14 2016-11-09 厦门大学 A kind of remote sensing image airplane detection method based on convolutional neural networks
CN106557814A (en) * 2016-11-15 2017-04-05 成都通甲优博科技有限责任公司 A kind of road vehicle density assessment method and device
US9665799B1 (en) * 2016-01-29 2017-05-30 Fotonation Limited Convolutional neural network

Non-Patent Citations (6)

Title
SHEN W, ZHOU M, YANG F, et al.: "Multi-crop convolutional neural networks for lung nodule malignancy suspiciousness classification", Pattern Recognition *
WAN L, ZEILER M, ZHANG S, et al.: "Regularization of neural networks using DropConnect", International Conference on Machine Learning *
XU B, WANG N, CHEN T, et al.: "Empirical evaluation of rectified activations in convolutional network", arXiv preprint *
LIU Rongrong: "Design and implementation of handwritten digit recognition software based on convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology *
YANG Juanyu: "Research and implementation of object recognition based on convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology *
CHEN Xiangzhen: "Research on facial expression recognition algorithms based on deep learning", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (43)

Publication number Priority date Publication date Assignee Title
CN108009525B (en) * 2017-12-25 2018-10-12 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
CN108009525A (en) * 2017-12-25 2018-05-08 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
CN108334814A (en) * 2018-01-11 2018-07-27 浙江工业大学 A kind of AR system gesture identification methods based on convolutional neural networks combination user's habituation behavioural analysis
CN108334814B (en) * 2018-01-11 2020-10-30 浙江工业大学 Gesture recognition method of AR system
CN108229425A (en) * 2018-01-29 2018-06-29 浙江大学 A kind of identifying water boy method based on high-resolution remote sensing image
CN108470185A (en) * 2018-02-12 2018-08-31 北京佳格天地科技有限公司 The atural object annotation equipment and method of satellite image
CN108509986A (en) * 2018-03-16 2018-09-07 上海海事大学 Based on the Aircraft Target Recognition for obscuring constant convolutional neural networks
CN108549866A (en) * 2018-04-12 2018-09-18 上海海事大学 Remote sensing aeroplane recognition methods based on intensive convolutional neural networks
CN108805861A (en) * 2018-04-28 2018-11-13 中国人民解放军国防科技大学 Remote sensing image cloud detection method based on deep learning
CN109325395A (en) * 2018-04-28 2019-02-12 二十世纪空间技术应用股份有限公司 The recognition methods of image, convolutional neural networks model training method and device
CN108764316A (en) * 2018-05-18 2018-11-06 河海大学 Remote sensing images scene classification method based on depth convolutional neural networks and Multiple Kernel Learning
CN108764316B (en) * 2018-05-18 2022-08-26 河海大学 Remote sensing image scene classification method based on deep convolutional neural network and multi-core learning
CN109100710A (en) * 2018-06-26 2018-12-28 东南大学 A kind of Underwater targets recognition based on convolutional neural networks
CN110728627A (en) * 2018-07-16 2020-01-24 宁波舜宇光电信息有限公司 Image noise reduction method, device, system and storage medium
CN109444863A (en) * 2018-10-23 2019-03-08 广西民族大学 A kind of estimation method of the narrowband ultrasonic echo number based on convolutional neural networks
US11092968B2 (en) 2018-11-15 2021-08-17 Lingdong Technology (Beijing) Co. Ltd System and method for real-time supervised machine learning in on-site environment
WO2020097837A1 (en) * 2018-11-15 2020-05-22 Lingdong Technology (Beijing) Co.Ltd System and method for real-time supervised machine learning in on-site environment
US11682193B2 (en) 2018-11-15 2023-06-20 Lingdong Technology (Beijing) Co. Ltd. System and method for real-time supervised machine learning in on-site environment
CN109801234B (en) * 2018-12-28 2023-09-22 南京美乐威电子科技有限公司 Image geometry correction method and device
CN109801234A (en) * 2018-12-28 2019-05-24 南京美乐威电子科技有限公司 Geometric image correction method and device
CN110390244A (en) * 2019-03-18 2019-10-29 中国人民解放军61540部队 Remote sensing images element recognition methods based on deep learning
CN109959911A (en) * 2019-03-25 2019-07-02 清华大学 Multiple target autonomic positioning method and device based on laser radar
CN110096991A (en) * 2019-04-25 2019-08-06 西安工业大学 A kind of sign Language Recognition Method based on convolutional neural networks
CN110276269B (en) * 2019-05-29 2021-06-29 西安交通大学 Remote sensing image target detection method based on attention mechanism
CN110276269A (en) * 2019-05-29 2019-09-24 西安交通大学 A kind of Remote Sensing Target detection method based on attention mechanism
CN110197233A (en) * 2019-06-05 2019-09-03 四川九洲电器集团有限责任公司 A method of aircraft classification is carried out using track
CN110415709A (en) * 2019-06-26 2019-11-05 深圳供电局有限公司 Transformer working condition recognition methods based on Application on Voiceprint Recognition model
CN110415709B (en) * 2019-06-26 2022-01-25 深圳供电局有限公司 Transformer working state identification method based on voiceprint identification model
CN110288035A (en) * 2019-06-28 2019-09-27 海南树印网络科技有限公司 A kind of online autonomous learning method and system of intelligent garbage bin
CN110569889A (en) * 2019-08-21 2019-12-13 广西电网有限责任公司电力科学研究院 Convolutional neural network image classification method based on L2 normalization
CN111126189A (en) * 2019-12-10 2020-05-08 北京航天世景信息技术有限公司 Target searching method based on remote sensing image
CN111091132B (en) * 2020-03-19 2021-01-15 腾讯科技(深圳)有限公司 Image recognition method and device based on artificial intelligence, computer equipment and medium
CN111091132A (en) * 2020-03-19 2020-05-01 腾讯科技(深圳)有限公司 Image recognition method and device based on artificial intelligence, computer equipment and medium
CN111681228A (en) * 2020-06-09 2020-09-18 创新奇智(合肥)科技有限公司 Flaw detection model, training method, detection method, apparatus, device, and medium
CN112115973A (en) * 2020-08-18 2020-12-22 吉林建筑大学 Convolutional neural network based image identification method
CN112115973B (en) * 2020-08-18 2022-07-19 吉林建筑大学 Convolutional neural network based image identification method
CN112651927A (en) * 2020-12-03 2021-04-13 北京信息科技大学 Raman spectrum intelligent identification method based on convolutional neural network and support vector machine
CN112668454A (en) * 2020-12-25 2021-04-16 南京华格信息技术有限公司 Bird micro-target identification method based on multi-sensor fusion
CN112966555B (en) * 2021-02-02 2022-06-14 武汉大学 Remote sensing image airplane identification method based on deep learning and component prior
CN112966555A (en) * 2021-02-02 2021-06-15 武汉大学 Remote sensing image airplane identification method based on deep learning and component prior
CN112906523A (en) * 2021-02-04 2021-06-04 上海航天控制技术研究所 Hardware accelerated deep learning target machine type identification method
CN113052106A (en) * 2021-04-01 2021-06-29 重庆大学 Airplane take-off and landing runway identification method based on PSPNet network
CN113052106B (en) * 2021-04-01 2022-11-04 重庆大学 Airplane take-off and landing runway identification method based on PSPNet network

Similar Documents

Publication Publication Date Title
CN107330405A (en) Remote sensing images Aircraft Target Recognition based on convolutional neural networks
Zhang et al. Can deep learning identify tomato leaf disease?
CN108875674B (en) Driver behavior identification method based on multi-column fusion convolutional neural network
CN107273845B (en) Facial expression recognition method based on confidence region and multi-feature weighted fusion
CN106682569A (en) Fast traffic signboard recognition method based on convolution neural network
CN104598890A A kind of Human bodys' response method based on RGB-D videos
JP4083469B2 (en) Pattern recognition method using hierarchical network
CN104182772A (en) Gesture recognition method based on deep learning
CN108021947A (en) A kind of layering extreme learning machine target identification method of view-based access control model
CN103996056A (en) Tattoo image classification method based on deep learning
US20200097766A1 (en) Multi-scale text filter conditioned generative adversarial networks
CN107492121A (en) A kind of two-dimension human body bone independent positioning method of monocular depth video
CN104636732B (en) A kind of pedestrian recognition method based on the deep belief network of sequence
CN103065158B (en) The behavior recognition methods of the ISA model based on relative gradient
CN106651915A (en) Target tracking method of multi-scale expression based on convolutional neural network
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN106296734B (en) Method for tracking target based on extreme learning machine and boosting Multiple Kernel Learnings
US20200304729A1 (en) Video processing using a spectral decomposition layer
CN116343330A (en) Abnormal behavior identification method for infrared-visible light image fusion
Gao et al. An end-to-end broad learning system for event-based object classification
Ren et al. Ship recognition based on Hu invariant moments and convolutional neural network for video surveillance
Reddy et al. Handwritten Hindi character recognition using deep learning techniques
CN106127230A (en) Image-recognizing method based on human visual perception
CN114492634A (en) Fine-grained equipment image classification and identification method and system
Cai et al. Cloud classification of satellite image based on convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171107