CN107316004A - Space Target Recognition based on deep learning - Google Patents

Space Target Recognition based on deep learning

Info

Publication number
CN107316004A
CN107316004A
Authority
CN
China
Prior art keywords
data
layers
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710417736.2A
Other languages
Chinese (zh)
Inventor
夏勇
曾皓月
张艳宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710417736.2A priority Critical patent/CN107316004A/en
Publication of CN107316004A publication Critical patent/CN107316004A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a space target recognition method based on deep learning, addressing the technical problem of the poor practicality of existing space target recognition methods. The technical scheme first builds a 9-layer deep convolutional network model; then, on that network, it identifies the best data augmentation methods, combines the data produced by several of the best methods, and uses the optimal combined data in both the training and testing of the model to complete space target recognition. The deep learning model automatically discovers distributed feature representations from the data, features better suited to classification, which substantially improves recognition accuracy. Meanwhile, because space target data sets are limited by their imaging environment and therefore constitute a typical small-sample problem, virtual data generated by data augmentation alleviates the over-fitting to which deep models are prone on small samples, giving the method good practicality.

Description

Space Target Recognition based on deep learning
Technical field
The present invention relates to a space target recognition method, and more particularly to a space target recognition method based on deep learning.
Background technology
As a critical task for ensuring space security and supporting space exploration, space target recognition aims to detect and track the natural meteoroids and man-made objects distributed in near-Earth space (including space stations, spacecraft, active and defunct artificial satellites, launch vehicles, fuel tanks, and their debris). In recent years this task has been widely studied, and many related solutions have emerged. For example, in the paper "Research on method of space target recognition in digital image" (Image and Signal Processing (CISP), 2012 5th International Congress on, 2012, pp. 1303-1306), F. Wu computes scale-invariant feature transform descriptors based on the object contour and recognizes targets by matching feature points. At present, most related schemes and techniques extract features by wavelet decomposition or singular value decomposition (SVD), reduce feature dimensionality with kernel principal component analysis (KPCA), and finally recognize targets with a support vector machine (SVM) or k-nearest-neighbors (KNN) classifier. However, these methods treat feature extraction and feature classification as mutually independent steps, and the quality of the features becomes the bottleneck of overall system performance. A large amount of work has therefore been devoted to finding features with the best discriminative power, but the semantic gap between visual features and target semantics makes it difficult for this work to achieve good results.
Since Hinton's breakthrough on ImageNet 2012, deep learning, and in particular the deep convolutional neural network (DCNN), has become the most successful image classification technique. Compared with traditional techniques, the DCNN provides a unified framework that jointly learns feature extraction and classification, thereby avoiding tedious manual feature extraction and feature engineering. However, although deep models improve classification accuracy, many practical applications cannot match Hinton's classification results; avoiding the over-fitting caused by insufficient training data is the main way to solve this problem.
The content of the invention
To overcome the poor practicality of existing space target recognition methods, the present invention provides a space target recognition method based on deep learning. The method first builds a 9-layer deep convolutional network model; then, on that network, it identifies the best data augmentation methods, combines the data produced by several of the best methods, and uses the optimal combined data in both the training and testing of the model to complete space target recognition. The deep learning model automatically discovers distributed feature representations from the data, features better suited to classification, which substantially improves recognition accuracy. Meanwhile, because space target data sets are limited by their imaging environment and therefore constitute a typical small-sample problem, virtual data generated by data augmentation alleviates the over-fitting to which deep models are prone on small samples, giving the method good practicality.
The technical solution adopted by the present invention to solve the technical problem is a space target recognition method based on deep learning, characterized by comprising the following steps:
Step 1: Build a 9-layer deep convolutional network according to the scale of the data set, comprising 3 convolutional layers, 3 pooling layers and 3 fully connected layers. In each convolutional layer, the input image is convolved with a linear filter, a bias term is added, and the feature maps of the layer are obtained through a nonlinear activation function, expressed as:

X_j^l = f\left(\sum_{i \in M_j} X_i^{l-1} * k_{ij}^l + b_j^l\right)    (1)

Here, M_j denotes the set of input feature maps, k_{ij}^l denotes a convolution kernel in the l-th layer, b_j^l is the bias term of the j-th convolution kernel in the l-th layer, X_j^l is the j-th feature map generated in the l-th layer, and f is the activation function.
Each convolutional layer is followed by a pooling layer that performs down-sampling, expressed as:

X_j^l = f\left(\beta_j^l \, \mathrm{down}(X_i^{l-1})\right)    (2)

Here, down(·) denotes the down-sampling operation, X_j^l is the j-th feature map generated in the l-th layer, and β_j^l and b_j^l denote the multiplicative bias and additive bias, respectively.
Step 2: On the constructed deep convolutional network, test the 5 first-order transform methods for data augmentation and the 26 multi-order transform methods produced by compounding them; find the best transform method and use it to generate training data of 8 times the original data and test data of 4 times.
Step 3: From the 8-fold training data generated by the five most effective transforms, select any three to combine; choose the best combined transform to produce augmented data of 24 times the original training data for training the DCNN model.
Step 4: Apply the best transform combination obtained on the training data to the test data as well, generating 12-fold test data. For each generated test sample S_t, compute the classification score S_t^{(k)} as the probability of class k, and aggregate the scores with the following softmax function to obtain the final classification result for each original test sample:

S_{smax}^{(k)} := \log \sum_{t=1}^{T} \exp S_t^{(k)}    (3)
The beneficial effects of the invention are as follows. The method first builds a 9-layer deep convolutional network model; then, on that network, it identifies the best data augmentation methods, combines the data produced by several of the best methods, and uses the optimal combined data in both the training and testing of the model to complete space target recognition. The deep learning model automatically discovers distributed feature representations from the data, features better suited to classification, which substantially improves recognition accuracy. Meanwhile, because space target data sets are limited by their imaging environment and therefore constitute a typical small-sample problem, virtual data generated by data augmentation alleviates the over-fitting to which deep models are prone on small samples, giving the method good practicality.
The inventive method learns a layered feature representation of the image through a deep convolutional neural network. Because the network is trained end to end, low-level features are combined into more abstract high-level representations (features or categories); through layer-by-layer feature transformation, the amount of data is significantly reduced while useful structural information is retained, which avoids the time cost of manual feature extraction and makes classification or prediction easier. Higher recognition precision is thereby achieved: without data augmentation, the recognition accuracy on the STK space target database reaches 95.06%. At the same time, because data augmentation enlarges the training data set, the over-fitting that the small-sample problem may bring is alleviated, further improving recognition, and a final recognition accuracy of 99.90% is achieved.
The present invention is elaborated below with reference to an embodiment.
Embodiment
The space target recognition method based on deep learning of the invention comprises the following specific steps.
The space target recognition problem solved by the present invention is based on the STK space target data set, which is generated by simulation with the STK (System Tool Kit) satellite toolbox; the original images are degraded with random degrees of motion blur and defocus blur to simulate a real space imaging environment. The data set contains 400 images in total: four classes of grayscale satellite images, with 100 different poses of each satellite per class.
The inventive method consists of two parts: building a 9-layer deep convolutional neural network, and selecting the best data augmentation methods.
Step 1: building depth convolutional network model.
On the basis of the classical LeNet-5, the present invention constructs a 9-layer deep convolutional network comprising 3 convolutional layers, 3 pooling layers and 3 fully connected layers.
In each convolutional layer, the input image is convolved with a linear filter, a bias term is added, and the feature maps of the layer are obtained through a nonlinear activation function, expressed as:

X_j^l = f\left(\sum_{i \in M_j} X_i^{l-1} * k_{ij}^l + b_j^l\right)    (1)

Here, M_j denotes the set of input feature maps, k_{ij}^l denotes a convolution kernel in the l-th layer, b_j^l is the bias term of the j-th convolution kernel in the l-th layer, X_j^l is the j-th feature map generated in the l-th layer, and f is the activation function. In the DCNN model of the invention, the first convolutional layer convolves the 32×32×3 input with 32 kernels of size 5×5×3, the second convolutional layer uses 32 kernels of 5×5×32, and the third uses 32 kernels of 4×4×32. All convolutional layers have a stride of 1 pixel, and the activation function used is the ReLU function.
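As a concrete illustration of the feature-map computation above (a sketch, not the patent's implementation), the following pure-Python snippet sums the valid convolutions of the input maps, adds the bias term, and applies ReLU as the activation f. The tiny 3×3 input, 2×2 kernel and bias value are made-up examples; CNN "convolution" is implemented here as cross-correlation, as is conventional.

```python
# Sketch of equation-style feature-map computation: sum the convolutions of
# the input maps with their kernels, add the bias term, apply ReLU.
def conv2d_valid(x, k):
    """Valid 2-D convolution (cross-correlation) of map x with kernel k."""
    kh, kw = len(k), len(k[0])
    oh, ow = len(x) - kh + 1, len(x[0]) - kw + 1
    return [[sum(x[r + u][c + v] * k[u][v]
                 for u in range(kh) for v in range(kw))
             for c in range(ow)]
            for r in range(oh)]

def feature_map(inputs, kernels, bias):
    """Sum the convolutions over all input maps, add the bias, apply ReLU."""
    acc = None
    for x, k in zip(inputs, kernels):
        c = conv2d_valid(x, k)
        acc = c if acc is None else [[a + b for a, b in zip(ra, rb)]
                                     for ra, rb in zip(acc, c)]
    return [[max(0.0, v + bias) for v in row] for row in acc]

# Made-up example values: one 3x3 input map, one 2x2 kernel, bias 0.5.
x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
k = [[1.0, 0.0],
     [0.0, 1.0]]
fm = feature_map([x], [k], bias=0.5)   # -> [[6.5, 8.5], [12.5, 14.5]]
```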
Each convolutional layer is followed by a pooling layer that performs down-sampling, expressed as:

X_j^l = f\left(\beta_j^l \, \mathrm{down}(X_i^{l-1})\right)    (2)

Here, down(·) denotes the down-sampling operation, X_j^l is the j-th feature map generated in the l-th layer, and β_j^l and b_j^l denote the multiplicative bias and additive bias, respectively. The pooling layers reduce computational complexity and provide robustness to spatial variation.
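The pooling expression above can be sketched in a few lines of stdlib-only Python. Here down(·) is taken as non-overlapping 2×2 mean down-sampling and f as the identity; the specific window and the bias values are illustrative assumptions, since the text does not fix them.

```python
# Sketch of a pooling layer: multiplicative bias * down-sampled map + additive
# bias. down() is assumed to be non-overlapping 2x2 mean down-sampling.
def down_2x2_mean(x):
    """Non-overlapping 2x2 mean down-sampling of a 2-D map (nested lists)."""
    return [[(x[r][c] + x[r][c + 1] + x[r + 1][c] + x[r + 1][c + 1]) / 4.0
             for c in range(0, len(x[0]) - 1, 2)]
            for r in range(0, len(x) - 1, 2)]

def pool_layer(x, beta=1.0, b=0.0):
    """Apply the multiplicative bias beta and additive bias b after down()."""
    return [[beta * v + b for v in row] for row in down_2x2_mean(x)]

x = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [9.0, 10.0, 11.0, 12.0],
     [13.0, 14.0, 15.0, 16.0]]
pooled = pool_layer(x)   # -> [[3.5, 5.5], [11.5, 13.5]]
```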
The first two fully connected layers each contain 64 neurons, with Dropout between them to avoid over-fitting. The last layer is a 4-dimensional softmax layer, which gives the probability that each image belongs to each of the four classes.
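The layer dimensions of the network described above can be traced with a few lines of arithmetic. This is a sketch under stated assumptions: the text does not specify the pooling window or the convolution padding, so 2×2 stride-2 pooling and dimension-preserving ("same") stride-1 convolution are assumed here so that the stated 32×32×3 input and 4-way softmax output connect.

```python
# Shape trace of the 9-layer network (3 conv + 3 pool + 3 fully connected).
# Assumptions (not stated in the text): "same" padding for the stride-1
# convolutions and 2x2 stride-2 pooling, so each conv/pool pair halves h, w.
def conv_same(h, w, _c, n_kernels):
    return h, w, n_kernels      # stride-1 "same" convolution keeps h and w

def pool2(h, w, c):
    return h // 2, w // 2, c    # 2x2 down-sampling halves h and w

shape = (32, 32, 3)             # input size from the text
for n_kernels in (32, 32, 32):  # 32 kernels in each of the 3 conv layers
    shape = pool2(*conv_same(*shape, n_kernels))
flat = shape[0] * shape[1] * shape[2]   # features entering the FC layers
fc = [64, 64, 4]                # two 64-neuron FC layers, 4-way softmax
```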
Step 2: data augmentation.
Data augmentation alleviates over-fitting caused by insufficient training data by artificially enlarging the training data set. For each image, the present invention generates N enhanced images through label-preserving transforms; the transforms used here are rotation, scaling, cropping, homography transformation and noise addition.
a) Rotation: to imitate changes of camera orientation and motion of the space target, the present invention generates N images by rotating the image about its center by 2kπ/N, where k ∈ {1, 2, 3, ..., N} indexes the enhanced images.
b) Scaling: samples of different sizes are generated by bilinear interpolation. For each image, N scaling factors S are randomly generated in the interval [0.8, 1.2]; an image S times the size of the original is generated and then cropped back to the original size.
c) Cropping: the present invention selects N random 32×32 image blocks from the 45×45 image and trains the network directly on them.
d) Homography transformation: the present invention simulates changes of camera viewpoint with a perspective mapping, randomly selecting N quadrilateral regions containing the main part of the image and mapping each to a square image block of the original size.
e) Noise addition: adding noise to an image can be regarded as a degree of regularization and is thus a common way to reduce over-fitting. Salt-and-pepper and Gaussian noise of randomly varying degrees are added to generate N images.
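Two of the label-preserving transforms above, random cropping (c) and salt-and-pepper noise (e), can be sketched with the standard library alone. The 5% noise rate, the nested-list image representation and the function names are illustrative choices, not taken from the patent.

```python
import random

# Sketch of two label-preserving transforms: random 32x32 cropping from a
# 45x45 image, and salt-and-pepper noise on pixel values in [0, 1].
def random_crop(img, size=32):
    """Cut a random size x size block out of a larger image."""
    r = random.randrange(len(img) - size + 1)
    c = random.randrange(len(img[0]) - size + 1)
    return [row[c:c + size] for row in img[r:r + size]]

def salt_and_pepper(img, rate=0.05):
    """Replace roughly `rate` of the pixels with pure black or white."""
    return [[random.choice([0.0, 1.0]) if random.random() < rate else v
             for v in row]
            for row in img]

def augment(img, n):
    """Generate n enhanced copies of one training image (a 2nd-order
    crop+noise transform applied n times with fresh randomness)."""
    return [salt_and_pepper(random_crop(img)) for _ in range(n)]

img = [[0.5] * 45 for _ in range(45)]
batch = augment(img, 8)   # 8-fold augmentation, as in step 2
```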
In a real space environment, many factors may act simultaneously to degrade an image, so K transforms (multiple factors), i.e. a K-th-order transform, can be applied to an image in succession to generate images. On the basis of the five 1st-order (single-factor) transforms above, 10 2nd-order, 10 3rd-order, 5 4th-order and 1 5th-order transforms can also be obtained, for 31 transforms in total. Thus, if N enhanced images are generated from each training sample, 31 different N-fold training sets can be obtained. To find the most effective transform, the present invention trains the constructed 9-layer DCNN on the 8-fold training data generated by each transform. The five best-performing transforms are as follows:
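The transform counts in this paragraph follow directly from combinatorics (choosing k of the 5 first-order transforms to compose gives C(5, k) k-th-order transforms) and can be checked mechanically:

```python
from itertools import combinations

# Composing k of the 5 first-order transforms yields C(5, k) k-th-order
# transforms: 5 + 10 + 10 + 5 + 1 = 31 in total, 26 of order two or higher.
base = ["rotation", "scaling", "cropping", "homography", "noise"]
counts = [len(list(combinations(base, k))) for k in range(1, 6)]
total = sum(counts)          # 31 transforms altogether
multi = total - counts[0]    # 26 multi-order transforms
```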
Rank  Transform                               Accuracy
1     T1: rotation + cropping                 98.87%
2     T2: rotation + homography + noise       98.71%
3     T3: rotation                            98.63%
4     T4: rotation + homography               98.56%
5     T5: rotation + homography + cropping    98.47%
In addition to using a single transform, the present invention also trains models on combinations of the enhancement samples generated by multiple transforms. However, exhaustively evaluating every combination of the 31 transforms is computationally infeasible, so the present invention only combines the training samples generated by the top five transforms in the table above. Among the combinations evaluated, the data augmentation strategy finally chosen is to generate 24-fold training data with the best transform combination, namely three groups of 8-fold training data generated by T1 (rotation + cropping), T2 (rotation + homography + noise) and T3 (rotation).
Besides being applied to the training data, data augmentation can also be used on the test data; the present invention therefore uses data augmentation during both training and testing. 12-fold enhanced test images are generated with the above optimal transform combination: three groups of 4-fold test data generated by T1 (rotation + cropping), T2 (rotation + homography + noise) and T3 (rotation). For each generated test sample S_t, the classification score S_t^{(k)} (the probability of class k) is computed, and the scores are aggregated with the following softmax function to obtain the final classification result for each original test sample:

S_{smax}^{(k)} := \log \sum_{t=1}^{T} \exp S_t^{(k)}    (3)
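The score aggregation described here can be sketched in a few lines: the per-class scores of the T augmented copies of one test image are combined with a log-sum-exp, and the arg-max over classes gives the final label. The example score values below are made up.

```python
import math

# Aggregate the per-class scores of the T augmented copies of one test
# image with log-sum-exp, then predict the class with the largest total.
def aggregate(scores):
    """scores[t][k]: class-k probability of the t-th augmented copy."""
    n_classes = len(scores[0])
    return [math.log(sum(math.exp(s[k]) for s in scores))
            for k in range(n_classes)]

def predict(scores):
    """Return the index of the class with the largest aggregated score."""
    agg = aggregate(scores)
    return max(range(len(agg)), key=agg.__getitem__)

# Made-up scores: three augmented copies of one test image, four classes.
scores = [[0.70, 0.10, 0.10, 0.10],
          [0.60, 0.20, 0.10, 0.10],
          [0.15, 0.65, 0.10, 0.10]]
label = predict(scores)   # class 0 has the largest aggregated score
```

Note that because log and sum-of-exp are monotone, a class that dominates most augmented copies keeps the largest aggregated score even if one copy disagrees, which is the point of test-time augmentation here.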

Claims (1)

1. A space target recognition method based on deep learning, characterized by comprising the following steps:
Step 1: Build a 9-layer deep convolutional network according to the scale of the data set, comprising 3 convolutional layers, 3 pooling layers and 3 fully connected layers; in each convolutional layer, the input image is convolved with a linear filter, a bias term is added, and the feature maps of the layer are obtained through a nonlinear activation function, expressed as:
X_j^l = f\left(\sum_{i \in M_j} X_i^{l-1} * k_{ij}^l + b_j^l\right)    (1)
where M_j denotes the set of input feature maps, k_{ij}^l denotes a convolution kernel in the l-th layer, b_j^l is the bias term of the j-th convolution kernel in the l-th layer, X_j^l is the j-th feature map generated in the l-th layer, and f is the activation function;
each convolutional layer is followed by a pooling layer that performs down-sampling, expressed as:
X_j^l = f\left(\beta_j^l \, \mathrm{down}(X_i^{l-1})\right)    (2)
where down(·) denotes the down-sampling operation, X_j^l is the j-th feature map generated in the l-th layer, and β_j^l and b_j^l denote the multiplicative bias and additive bias, respectively;
Step 2: On the constructed deep convolutional network, test the 5 first-order transform methods for data augmentation and the 26 multi-order transform methods produced by compounding them; find the best transform method and use it to generate training data of 8 times the original data and test data of 4 times;
Step 3: From the 8-fold training data generated by the five most effective transforms, select any three to combine; choose the best combined transform to produce augmented data of 24 times the original training data for training the DCNN model;
Step 4: Apply the best transform combination obtained on the training data to the test data as well, generating 12-fold test data; for each generated test sample S_t, compute the classification score S_t^{(k)} as the probability of class k, and aggregate the scores with the following softmax function to obtain the final classification result for each original test sample:
S_{smax}^{(k)} := \log \sum_{t=1}^{T} \exp S_t^{(k)}    (3)
CN201710417736.2A 2017-06-06 2017-06-06 Space Target Recognition based on deep learning Pending CN107316004A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710417736.2A CN107316004A (en) 2017-06-06 2017-06-06 Space Target Recognition based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710417736.2A CN107316004A (en) 2017-06-06 2017-06-06 Space Target Recognition based on deep learning

Publications (1)

Publication Number Publication Date
CN107316004A true CN107316004A (en) 2017-11-03

Family

ID=60183606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710417736.2A Pending CN107316004A (en) 2017-06-06 2017-06-06 Space Target Recognition based on deep learning

Country Status (1)

Country Link
CN (1) CN107316004A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977677A (en) * 2017-11-27 2018-05-01 深圳市唯特视科技有限公司 A kind of multi-tag pixel classifications method in the reconstruction applied to extensive city
CN108363478A (en) * 2018-01-09 2018-08-03 北京大学 For wearable device deep learning application model load sharing system and method
CN108573284A (en) * 2018-04-18 2018-09-25 陕西师范大学 Deep learning facial image extending method based on orthogonal experiment analysis
CN108921070A (en) * 2018-06-22 2018-11-30 北京旷视科技有限公司 Image processing method, model training method and corresponding intrument
CN110321864A (en) * 2019-07-09 2019-10-11 西北工业大学 Remote sensing images explanatory note generation method based on multiple dimensioned cutting mechanism
CN111160481A (en) * 2019-12-31 2020-05-15 苏州安智汽车零部件有限公司 Advanced learning-based adas target detection method and system
CN111722220A (en) * 2020-06-08 2020-09-29 北京理工大学 Rocket target identification system based on parallel heterogeneous sensor
CN111783806A (en) * 2019-04-04 2020-10-16 千寻位置网络有限公司 Deep learning model optimization method and device and server
CN111832666A (en) * 2020-09-15 2020-10-27 平安国际智慧城市科技股份有限公司 Medical image data amplification method, device, medium, and electronic apparatus
CN112580407A (en) * 2019-09-30 2021-03-30 南京理工大学 Space target component identification method based on lightweight neural network model
WO2021164066A1 (en) * 2020-02-18 2021-08-26 中国电子科技集团公司第二十八研究所 Convolutional neural network-based target group distribution mode determination method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030002731A1 (en) * 2001-05-28 2003-01-02 Heiko Wersing Pattern recognition with hierarchical networks
US20150032449A1 (en) * 2013-07-26 2015-01-29 Nuance Communications, Inc. Method and Apparatus for Using Convolutional Neural Networks in Speech Recognition
US20150161995A1 (en) * 2013-12-06 2015-06-11 Nuance Communications, Inc. Learning front-end speech recognition parameters within neural network training
CN104978580A (en) * 2015-06-15 2015-10-14 国网山东省电力公司电力科学研究院 Insulator identification method for unmanned aerial vehicle polling electric transmission line
CN105809121A (en) * 2016-03-03 2016-07-27 电子科技大学 Multi-characteristic synergic traffic sign detection and identification method
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN106570467A (en) * 2016-10-25 2017-04-19 南京南瑞集团公司 Convolutional neural network-based worker absence-from-post detection method
CN106570474A (en) * 2016-10-27 2017-04-19 南京邮电大学 Micro expression recognition method based on 3D convolution neural network
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030002731A1 (en) * 2001-05-28 2003-01-02 Heiko Wersing Pattern recognition with hierarchical networks
US20150032449A1 (en) * 2013-07-26 2015-01-29 Nuance Communications, Inc. Method and Apparatus for Using Convolutional Neural Networks in Speech Recognition
US20150161995A1 (en) * 2013-12-06 2015-06-11 Nuance Communications, Inc. Learning front-end speech recognition parameters within neural network training
CN104978580A (en) * 2015-06-15 2015-10-14 国网山东省电力公司电力科学研究院 Insulator identification method for unmanned aerial vehicle polling electric transmission line
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN105809121A (en) * 2016-03-03 2016-07-27 电子科技大学 Multi-characteristic synergic traffic sign detection and identification method
CN106570467A (en) * 2016-10-25 2017-04-19 南京南瑞集团公司 Convolutional neural network-based worker absence-from-post detection method
CN106570474A (en) * 2016-10-27 2017-04-19 南京邮电大学 Micro expression recognition method based on 3D convolution neural network
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAN CIRESAN et al.: "Multi-column Deep Neural Networks for Image Classification", Technical Report *
Ye Lang: "Research on Face Recognition Based on Convolutional Neural Networks", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977677A (en) * 2017-11-27 2018-05-01 深圳市唯特视科技有限公司 A kind of multi-tag pixel classifications method in the reconstruction applied to extensive city
CN108363478A (en) * 2018-01-09 2018-08-03 北京大学 For wearable device deep learning application model load sharing system and method
CN108573284A (en) * 2018-04-18 2018-09-25 陕西师范大学 Deep learning facial image extending method based on orthogonal experiment analysis
CN108921070A (en) * 2018-06-22 2018-11-30 北京旷视科技有限公司 Image processing method, model training method and corresponding intrument
CN111783806A (en) * 2019-04-04 2020-10-16 千寻位置网络有限公司 Deep learning model optimization method and device and server
CN110321864A (en) * 2019-07-09 2019-10-11 西北工业大学 Remote sensing images explanatory note generation method based on multiple dimensioned cutting mechanism
CN112580407A (en) * 2019-09-30 2021-03-30 南京理工大学 Space target component identification method based on lightweight neural network model
CN112580407B (en) * 2019-09-30 2023-06-20 南京理工大学 Space target part identification method based on lightweight neural network model
CN111160481A (en) * 2019-12-31 2020-05-15 苏州安智汽车零部件有限公司 Advanced learning-based adas target detection method and system
CN111160481B (en) * 2019-12-31 2024-05-10 苏州安智汽车零部件有限公司 Adas target detection method and system based on deep learning
WO2021164066A1 (en) * 2020-02-18 2021-08-26 中国电子科技集团公司第二十八研究所 Convolutional neural network-based target group distribution mode determination method and device
CN111722220A (en) * 2020-06-08 2020-09-29 北京理工大学 Rocket target identification system based on parallel heterogeneous sensor
CN111722220B (en) * 2020-06-08 2022-08-26 北京理工大学 Rocket target identification system based on parallel heterogeneous sensor
CN111832666A (en) * 2020-09-15 2020-10-27 平安国际智慧城市科技股份有限公司 Medical image data amplification method, device, medium, and electronic apparatus

Similar Documents

Publication Publication Date Title
CN107316004A (en) Space Target Recognition based on deep learning
Zhu et al. Data Augmentation using Conditional Generative Adversarial Networks for Leaf Counting in Arabidopsis Plants.
Zhang et al. Hyperspectral classification based on lightweight 3-D-CNN with transfer learning
CN105740799B (en) Classification of hyperspectral remote sensing image method and system based on the selection of three-dimensional Gabor characteristic
CN112149504B (en) Motion video identification method combining mixed convolution residual network and attention
Zhang et al. Scene classification via a gradient boosting random convolutional network framework
CN107944442B (en) Based on the object test equipment and method for improving convolutional neural networks
Dai et al. Crop leaf disease image super-resolution and identification with dual attention and topology fusion generative adversarial network
CN109886881B (en) Face makeup removal method
CN107506740A Human behavior recognition method based on three-dimensional convolutional neural network and transfer learning model
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN110472627A (en) One kind SAR image recognition methods end to end, device and storage medium
CN107247930A (en) SAR image object detection method based on CNN and Selective Attention Mechanism
CN108509910A (en) Deep learning gesture identification method based on fmcw radar signal
CN104751420B (en) A kind of blind restoration method based on rarefaction representation and multiple-objection optimization
Zhang et al. Symmetric all convolutional neural-network-based unsupervised feature extraction for hyperspectral images classification
CN103996047A (en) Hyperspectral image classification method based on compression spectrum clustering integration
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN113627472A (en) Intelligent garden defoliating pest identification method based on layered deep learning model
CN105894013A (en) Method for classifying polarized SAR image based on CNN and SMM
CN117079098A (en) Space small target detection method based on position coding
Jeny et al. FoNet-Local food recognition using deep residual neural networks
CN110503157B (en) Image steganalysis method of multitask convolution neural network based on fine-grained image
Qiao et al. LiteSCANet: An efficient lightweight network based on spectral and channel-wise attention for hyperspectral image classification
Wang et al. CNN Hyperparameter optimization based on CNN visualization and perception hash algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20171103