CN105426919B - Image classification method based on saliency-guided unsupervised feature learning - Google Patents

Image classification method based on saliency-guided unsupervised feature learning

Info

Publication number
CN105426919B
CN105426919B (application CN201510821480.2A)
Authority
CN
China
Prior art keywords
image
feature
pixel
value
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510821480.2A
Other languages
Chinese (zh)
Other versions
CN105426919A (en)
Inventor
陈霜霜
刘惠义
曾晓勤
孟志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201510821480.2A priority Critical patent/CN105426919B/en
Publication of CN105426919A publication Critical patent/CN105426919A/en
Application granted granted Critical
Publication of CN105426919B publication Critical patent/CN105426919B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses an image classification method based on saliency-guided unsupervised feature learning, belonging to the fields of machine learning and computer vision. The method comprises saliency-guided pixel sampling, unsupervised feature learning, image convolution, local contrast normalization, spatial pyramid pooling, center-prior fusion, and image classification. Saliency detection is used to sample representative pixels from the image data set, and the representative pixels are trained with a sparse autoencoder, an unsupervised feature learning method, to obtain high-quality image features. The features of the training and test sets are obtained by convolving the images with the learned features; the convolutional features then undergo local contrast normalization and spatial pyramid pooling, the pooled features are fused with center-prior features, and the images are classified with a liblinear classifier. The method obtains efficient and robust image features and can significantly improve classification accuracy on multi-class images.

Description

Image classification method based on saliency-guided unsupervised feature learning
Technical field
The present invention relates to an image classification method based on saliency-guided unsupervised feature learning, and belongs to the technical fields of machine learning and computer vision.
Background technology
With the development of multimedia technology, image classification has become a research focus in computer vision. Image classification assigns an image to one of a set of predefined categories according to its attributes; how to represent an image effectively is the key to improving classification accuracy, and the selection and extraction of features remain the main open problems in image classification. With the rapid development of the mobile Internet, human society has entered the big-data era. Although traditional hand-engineered features such as SIFT and HOG can extract some characteristics of an image and have achieved good results in image classification, such manually designed features have inherent defects, and traditional supervised feature learning, which relies on manually labeled data, appears outdated in the big-data era.
The content of the invention
To remedy the deficiencies of the prior art, the object of the present invention is to provide an image classification method based on saliency-guided unsupervised feature learning, which incorporates methods and theory from computer vision into a deep learning network structure to improve the effective representation of image features and thereby achieve better classification results.
In order to achieve the above object, the present invention adopts the following technical scheme:
An image classification method based on saliency-guided unsupervised feature learning, characterized by comprising the following steps:
1) Saliency-guided pixel sampling: image pixels are sampled using a saliency detection algorithm; by obtaining the saliency map of each image, pixels that are representative of the image are collected;
2) Unsupervised feature learning: the representative pixels are trained with a sparse autoencoder to obtain image features;
3) Image convolution: the training samples and test samples in the image data set are each convolved with the image features from step 2);
4) Local contrast normalization: the convolutional features of the training and test samples obtained in step 3) undergo local subtractive and divisive normalization;
5) Spatial pyramid pooling: average pooling is applied to the convolutional image features obtained in step 4) at three different spatial scales;
6) Center-prior fusion: the center-prior values of the training and test samples in the image data set are computed and fused with the multi-scale pooled features of step 5);
7) Image classification: a classifier is trained with the feature values of the training samples obtained in step 6), and the feature values of the test samples obtained in step 6) are fed into the trained classifier to classify the images.
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 1) are:
1.1) The context-aware saliency detection algorithm is used to compute the saliency map of each image in the training set; every image in the data set has the same resolution;
1.2) The pixels of each saliency map are sorted in descending order of gray value;
1.3) 64 pixels are chosen from each saliency map according to gray value: 50 positive pixels from the top 5% and 14 negative pixels from the bottom 30%;
1.4) The coordinates [X, Y] of the qualifying pixels in each saliency map are computed, and the corresponding positive and negative pixels in the original RGB image are located from these coordinates. Each pixel is taken as one sample, yielding the sample set for unsupervised feature learning.
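Steps 1.1)-1.4) can be sketched as follows. This is an illustrative reconstruction in Python: the function name and the random tie-breaking among qualifying pixels are assumptions, since the patent does not say how the 50 and 14 pixels are picked from their respective bands.

```python
import numpy as np

def sample_salient_pixels(saliency, n_pos=50, n_neg=14,
                          top_frac=0.05, bottom_frac=0.30, rng=None):
    """Pick positive pixels from the top 5% and negative pixels from the
    bottom 30% of a saliency map, mirroring steps 1.2)-1.4)."""
    rng = np.random.default_rng(rng)
    h, w = saliency.shape
    order = np.argsort(saliency, axis=None)[::-1]   # descending gray value
    n = h * w
    top = order[: int(n * top_frac)]                # most salient 5%
    bottom = order[int(n * (1 - bottom_frac)):]     # least salient 30%
    pos = rng.choice(top, size=n_pos, replace=False)
    neg = rng.choice(bottom, size=n_neg, replace=False)
    # flat indices -> [X, Y] coordinates in the original image
    pos_xy = np.column_stack(np.unravel_index(pos, (h, w)))
    neg_xy = np.column_stack(np.unravel_index(neg, (h, w)))
    return pos_xy, neg_xy

# toy example on a 96x96 saliency map
sal = np.random.default_rng(0).random((96, 96))
pos_xy, neg_xy = sample_salient_pixels(sal, rng=0)
print(pos_xy.shape, neg_xy.shape)   # (50, 2) (14, 2)
```

The coordinates returned here would be used to look up the positive and negative pixels in the original RGB image.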
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 2) are:
2.1) A sparse autoencoder is selected as the unsupervised feature learning tool; the network's visible layer has M nodes and its hidden layer has N nodes;
2.2) The positive and negative pixel sample set is used as the input of the sparse autoencoder, and the network is pre-trained by unsupervised learning. By iteratively updating the weights between the input layer and the hidden layer, the features of the data are learned and extracted; the learned features are denoted W, a matrix of N rows and M columns.
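The patent specifies a sparse autoencoder with M visible and N hidden nodes but gives no equations. Below is a minimal sketch of one training step under the standard formulation (squared reconstruction error plus a KL-divergence sparsity penalty on the mean hidden activation); all hyperparameter values and the small N used in the toy run are illustrative, not from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_step(X, W1, b1, W2, b2, rho=0.05, beta=3.0, lr=0.1):
    """One in-place gradient step of a sparse autoencoder: reconstruction
    loss plus a KL sparsity penalty pushing mean hidden activation to rho."""
    m = X.shape[0]
    a1 = sigmoid(X @ W1.T + b1)                  # hidden activations
    a2 = sigmoid(a1 @ W2.T + b2)                 # reconstruction
    rho_hat = a1.mean(axis=0)                    # mean activation per hidden unit
    loss = (0.5 * np.sum((a2 - X) ** 2) / m
            + beta * np.sum(rho * np.log(rho / rho_hat)
                            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))
    d2 = (a2 - X) * a2 * (1 - a2) / m            # backprop through output layer
    sparse = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / m
    d1 = (d2 @ W2 + sparse) * a1 * (1 - a1)      # backprop through hidden layer
    W2 -= lr * (d2.T @ a1); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (d1.T @ X);  b1 -= lr * d1.sum(axis=0)
    return loss

rng = np.random.default_rng(0)
M, N = 192, 30                                   # N=900 in the patent; 30 here for speed
X = rng.random((100, M)) * 0.8 + 0.1             # pixel samples scaled into (0, 1)
W1 = rng.normal(0, 0.1, (N, M)); b1 = np.zeros(N)
W2 = rng.normal(0, 0.1, (M, N)); b2 = np.zeros(M)
losses = [sparse_autoencoder_step(X, W1, b1, W2, b2) for _ in range(200)]
print(losses[0] > losses[-1])                    # loss decreases; rows of W1 are the features W
```

After training, the N x M weight matrix W1 plays the role of the feature matrix W in step 2.2).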
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 3) are:
3.1) W is reshaped into 8*8 convolution kernels, producing N*3 kernels;
3.2) The R, G, B channel values of each image in the training and test samples of the image data set are obtained;
3.3) The three channel values of each image are each convolved (2-D convolution) with the 3 kernels of the current feature to obtain feature values;
3.4) The three channel feature values are summed; the sum is denoted x;
3.5) The activation value y of x is computed with the LRel activation function;
The LRel activation function is as follows:
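The formula image is not reproduced in this text. "LRel" presumably denotes the leaky ReLU; its common form (the slope a below is an assumed value) is:

```python
import numpy as np

def lrelu(x, a=0.01):
    """Leaky ReLU: y = x for x > 0, a*x otherwise.  The slope a is an
    assumption; the patent's formula image is not reproduced here."""
    return np.where(x > 0, x, a * x)

print(lrelu(np.array([-2.0, 0.0, 3.0])))   # negative inputs are scaled by a
```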
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 4) are:
4.1) A K*K Gaussian kernel is set as the weight window;
4.2) Local subtractive normalization is applied to the activation values of the training and test image data;
4.3) The result of the local subtraction is divided by the local standard deviation, finally yielding the feature values of the training and test samples after convolution and local contrast normalization.
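Steps 4.1)-4.3) can be sketched as follows, assuming the conventional local-contrast-normalization scheme (Gaussian-weighted local mean and standard deviation); the kernel size and sigma are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(k=9, sigma=2.0):
    """KxK Gaussian weight window, normalized to sum to 1."""
    ax = np.arange(k) - k // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def local_contrast_normalize(y, k=9, sigma=2.0, eps=1e-8):
    """Subtractive then divisive normalization with a KxK Gaussian window,
    mirroring steps 4.1)-4.3)."""
    w = gaussian_kernel(k, sigma)
    mean = convolve(y, w, mode="reflect")            # local weighted mean
    v = y - mean                                     # subtractive normalization
    std = np.sqrt(convolve(v ** 2, w, mode="reflect"))  # local std deviation
    return v / np.maximum(std, eps)                  # divisive normalization

img = np.random.default_rng(1).random((32, 32))
out = local_contrast_normalize(img)
print(out.shape)   # (32, 32)
```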
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 5) are:
5.1) The three spatial pyramid pooling scales are set to 1, 2, and 3;
5.2) The pooling window size and stride at the current scale are computed;
5.3) All 2-D feature blocks of the features are obtained by looping; each dimension of a block equals the pooling window size;
5.4) The mean of the current feature block is computed.
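A sketch of steps 5.1)-5.4) for a single 2-D feature map follows. The patent does not fully specify how the map is divided into windows at each scale, so the even partition below is an assumption.

```python
import numpy as np

def pyramid_avg_pool(feat, scales=(1, 2, 3)):
    """Average-pool a 2-D feature map into an s x s grid for each scale
    s, then concatenate, mirroring steps 5.1)-5.4)."""
    h, w = feat.shape
    pooled = []
    for s in scales:
        ys = np.linspace(0, h, s + 1, dtype=int)   # pooling window edges
        xs = np.linspace(0, w, s + 1, dtype=int)
        grid = np.array([[feat[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                          for j in range(s)] for i in range(s)])
        pooled.append(grid.ravel())
    return np.concatenate(pooled)                  # 1 + 4 + 9 = 14 values

f = np.arange(36, dtype=float).reshape(6, 6)
v = pyramid_avg_pool(f)
print(v.shape)   # (14,)
```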
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 6) are:
6.1) With each image in the data set of size H*H, the image center coordinates [midpointX, midpointY] are computed;
6.2) The distance of each pixel from the center is computed and stored in a matrix of H rows and H columns;
6.3) Through a resize operation, the matrix from step 6.2) is converted into a column vector of H*H rows;
6.4) By element-wise multiplication, the column vector from step 6.3) is fused with each of the multi-scale pyramid pooling features from step 5), yielding fused features at the three scales;
6.5) The fused features of the three scales from step 6.4) are concatenated by rows and columns to obtain the final effective feature representation of the training and test samples.
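Steps 6.1)-6.4) can be sketched as follows; the patent does not give the exact center-prior formula, so the plain Euclidean distance to the center used here is an assumption.

```python
import numpy as np

def center_prior(h, w):
    """Distance of every pixel from the image center (steps 6.1)-6.3));
    using the raw Euclidean distance is an assumption, since the patent
    does not state the exact prior formula."""
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    return d.reshape(-1, 1)                # column vector of H*W rows

prior = center_prior(96, 96)
print(prior.shape)                         # (9216, 1)

# step 6.4): fuse with a pooled feature of matching length by
# element-wise multiplication (toy stand-in feature vector)
feat = np.random.default_rng(2).random((96 * 96, 1))
fused = prior * feat
```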
In the foregoing image classification method based on saliency-guided unsupervised feature learning, the specific steps of step 7) are:
7.1) The open-source liblinear is chosen as the classifier; the feature set of the training samples and the corresponding sample labels are used for ten-fold cross-validation of liblinear to find the optimal parameter C;
7.2) The test samples are predicted with the trained classifier model.
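Steps 7.1)-7.2) can be sketched as follows. As a dependency-free stand-in for the liblinear solver that the patent uses, this sketch trains a small L2-regularized logistic regression (the solver substitution is an assumption) and selects C by ten-fold cross-validation.

```python
import numpy as np

def train_logreg(X, y, C=1.0, lr=0.1, iters=300):
    """L2-regularized logistic regression by gradient descent: a simple
    stand-in for the liblinear solver (the solver choice is an assumption)."""
    w = np.zeros(X.shape[1]); b = 0.0
    n = len(y)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g / n + w / (C * n))   # larger C => weaker penalty
        b -= lr * g.mean()
    return w, b

def cv_select_C(X, y, Cs=(1, 10, 65, 100), folds=10):
    """Step 7.1): ten-fold cross-validation to pick the best penalty C."""
    splits = np.array_split(np.arange(len(y)), folds)
    scores = {}
    for C in Cs:
        accs = []
        for k in range(folds):
            test = splits[k]
            train = np.concatenate([s for i, s in enumerate(splits) if i != k])
            w, b = train_logreg(X[train], y[train], C)
            pred = (X[test] @ w + b > 0).astype(int)
            accs.append((pred == y[test]).mean())
        scores[C] = np.mean(accs)
    return max(scores, key=scores.get)

# toy stand-in data; the real input would be the fused multi-scale features
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
best_C = cv_select_C(X, y)
print(best_C)
```

Step 7.2) then predicts the test samples with the model retrained at the selected C.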
Beneficial effects achieved by the present invention: the saliency detection algorithm is applied to the unsupervised feature-learning training set, so representative samples can be obtained. Unlike traditional hand-engineered features and supervised feature learning, a sparse autoencoder network from deep learning is used to train unlabeled samples. The obtained convolutional image features are post-processed with computer-vision techniques and methods such as local contrast normalization and the center prior, yielding a more essential representation and improving classification precision. The pooling unit is translation-invariant, ensuring that features can still be extracted and matched after an image is translated; pooling also reduces the dimensionality of the locally contrast-normalized features and prevents classifier over-fitting; multi-scale pyramid pooling yields pooled features at multiple scales and can significantly increase classification precision. The invention also incorporates the liblinear classifier, reducing classification time while improving precision.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Embodiments
The invention is further described below with reference to the accompanying drawings. The following examples are only used to illustrate the technical scheme of the present invention clearly and cannot be used to limit its scope of protection.
This embodiment takes the STL-10 database as an example. The database contains 10 classes of RGB images, each of size 96*96; it provides 100000 unlabeled images for unsupervised training, 5000 training samples, and 8000 test samples.
With reference to the steps of the present invention, as shown in Fig. 1, the detailed procedure is as follows:
1.1) The context-aware saliency detection algorithm is used to compute the saliency map of each image in the unlabeled data samples.
1.2) The pixels of each saliency map are sorted in descending order of gray value.
1.3) 64 pixels are chosen from each saliency map according to gray value: 50 positive pixels from the top 5% and 14 negative pixels from the bottom 30%.
1.4) The coordinates [X, Y] of the qualifying pixels in each saliency map are computed, and the corresponding positive and negative pixels in the original RGB image are located from these coordinates. Each pixel is taken as one sample, yielding the sample set for unsupervised feature learning.
2.1) The visible layer has M = 64*3 = 192 nodes and the hidden layer has N = 900 nodes.
2.2) The network is trained by unsupervised learning, iteratively updating the weights between the input layer and the hidden layer; the number of iterations is set to 600, and the learned features are denoted W, a matrix of 900 rows and 192 columns.
3.1) The R, G, B channel values of each image in the training and test samples of the image database are obtained.
3.2) The features W learned by unsupervised feature learning are reshaped into 8*8 convolution kernels; each feature produces 3 kernels, corresponding to the R, G, B channels of each image, with which 2-D convolution is performed to obtain feature values.
3.3) The three channel feature values are summed; the required sum is denoted x.
3.4) The activation value y of the summed x from step 3.3) is computed with the LRel activation function.
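The per-channel convolution and summation of steps 3.2)-3.4) can be sketched as follows; the "valid" boundary handling is an assumption, since the patent does not specify padding.

```python
import numpy as np
from scipy.signal import convolve2d

def convolve_feature(img_rgb, kernels):
    """Steps 3.2)-3.4): convolve each color channel with its 8x8 kernel
    and sum the three channel responses into x (one map per feature)."""
    return sum(convolve2d(img_rgb[:, :, c], kernels[c], mode="valid")
               for c in range(3))

img = np.random.default_rng(4).random((96, 96, 3))     # one STL-10-sized image
kernels = np.random.default_rng(5).random((3, 8, 8))   # 3 kernels of one feature
x = convolve_feature(img, kernels)
print(x.shape)   # (89, 89) with 'valid' convolution: 96 - 8 + 1 = 89
```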
4.1) A Gaussian kernel of size 9*9 is set as the weight window.
4.2) Local subtractive normalization is applied to the activation values y.
4.3) The result of step 4.2) is divided by the local standard deviation.
4.4) By looping over all samples, the feature values of all training and test samples after convolution and local contrast normalization are finally obtained.
5.1) The three spatial pyramid pooling scales are set to 1, 2, and 3.
5.2) The pooling window win and stride stride at the current scale are computed.
5.3) All 2-D feature blocks of the locally contrast-normalized feature values are obtained by looping; each dimension of a block equals the pooling window size.
5.4) The mean of the currently selected feature block is computed.
5.5) All pooling scales are traversed, yielding the pooled features of the training samples at three scales, denoted pooledFeaturesTrain1, pooledFeaturesTrain2, and pooledFeaturesTrain3, and the pooled features of the test samples at three scales, denoted pooledFeaturesTest1, pooledFeaturesTest2, and pooledFeaturesTest3.
6.1) Each image in the STL-10 data set has size 96*96; the image center coordinates [midpointX, midpointY] are computed, where midpointX = floor(96/2) = 48 and midpointY = floor(96/2) = 48.
6.2) The distance of each pixel from the center is computed and stored in a matrix of size [96, 96];
6.3) Through a resize operation, the matrix from step 6.2) is converted into a column vector of size [96*96, 1];
6.4) The column vector from step 6.3) is element-wise multiplied with pooledFeaturesTrain1, pooledFeaturesTrain2, pooledFeaturesTrain3 and with pooledFeaturesTest1, pooledFeaturesTest2, pooledFeaturesTest3, yielding the fused features of the training set and the test set at three scales;
6.5) The fused features of the three scales of the training set and the test set obtained in step 6.4) are concatenated by rows and columns, yielding the final effective feature representation of the training and test samples.
7.1) The final feature set of the training samples obtained above and the corresponding sample labels are used for ten-fold cross-validation of liblinear to find the optimal parameter C.
Experimental verification in this embodiment shows that when C = 65 the classifier achieves the best classification results;
7.2) The test samples are predicted with the trained classifier model.
The three prior-art classification methods compared against the present invention are as follows:
1) The image classification method proposed by Liefeng Bo et al. in "Unsupervised Feature Learning for RGB-D Based Object Recognition", ISER, 2012, abbreviated as Method 1.
2) The image classification method proposed by Julien Mairal et al. in "Convolutional Kernel Networks", arXiv:1406.3332, 2014, abbreviated as Method 2.
3) The image classification method proposed by Adriana Romero et al. in "No more meta-parameter tuning in unsupervised sparse feature learning", arXiv:1402.5766, 2014, abbreviated as Method 3.
Table 1. Comparison of the classification performance of each method:
As shown in Table 1, compared with existing methods from recent years, the image classification method of the present invention significantly improves classification accuracy. The method makes full use of the advantages of unsupervised feature learning and incorporates a variety of computer-vision techniques and methods into an existing deep learning network, making the extracted image features more representative and essential; it is a very useful image classification method.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and variations can also be made without departing from the technical principles of the invention, and these improvements and variations should also be regarded as falling within the protection scope of the invention.

Claims (2)

1. An image classification method based on saliency-guided unsupervised feature learning, characterized by comprising the following steps:
1) Saliency-guided pixel sampling: image pixels are sampled using a saliency detection algorithm; by obtaining the saliency map of each image, pixels that are representative of the image are collected;
The specific steps of step 1) are:
1.1) The context-aware saliency detection algorithm is used to compute the saliency map of each training sample in the image data set; every image in the data set has the same resolution;
1.2) The pixels of each saliency map are sorted in descending order of gray value;
1.3) 64 pixels are chosen from each saliency map according to gray value: 50 positive pixels from the top 5% and 14 negative pixels from the bottom 30%;
1.4) The coordinates [X, Y] of the qualifying pixels in each saliency map are computed, the corresponding positive and negative pixels in the original RGB image are located from these coordinates, and each pixel is taken as one sample, yielding the sample set for unsupervised feature learning;
2) Unsupervised feature learning: the representative pixels of the images are trained with a sparse autoencoder to obtain image features;
The specific steps of step 2) are:
2.1) A sparse autoencoder is selected as the unsupervised feature learning tool; the network's input layer has M nodes and its hidden layer has N nodes;
2.2) The positive and negative pixel sample set is used as the input of the sparse autoencoder, and the network is pre-trained by unsupervised learning; by iteratively updating the weights between the input layer and the hidden layer, the features of the data are learned and extracted; the learned features are denoted W, a matrix of N rows and M columns;
3) Image convolution: the training samples and test samples in the image data set are each convolved with the image features from step 2);
The specific steps of step 3) are:
3.1) W is reshaped into 8*8 convolution kernels, producing N*3 kernels;
3.2) The R, G, B channel values of each image in the training and test samples of the image data set are obtained;
3.3) The three channel values of each image are each convolved (2-D convolution) with the 3 kernels of the current feature to obtain feature values;
3.4) The three channel feature values are summed; the sum is denoted x;
3.5) The activation value Y of x is computed with the LRel activation function;
The LRel activation function is as follows:
4) Local contrast normalization: the convolutional image features of the training and test samples obtained in step 3) undergo local subtractive and divisive normalization; the specific steps of step 4) are:
4.1) A K*K Gaussian kernel is set as the weight window;
4.2) Local subtractive normalization is applied to the activation values of the training and test image data;
4.3) The result of the local subtraction is divided by the local standard deviation, finally yielding the feature values of the training and test samples after convolution and local contrast normalization;
5) Spatial pyramid pooling: average pooling is applied to the convolutional image features obtained in step 4) at three different spatial scales;
The specific steps of step 5) are:
5.1) The three spatial pyramid pooling scales are set to 1, 2, and 3;
5.2) The pooling window size and stride at the current scale are computed;
5.3) All 2-D feature blocks of the features are obtained by looping; each dimension of a block equals the pooling window size;
5.4) The mean of the current feature block is computed;
6) Center-prior fusion: the center-prior values of the training and test samples in the image data set are computed and fused with the features pooled at the three spatial scales in step 5);
The specific steps of step 6) are:
6.1) With each image in the data set of size H*H, the image center coordinates [midpointX, midpointY] are computed;
6.2) The distance of each pixel from the center is computed and stored in a matrix of H rows and H columns;
6.3) Through a resize operation, the matrix from step 6.2) is converted into a column vector of H*H rows;
6.4) By element-wise multiplication, the column vector from step 6.3) is fused with each of the multi-scale pyramid pooling features from step 5), yielding fused features at the three scales;
6.5) The fused features of the three scales from step 6.4) are concatenated by rows and columns to obtain the final effective feature representation of the training and test samples;
7) Image classification: a classifier is trained with the feature values of the training samples obtained in step 6), and the feature values of the test samples obtained in step 6) are fed into the trained classifier to classify the images.
2. The image classification method based on saliency-guided unsupervised feature learning according to claim 1, characterized in that the specific steps of step 7) are:
7.1) The open-source liblinear is chosen as the classifier; the feature set of the training samples and the corresponding sample labels are used for ten-fold cross-validation of liblinear to find the optimal parameter C;
7.2) The test samples are predicted with the trained classifier model.
CN201510821480.2A 2015-11-23 2015-11-23 Image classification method based on saliency-guided unsupervised feature learning Expired - Fee Related CN105426919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510821480.2A CN105426919B (en) 2015-11-23 2015-11-23 Image classification method based on saliency-guided unsupervised feature learning


Publications (2)

Publication Number Publication Date
CN105426919A CN105426919A (en) 2016-03-23
CN105426919B true CN105426919B (en) 2017-11-14

Family

ID=55505117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510821480.2A Expired - Fee Related CN105426919B (en) 2015-11-23 2015-11-23 Image classification method based on saliency-guided unsupervised feature learning

Country Status (1)

Country Link
CN (1) CN105426919B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107305694A * 2016-04-14 2017-10-31 武汉科技大学 Video grayness detection method based on visual saliency features
CN107729901B (en) * 2016-08-10 2021-04-27 阿里巴巴集团控股有限公司 Image processing model establishing method and device and image processing method and system
CN106778773B (en) * 2016-11-23 2020-06-02 北京小米移动软件有限公司 Method and device for positioning target object in picture
CN106919980B (en) * 2017-01-24 2020-02-07 南京大学 Incremental target identification system based on ganglion differentiation
CN110313017B (en) * 2017-03-28 2023-06-20 赫尔实验室有限公司 Machine vision method for classifying input data based on object components
US10783393B2 (en) 2017-06-20 2020-09-22 Nvidia Corporation Semi-supervised learning for landmark localization
CN107578055B (en) * 2017-06-20 2020-04-14 北京陌上花科技有限公司 Image prediction method and device
CN107451604A * 2017-07-12 2017-12-08 河海大学 Image classification method based on K-means
CN107563430A * 2017-08-28 2018-01-09 昆明理工大学 Convolutional neural network algorithm optimization method based on sparse autoencoder and gray-scale correlation fractal dimension
CN108460379B (en) * 2018-02-06 2021-05-04 西安电子科技大学 Salient object detection method based on refined space consistency two-stage graph
CN108647695A * 2018-05-02 2018-10-12 武汉科技大学 Soft-image saliency detection method based on covariance convolutional neural networks
CN109446990B (en) * 2018-10-30 2020-02-28 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109376787B (en) * 2018-10-31 2021-02-26 聚时科技(上海)有限公司 Manifold learning network and computer vision image set classification method based on manifold learning network
CN109583499B (en) * 2018-11-30 2021-04-16 河海大学常州校区 Power transmission line background object classification system based on unsupervised SDAE network
CN109620244B (en) * 2018-12-07 2021-07-30 吉林大学 Infant abnormal behavior detection method based on condition generation countermeasure network and SVM
CN110706205B (en) * 2019-09-07 2021-05-14 创新奇智(重庆)科技有限公司 Method for detecting cloth hole-breaking defect by using computer vision technology
CN111401434B (en) * 2020-03-12 2024-03-08 西北工业大学 Image classification method based on unsupervised feature learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810699A * 2013-12-24 2014-05-21 西安电子科技大学 SAR (synthetic aperture radar) image change detection method based on unsupervised deep neural networks
CN104408435A * 2014-12-05 2015-03-11 浙江大学 Face recognition method based on random-pooling convolutional neural networks
CN104462494A * 2014-12-22 2015-03-25 武汉大学 Remote sensing image retrieval method and system based on unsupervised feature learning
CN104573731A * 2015-02-06 2015-04-29 厦门大学 Rapid object detection method based on convolutional neural networks
CN104809469A * 2015-04-21 2015-07-29 重庆大学 Indoor scene image classification method for service robots


Also Published As

Publication number Publication date
CN105426919A (en) 2016-03-23

Similar Documents

Publication Publication Date Title
CN105426919B (en) Image classification method based on saliency-guided unsupervised feature learning
Kavitha et al. Benchmarking on offline Handwritten Tamil Character Recognition using convolutional neural networks
Saleh et al. Arabic sign language recognition through deep neural networks fine-tuning
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN104866810B (en) A kind of face identification method of depth convolutional neural networks
CN104317902B (en) Image search method based on local holding iterative quantization Hash
CN107945153A (en) A kind of road surface crack detection method based on deep learning
Basu et al. An MLP based Approach for Recognition of HandwrittenBangla'Numerals
Aghamaleki et al. Multi-stream CNN for facial expression recognition in limited training data
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN105913053B (en) A kind of facial expression recognizing method for singly drilling multiple features based on sparse fusion
CN109086405A (en) Remote sensing image retrieval method and system based on conspicuousness and convolutional neural networks
CN105550712B (en) Aurora image classification method based on optimization convolution autocoding network
CN106529586A (en) Image classification method based on supplemented text characteristic
CN111652171B (en) Construction method of facial expression recognition model based on double branch network
Chandio et al. Cursive character recognition in natural scene images using a multilevel convolutional neural network fusion
Arora et al. Application of statistical features in handwritten devnagari character recognition
CN112069900A (en) Bill character recognition method and system based on convolutional neural network
CN105956610B (en) A kind of remote sensing images classification of landform method based on multi-layer coding structure
CN111709443B (en) Calligraphy character style classification method based on rotation invariant convolution neural network
Arora et al. Study of different features on handwritten Devnagari character
Zheng et al. Segmentation-free multi-font printed Manchu word recognition using deep convolutional features and data augmentation
Sasipriyaa et al. Design and simulation of handwritten detection via generative adversarial networks and convolutional neural network
Singh et al. A comprehensive survey on Bangla handwritten numeral recognition
Ashoka et al. Feature extraction technique for neural network based pattern recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171114

Termination date: 20201123