CN105426919A - Saliency-guided unsupervised feature learning based image classification method - Google Patents
- Publication number
- CN105426919A (publication), CN201510821480.2A (application)
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- feature learning
- classification method
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention discloses an image classification method based on saliency guidance and unsupervised feature learning, belonging to the fields of machine learning and computer vision. The method comprises saliency-guided pixel collection, unsupervised feature learning, image convolution, local contrast normalization, spatial pyramid pooling, central prior fusion, and image classification. Representative pixels in an image dataset are collected through saliency detection; the representative pixels are trained with a sparse-autoencoder unsupervised feature learning method to obtain high-quality image features; features of the training set and test set are obtained through image convolution; the convolution features undergo local contrast normalization and spatial pyramid pooling; the pooled features are fused with a central prior feature; and the images are classified with a liblinear classifier. The method yields efficient, robust image features and significantly improves the classification accuracy on a variety of images.
Description
Technical field
The present invention relates to an image classification method based on saliency-guided unsupervised feature learning, belonging to the technical fields of machine learning and computer vision.
Background technology
With the development of multimedia technology, image classification has become a focus of computer vision research. Image classification assigns images to preset categories according to their attributes; effective image representation is the key to improving classification accuracy, and feature selection and extraction remain the main difficulties. Meanwhile, with the rapid development of the mobile Internet, society has entered the era of big data. Although traditional hand-engineered features such as SIFT and HOG can extract certain image characteristics and have achieved good results in image classification, such hand-designed methods have inherent defects. Likewise, traditional supervised feature learning, which relies on manually labeled data, struggles to scale in the big-data era.
Summary of the invention
To overcome the deficiencies of the prior art, the object of the present invention is to provide an image classification method based on saliency-guided unsupervised feature learning, incorporating methods and theory from computer vision into a deep learning network structure so as to improve the effectiveness of the image feature representation and thereby achieve better classification results.
In order to achieve the above object, the present invention adopts the following technical scheme:
An image classification method based on saliency-guided unsupervised feature learning, characterized by comprising the following steps:
1) saliency-guided pixel collection: using a saliency detection algorithm to compute a saliency map for each image and collecting the representative pixels of the image from it;
2) unsupervised feature learning: training a sparse autoencoder on the representative pixels to obtain image features;
3) image convolution: convolving the training samples and test samples of the image dataset with the image features obtained in step 2);
4) local contrast normalization: applying local subtractive and divisive normalization to the convolution features of the training and test samples obtained in step 3);
5) spatial pyramid pooling: average-pooling the convolved image features from step 4) at three different spatial scales;
6) central prior fusion: computing a central prior value for each training and test sample in the dataset and fusing it with each of the multi-scale pooled features from step 5);
7) image classification: training a classifier with the training-sample feature values obtained in step 6), and inputting the test-sample feature values from step 6) into the trained classifier to perform image classification.
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 1) specifically comprises:
1.1) computing the saliency map of each training-set image in the dataset with a context-aware saliency detection algorithm, every image in the dataset having the same resolution;
1.2) sorting the pixels of each saliency map in descending order of gray value;
1.3) selecting 64 pixels from each saliency map according to gray value: 50 positive pixels from the top 5% and 14 negative pixels from the bottom 30%;
1.4) computing the coordinates [X, Y] of the selected pixels in each saliency map and, using these coordinates, locating the corresponding positive and negative pixels in the original RGB image. Each pixel is taken as one sample, yielding the sample set for unsupervised feature learning.
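Steps 1.1)-1.4) can be sketched as follows (a minimal numpy illustration; the context-aware saliency detection itself is not reproduced, so a random map stands in for it, and the 64 selected pixels of one image are flattened into a single 192-dimensional sample to match the visible-layer size M = 64*3 used later — that grouping is an assumption):

```python
import numpy as np

def collect_pixels(saliency, image, n_pos=50, n_neg=14):
    """Sort saliency-map pixels by gray value (step 1.2), take positives from
    the top 5% and negatives from the bottom 30% (step 1.3), then read the
    corresponding RGB pixels out of the original image (step 1.4)."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]      # descending gray value
    pos = order[: int(0.05 * h * w)][:n_pos]        # 50 positive pixel points
    neg = order[int(0.70 * h * w):][:n_neg]         # 14 negative pixel points
    ys, xs = np.unravel_index(np.concatenate([pos, neg]), (h, w))  # [X, Y]
    return image[ys, xs].ravel()                    # one 64*3 = 192-dim sample

sal = np.random.rand(96, 96)                        # stand-in saliency map
img = np.random.rand(96, 96, 3)
sample = collect_pixels(sal, img)
print(sample.shape)  # (192,)
```

Repeating this over all unlabeled images yields the sample set for the unsupervised feature learning stage.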
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 2) specifically comprises:
2.1) selecting a sparse autoencoder as the unsupervised feature learning tool, the network having M visible-layer nodes and N hidden-layer nodes;
2.2) taking the collected positive and negative pixel sample set as the input of the sparse autoencoder and pre-training the network by unsupervised learning; the weights between the input layer and the hidden layer are iteratively updated to learn and extract the data features. The learned feature matrix is denoted W, of size N rows by M columns.
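A minimal sparse autoencoder along the lines of steps 2.1)-2.2) might look like the following. The sigmoid/linear architecture, the KL sparsity penalty, and all hyperparameter values (rho, beta, lam, lr) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, n_hidden, rho=0.05, beta=3.0, lam=3e-3,
                             lr=0.1, iters=100, seed=0):
    """Sparse autoencoder: M = X.shape[1] visible nodes, N = n_hidden hidden
    nodes. The input-to-hidden weights W1 are the learned feature matrix W
    (N rows, M columns) returned to the caller."""
    rng = np.random.default_rng(seed)
    m, M = X.shape
    W1 = rng.normal(0.0, 0.01, (n_hidden, M)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.01, (M, n_hidden)); b2 = np.zeros(M)
    for _ in range(iters):
        H = sigmoid(X @ W1.T + b1)                  # hidden activations
        Xhat = H @ W2.T + b2                        # linear reconstruction
        rho_hat = H.mean(axis=0)                    # mean hidden activation
        dXhat = (Xhat - X) / m                      # reconstruction gradient
        dH = dXhat @ W2 + beta / m * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
        dZ1 = dH * H * (1 - H)                      # backprop through sigmoid
        W2 -= lr * (dXhat.T @ H + lam * W2); b2 -= lr * dXhat.sum(axis=0)
        W1 -= lr * (dZ1.T @ X + lam * W1);  b1 -= lr * dZ1.sum(axis=0)
    return W1                                       # feature matrix W (N x M)

X = np.random.default_rng(1).random((200, 192))     # 192 = 64*3 visible nodes
W = train_sparse_autoencoder(X, n_hidden=16, iters=50)
print(W.shape)  # (16, 192)
```

In the embodiment below the real dimensions are N = 900 hidden nodes, M = 192 visible nodes, and 600 iterations.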
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 3) specifically comprises:
3.1) reshaping W into 8*8 convolution kernels, producing N*3 kernels;
3.2) obtaining the R, G, B three-channel values of every training and test image in the dataset;
3.3) convolving each image's three channel values in two dimensions with the 3 kernels of the current feature to obtain feature values;
3.4) summing the three-channel feature values, the sum being denoted x;
3.5) computing the activation value y of x with the LReL activation function.
The LReL (leaky rectified linear) activation function has the form y = x for x > 0 and y = a·x for x ≤ 0, where a is a small positive leak coefficient.
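Steps 3.2)-3.5) for a single learned feature can be sketched as follows (scipy's `convolve2d` stands in for the two-dimensional convolution; the leak coefficient a = 0.01 is an assumed value):

```python
import numpy as np
from scipy.signal import convolve2d

def leaky_relu(x, a=0.01):
    """LReL activation: y = x for x > 0, y = a*x otherwise."""
    return np.where(x > 0, x, a * x)

def convolve_feature(img, kernels):
    """Convolve each RGB channel with its 8*8 kernel, sum the three channel
    responses (x), and apply the LReL activation (y)."""
    x = sum(convolve2d(img[:, :, c], kernels[c], mode="valid")
            for c in range(3))                      # three-channel sum x
    return leaky_relu(x)                            # activation value y

img = np.random.rand(96, 96, 3)                     # one 96*96 RGB image
kernels = np.random.randn(3, 8, 8)                  # one feature -> 3 kernels
y = convolve_feature(img, kernels)
print(y.shape)  # (89, 89), i.e. 96 - 8 + 1 per side
```

Looping this over all N features yields one 89*89 activation map per feature and image.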
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 4) specifically comprises:
4.1) constructing a K*K Gaussian kernel to serve as the weight window;
4.2) applying local subtractive normalization to the activation values of the training and test image data;
4.3) dividing the subtractively normalized result by the local standard deviation, finally yielding the feature values of the training and test samples after convolution and local contrast normalization.
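Steps 4.1)-4.3) can be sketched as a standard subtractive-then-divisive local contrast normalization; the Gaussian width and the floor applied to the local standard deviation are illustrative choices not specified by the patent:

```python
import numpy as np
from scipy.signal import convolve2d

def local_contrast_normalize(y, k=9, eps=1e-8):
    """Local contrast normalization with a K*K Gaussian weight window
    (K = 9 in the embodiment): subtract the local weighted mean, then
    divide by the local weighted standard deviation."""
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * (k / 4.0) ** 2))
    g /= g.sum()                                    # normalized Gaussian window
    mean = convolve2d(y, g, mode="same")
    v = y - mean                                    # local subtractive step
    std = np.sqrt(convolve2d(v ** 2, g, mode="same"))
    std = np.maximum(std, std.mean() + eps)         # avoid amplifying flat areas
    return v / std                                  # local divisive step

y = np.random.rand(89, 89)
out = local_contrast_normalize(y)
print(out.shape)  # (89, 89)
```

The output keeps the shape of the activation map while equalizing local contrast, which stabilizes the subsequent pooling.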
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 5) specifically comprises:
5.1) setting the three spatial pyramid pooling scales to 1, 2, and 3;
5.2) computing the pooling window size and pooling stride at the current scale;
5.3) looping over the resulting feature map to obtain all two-dimensional feature blocks, each block having the dimensions of the pooling window;
5.4) averaging the values within the current feature block.
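Steps 5.1)-5.4) can be sketched as follows, interpreting scale s as an s*s pooling grid (so the window size and stride are the map size divided by s); that interpretation of the three scales is an assumption:

```python
import numpy as np

def pyramid_pool(feat, scales=(1, 2, 3)):
    """Average pooling of one feature map at each pyramid scale: at scale s
    the map is split into an s*s grid of blocks and each block is averaged."""
    pooled = []
    h, w = feat.shape
    for s in scales:
        win_h, win_w = h // s, w // s               # pooling window = stride
        blocks = [feat[i * win_h:(i + 1) * win_h,
                       j * win_w:(j + 1) * win_w].mean()
                  for i in range(s) for j in range(s)]
        pooled.append(np.array(blocks))             # s*s averages per scale
    return pooled

feat = np.random.rand(89, 89)
p1, p2, p3 = pyramid_pool(feat)
print(p1.shape, p2.shape, p3.shape)  # (1,) (4,) (9,)
```

Pooling all N feature maps this way yields the three multi-scale pooled feature sets used in the fusion step.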
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 6) specifically comprises:
6.1) letting every image in the dataset be of size H*H and computing the image center coordinates [midpointX, midpointY];
6.2) computing the distance of each pixel from the center point and storing the distance values in an H-row, H-column matrix;
6.3) reshaping the matrix from step 6.2) into a column vector of H*H rows;
6.4) fusing the column vector from step 6.3) with each of the multi-scale pyramid pooling features from step 5) by element-wise multiplication, obtaining fusion features at three scales;
6.5) concatenating the three-scale fusion features from step 6.4) by rows, obtaining the final effective feature representation of the training and test samples.
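Steps 6.1)-6.5) can be sketched as follows; the pooled-feature shapes in the usage lines are hypothetical stand-ins chosen to match the prior's length, since the patent does not spell out how the pooled features are laid out before fusion:

```python
import numpy as np

def central_prior(H=96):
    """Distance of every pixel from the image center [midpointX, midpointY],
    stored as an H*H matrix and reshaped to a column vector of H*H rows."""
    mid = H // 2                                    # floor(H/2), e.g. 48
    ys, xs = np.mgrid[0:H, 0:H]
    dist = np.sqrt((xs - mid) ** 2 + (ys - mid) ** 2)
    return dist.reshape(H * H, 1)                   # column vector [H*H, 1]

# Steps 6.4)-6.5): element-wise fusion with each scale's pooled feature,
# then row-wise concatenation into the final feature vector.
prior = central_prior(96)
pooled = [np.random.rand(96 * 96, 1) for _ in range(3)]  # hypothetical shapes
fused = [prior * p for p in pooled]                      # element-wise fusion
final = np.vstack(fused)                                 # row concatenation
print(final.shape)  # (27648, 1)
```

The concatenated vector is the final effective feature fed to the classifier.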
The aforesaid image classification method based on saliency-guided unsupervised feature learning is characterized in that step 7) specifically comprises:
7.1) selecting the open-source liblinear as the classifier, performing ten-fold cross-validation on liblinear with the obtained training-sample feature set and corresponding sample labels, and searching for the optimal parameter C;
7.2) predicting the test samples with the trained classifier model.
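Steps 7.1)-7.2) can be sketched with scikit-learn's `LinearSVC`, which wraps the same liblinear library; the synthetic data and the candidate C values are stand-ins for the real feature sets and search grid:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20))            # stand-in training features
y_train = rng.integers(0, 2, size=100)          # stand-in sample labels
X_test = rng.normal(size=(20, 20))              # stand-in test features

# Ten-fold cross-validation over candidate C values (step 7.1); the
# embodiment reports C = 65 as optimal on STL-10.
grid = GridSearchCV(LinearSVC(), {"C": [0.1, 1, 10, 65]}, cv=10)
grid.fit(X_train, y_train)
pred = grid.best_estimator_.predict(X_test)     # step 7.2: predict test set
print(pred.shape)  # (20,)
```

Any liblinear binding (the original C library, its Python/MATLAB interfaces, or scikit-learn as here) can play this role.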
Beneficial effects of the present invention: the saliency detection algorithm is applied to the collection of training samples for unsupervised feature learning, yielding representative samples. Unlike traditional hand-designed features and supervised feature learning, a sparse autoencoder network from deep learning is trained on unlabeled samples. The convolved image features are post-processed with computer vision techniques such as local contrast normalization and the central prior, to obtain a more essential representation and improve classification precision. The pooling unit is translation invariant, so features can still be extracted and matched after the whole image is translated; pooling reduces the dimensionality of the locally contrast-normalized features and prevents classifier overfitting; multi-scale pyramid pooling yields pooling features at multiple scales and significantly improves classification precision. The invention also incorporates the liblinear classifier, reducing classification time while improving precision.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Embodiment
The invention is further described below with reference to the accompanying drawing. The following embodiment only illustrates the technical scheme of the present invention and does not limit its scope.
This embodiment uses the STL-10 database, which comprises 10 classes of RGB images, each of size 96*96. There are 100,000 unlabeled images for unsupervised training, 5,000 training samples, and 8,000 test samples.
In conjunction with the steps of the present invention, the detailed process is as follows:
1.1) Use the context-aware saliency detection algorithm to compute the saliency map of each image in the unlabeled data.
1.2) Sort the pixels of each saliency map in descending order of gray value.
1.3) Select 64 pixels from each saliency map according to gray value: 50 positive pixels from the top 5% and 14 negative pixels from the bottom 30%.
1.4) Compute the coordinates [X, Y] of the selected pixels in each saliency map and, using these coordinates, locate the corresponding positive and negative pixels in the original RGB image. Each pixel is taken as one sample, yielding the sample set for unsupervised feature learning.
2.1) The number of visible-layer nodes is M = 64*3 = 192, and the number of hidden-layer nodes is N = 900.
2.2) Train the network by unsupervised learning, iteratively updating the weights between the input layer and the hidden layer; the number of iterations is set to 600. The learned feature matrix is denoted W, of size 900 rows by 192 columns.
3.1) Obtain the R, G, B three-channel values of every training and test image in the image database.
3.2) Reshape the data features W learned by unsupervised feature learning into 8*8 convolution kernels; each feature yields 3 kernels corresponding to the R, G, B channels of each image, with which two-dimensional convolution is performed to obtain feature values.
3.3) Sum the three-channel feature values; the sum is denoted x.
3.4) Compute the activation value y of the summed x with the LReL activation function.
4.1) Construct a 9*9 Gaussian kernel to serve as the weight window.
4.2) Apply local subtractive normalization to the activation value y.
4.3) Divide the result of step 4.2) by the local standard deviation.
4.4) Loop over all samples to finally obtain the feature values of every training and test sample after convolution and local contrast normalization.
5.1) Set the three spatial pyramid pooling scales to 1, 2, and 3.
5.2) Compute the pooling window size win and pooling stride stride at the current scale.
5.3) Loop over the locally contrast-normalized feature values to obtain all two-dimensional feature blocks, each block having the dimensions of the pooling window.
5.4) Average the values within the currently selected feature block.
5.5) Traverse all pooling scales to obtain the training-sample pooling features at three scales, denoted pooledFeaturesTrain1, pooledFeaturesTrain2, pooledFeaturesTrain3, and the test-sample pooling features at three scales, denoted pooledFeaturesTest1, pooledFeaturesTest2, pooledFeaturesTest3.
6.1) Each image in the STL-10 dataset is of size 96*96; compute the image center coordinates [midpointX, midpointY], where midpointX = floor(96/2) = 48 and midpointY = floor(96/2) = 48.
6.2) Compute the distance of each pixel from the center point and store the distance values in a matrix of size [96, 96].
6.3) Reshape the matrix from step 6.2) into a column vector of size [96*96, 1].
6.4) Multiply the column vector from step 6.3) element-wise with pooledFeaturesTrain1, pooledFeaturesTrain2, pooledFeaturesTrain3 and pooledFeaturesTest1, pooledFeaturesTest2, pooledFeaturesTest3, obtaining the three-scale fusion features of the training and test sets respectively.
6.5) Concatenate the three-scale fusion features of the training and test sets from step 6.4) by rows, obtaining the final effective feature representation of the training and test samples.
7.1) Perform ten-fold cross-validation on liblinear with the final training-sample feature set and corresponding sample labels obtained above, searching for the optimal parameter C.
Experiments in this embodiment verify that the classifier achieves the best classification results when C = 65.
7.2) Predict the test samples with the trained classifier model.
The three prior-art classification methods used for comparison with the present invention are as follows:
1) The image classification method proposed by Liefeng Bo et al. in "Unsupervised Feature Learning for RGB-D Based Object Recognition. In ISER, 2012.", referred to as method 1.
2) The image classification method proposed by Julien Mairal et al. in "Convolutional Kernel Networks. arXiv:1406.3332, 2014.", referred to as method 2.
3) The image classification method proposed by Adriana Romero et al. in "No more meta-parameter tuning in unsupervised sparse feature learning. arXiv:1402.5766, 2014.", referred to as method 3.
Table 1: comparison of the classification performance of each method.
As shown in Table 1, the classification accuracy of the image classification method of the present invention is significantly higher than that of recent existing methods. The method makes full use of the advantages of unsupervised feature learning and incorporates multiple computer vision techniques into an existing deep learning network, making the extracted image features more representative and essential; it is a highly practical image classification method.
The above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make improvements and variations without departing from the technical principles of the present invention, and such improvements and variations shall also be considered within the protection scope of the present invention.
Claims (8)
1. An image classification method based on saliency-guided unsupervised feature learning, characterized by comprising the following steps:
1) saliency-guided pixel collection: using a saliency detection algorithm to compute a saliency map for each image and collecting the representative pixels of the image from it;
2) unsupervised feature learning: training a sparse autoencoder on the representative pixels to obtain image features;
3) image convolution: convolving the training samples and test samples of the image dataset with the image features obtained in step 2);
4) local contrast normalization: applying local subtractive and divisive normalization to the convolution features of the training and test samples obtained in step 3);
5) spatial pyramid pooling: average-pooling the convolved image features from step 4) at three different spatial scales;
6) central prior fusion: computing a central prior value for each training and test sample in the dataset and fusing it with each of the multi-scale pooled features from step 5);
7) image classification: training a classifier with the training-sample feature values obtained in step 6), and inputting the test-sample feature values from step 6) into the trained classifier to perform image classification.
2. The image classification method based on saliency-guided unsupervised feature learning according to claim 1, characterized in that step 1) specifically comprises:
1.1) computing the saliency map of each training-set image in the dataset with a context-aware saliency detection algorithm, every image in the dataset having the same resolution;
1.2) sorting the pixels of each saliency map in descending order of gray value;
1.3) selecting 64 pixels from each saliency map according to gray value: 50 positive pixels from the top 5% and 14 negative pixels from the bottom 30%;
1.4) computing the coordinates [X, Y] of the selected pixels in each saliency map and, using these coordinates, locating the corresponding positive and negative pixels in the original RGB image. Each pixel is taken as one sample, yielding the sample set for unsupervised feature learning.
3. The image classification method based on saliency-guided unsupervised feature learning according to claim 2, characterized in that step 2) specifically comprises:
2.1) selecting a sparse autoencoder as the unsupervised feature learning tool, the network having M visible-layer nodes and N hidden-layer nodes;
2.2) taking the collected positive and negative pixel sample set as the input of the sparse autoencoder and pre-training the network by unsupervised learning; the weights between the input layer and the hidden layer are iteratively updated to learn and extract the data features. The learned feature matrix is denoted W, of size N rows by M columns.
4. The image classification method based on saliency-guided unsupervised feature learning according to claim 3, characterized in that step 3) specifically comprises:
3.1) reshaping W into 8*8 convolution kernels, producing N*3 kernels;
3.2) obtaining the R, G, B three-channel values of every training and test image in the dataset;
3.3) convolving each image's three channel values in two dimensions with the 3 kernels of the current feature to obtain feature values;
3.4) summing the three-channel feature values, the sum being denoted x;
3.5) computing the activation value y of x with the LReL activation function.
The LReL activation function has the form y = x for x > 0 and y = a·x for x ≤ 0, where a is a small positive leak coefficient.
5. The image classification method based on saliency-guided unsupervised feature learning according to claim 4, characterized in that step 4) specifically comprises:
4.1) constructing a K*K Gaussian kernel to serve as the weight window;
4.2) applying local subtractive normalization to the activation values of the training and test image data;
4.3) dividing the subtractively normalized result by the local standard deviation, finally yielding the feature values of the training and test samples after convolution and local contrast normalization.
6. The image classification method based on saliency-guided unsupervised feature learning according to claim 1, characterized in that step 5) specifically comprises:
5.1) setting the three spatial pyramid pooling scales to 1, 2, and 3;
5.2) computing the pooling window size and pooling stride at the current scale;
5.3) looping over the resulting feature map to obtain all two-dimensional feature blocks, each block having the dimensions of the pooling window;
5.4) averaging the values within the current feature block.
7. The image classification method based on saliency-guided unsupervised feature learning according to claim 6, characterized in that step 6) specifically comprises:
6.1) letting every image in the dataset be of size H*H and computing the image center coordinates [midpointX, midpointY];
6.2) computing the distance of each pixel from the center point and storing the distance values in an H-row, H-column matrix;
6.3) reshaping the matrix from step 6.2) into a column vector of H*H rows;
6.4) fusing the column vector from step 6.3) with each of the multi-scale pyramid pooling features from step 5) by element-wise multiplication, obtaining fusion features at three scales;
6.5) concatenating the three-scale fusion features from step 6.4) by rows, obtaining the final effective feature representation of the training and test samples.
8. The image classification method based on saliency-guided unsupervised feature learning according to claim 7, characterized in that step 7) specifically comprises:
7.1) selecting the open-source liblinear as the classifier, performing ten-fold cross-validation on liblinear with the obtained training-sample feature set and corresponding sample labels, and searching for the optimal parameter C;
7.2) predicting the test samples with the trained classifier model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510821480.2A CN105426919B (en) | 2015-11-23 | 2015-11-23 | The image classification method of non-supervisory feature learning is instructed based on conspicuousness |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105426919A true CN105426919A (en) | 2016-03-23 |
CN105426919B CN105426919B (en) | 2017-11-14 |
Family
ID=55505117
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510821480.2A Expired - Fee Related CN105426919B (en) | 2015-11-23 | 2015-11-23 | The image classification method of non-supervisory feature learning is instructed based on conspicuousness |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105426919B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778773A (en) * | 2016-11-23 | 2017-05-31 | 北京小米移动软件有限公司 | The localization method and device of object in picture |
CN106919980A (en) * | 2017-01-24 | 2017-07-04 | 南京大学 | A kind of increment type target identification system based on neuromere differentiation |
CN107305694A (en) * | 2016-04-14 | 2017-10-31 | 武汉科技大学 | A kind of video greyness detection method of view-based access control model significant characteristics |
CN107451604A (en) * | 2017-07-12 | 2017-12-08 | 河海大学 | A kind of image classification method based on K means |
CN107563430A (en) * | 2017-08-28 | 2018-01-09 | 昆明理工大学 | A kind of convolutional neural networks algorithm optimization method based on sparse autocoder and gray scale correlation fractal dimension |
CN107578055A (en) * | 2017-06-20 | 2018-01-12 | 北京陌上花科技有限公司 | A kind of image prediction method and apparatus |
CN107729901A (en) * | 2016-08-10 | 2018-02-23 | 阿里巴巴集团控股有限公司 | Method for building up, device and the image processing method and system of image processing model |
CN108460379A (en) * | 2018-02-06 | 2018-08-28 | 西安电子科技大学 | Well-marked target detection method based on refinement Space Consistency two-stage figure |
CN108647695A (en) * | 2018-05-02 | 2018-10-12 | 武汉科技大学 | Soft image conspicuousness detection method based on covariance convolutional neural networks |
CN109376787A (en) * | 2018-10-31 | 2019-02-22 | 聚时科技(上海)有限公司 | Manifold learning network and computer visual image collection classification method based on it |
CN109446990A (en) * | 2018-10-30 | 2019-03-08 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating information |
CN109583499A (en) * | 2018-11-30 | 2019-04-05 | 河海大学常州校区 | A kind of transmission line of electricity target context categorizing system based on unsupervised SDAE network |
CN109620244A (en) * | 2018-12-07 | 2019-04-16 | 吉林大学 | The Infants With Abnormal behavioral value method of confrontation network and SVM is generated based on condition |
CN110313017A (en) * | 2017-03-28 | 2019-10-08 | 赫尔实验室有限公司 | The machine vision method classified based on subject component to input data |
CN110706205A (en) * | 2019-09-07 | 2020-01-17 | 创新奇智(重庆)科技有限公司 | Method for detecting cloth hole-breaking defect by using computer vision technology |
CN111401434A (en) * | 2020-03-12 | 2020-07-10 | 西北工业大学 | Image classification method based on unsupervised feature learning |
US10783394B2 (en) | 2017-06-20 | 2020-09-22 | Nvidia Corporation | Equivariant landmark transformation for landmark localization |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103810699A (en) * | 2013-12-24 | 2014-05-21 | 西安电子科技大学 | SAR (synthetic aperture radar) image change detection method based on non-supervision depth nerve network |
CN104408435A (en) * | 2014-12-05 | 2015-03-11 | 浙江大学 | Face identification method based on random pooling convolutional neural network |
CN104462494A (en) * | 2014-12-22 | 2015-03-25 | 武汉大学 | Remote sensing image retrieval method and system based on non-supervision characteristic learning |
CN104573731A (en) * | 2015-02-06 | 2015-04-29 | 厦门大学 | Rapid target detection method based on convolutional neural network |
CN104809469A (en) * | 2015-04-21 | 2015-07-29 | 重庆大学 | Indoor scene image classification method facing service robot |
- 2015-11-23: application CN201510821480.2A granted as CN105426919B (not active; Expired - Fee Related)
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107305694A (en) * | 2016-04-14 | 2017-10-31 | 武汉科技大学 | A kind of video greyness detection method of view-based access control model significant characteristics |
CN107729901B (en) * | 2016-08-10 | 2021-04-27 | 阿里巴巴集团控股有限公司 | Image processing model establishing method and device and image processing method and system |
CN107729901A (en) * | 2016-08-10 | 2018-02-23 | 阿里巴巴集团控股有限公司 | Image processing model establishing method and device, and image processing method and system |
CN106778773A (en) * | 2016-11-23 | 2017-05-31 | 北京小米移动软件有限公司 | Method and device for positioning target object in picture |
CN106778773B (en) * | 2016-11-23 | 2020-06-02 | 北京小米移动软件有限公司 | Method and device for positioning target object in picture |
CN106919980A (en) * | 2017-01-24 | 2017-07-04 | 南京大学 | Incremental target identification system based on ganglion differentiation |
CN106919980B (en) * | 2017-01-24 | 2020-02-07 | 南京大学 | Incremental target identification system based on ganglion differentiation |
CN110313017A (en) * | 2017-03-28 | 2019-10-08 | 赫尔实验室有限公司 | Machine-vision method for classifying input data based on object components |
CN107578055A (en) * | 2017-06-20 | 2018-01-12 | 北京陌上花科技有限公司 | Image prediction method and apparatus |
CN107578055B (en) * | 2017-06-20 | 2020-04-14 | 北京陌上花科技有限公司 | Image prediction method and device |
US10783393B2 (en) | 2017-06-20 | 2020-09-22 | Nvidia Corporation | Semi-supervised learning for landmark localization |
US10783394B2 (en) | 2017-06-20 | 2020-09-22 | Nvidia Corporation | Equivariant landmark transformation for landmark localization |
CN107451604A (en) * | 2017-07-12 | 2017-12-08 | 河海大学 | Image classification method based on K-means |
CN107563430A (en) * | 2017-08-28 | 2018-01-09 | 昆明理工大学 | Convolutional neural network optimization method based on sparse autoencoder and gray-scale correlation fractal dimension |
CN108460379A (en) * | 2018-02-06 | 2018-08-28 | 西安电子科技大学 | Salient object detection method based on refined space consistency two-stage graph |
CN108460379B (en) * | 2018-02-06 | 2021-05-04 | 西安电子科技大学 | Salient object detection method based on refined space consistency two-stage graph |
CN108647695A (en) * | 2018-05-02 | 2018-10-12 | 武汉科技大学 | Saliency detection method for low-contrast images based on covariance convolutional neural network |
CN109446990A (en) * | 2018-10-30 | 2019-03-08 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating information |
CN109376787A (en) * | 2018-10-31 | 2019-02-22 | 聚时科技(上海)有限公司 | Manifold learning network and computer vision image set classification method based on it |
CN109376787B (en) * | 2018-10-31 | 2021-02-26 | 聚时科技(上海)有限公司 | Manifold learning network and computer vision image set classification method based on manifold learning network |
CN109583499B (en) * | 2018-11-30 | 2021-04-16 | 河海大学常州校区 | Power transmission line background object classification system based on unsupervised SDAE network |
CN109583499A (en) * | 2018-11-30 | 2019-04-05 | 河海大学常州校区 | Power transmission line background object classification system based on unsupervised SDAE network |
CN109620244A (en) * | 2018-12-07 | 2019-04-16 | 吉林大学 | Infant abnormal behavior detection method based on conditional generative adversarial network and SVM |
CN110706205A (en) * | 2019-09-07 | 2020-01-17 | 创新奇智(重庆)科技有限公司 | Method for detecting cloth hole-breaking defect by using computer vision technology |
CN110706205B (en) * | 2019-09-07 | 2021-05-14 | 创新奇智(重庆)科技有限公司 | Method for detecting cloth hole-breaking defect by using computer vision technology |
CN111401434A (en) * | 2020-03-12 | 2020-07-10 | 西北工业大学 | Image classification method based on unsupervised feature learning |
CN111401434B (en) * | 2020-03-12 | 2024-03-08 | 西北工业大学 | Image classification method based on unsupervised feature learning |
Also Published As
Publication number | Publication date |
---|---|
CN105426919B (en) | 2017-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105426919A (en) | Significant guidance and unsupervised feature learning based image classification method | |
CN107945153A (en) | Road surface crack detection method based on deep learning | |
CN107506761A (en) | Brain image segmentation method and system based on saliency query learning convolutional neural network | |
CN102722712B (en) | Multiple-scale high-resolution image object detection method based on continuity | |
CN109063753A (en) | Three-dimensional point cloud model classification method based on convolutional neural networks | |
CN106650830A (en) | Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method | |
CN105956560A (en) | Vehicle model identification method based on pooled multi-scale deep convolutional features | |
CN108537102A (en) | High-resolution SAR image classification method based on sparse features and conditional random field | |
CN103984959A (en) | Data-driven and task-driven image classification method | |
CN106682696A (en) | Multi-instance detection network based on online instance classifier refinement and its training method | |
CN107871102A (en) | Face detection method and device | |
CN105574063A (en) | Image retrieval method based on visual saliency | |
CN104598885A (en) | Method for detecting and locating text sign in street view image | |
CN107918772A (en) | Target tracking method based on compressive sensing theory and gcForest | |
CN105469050B (en) | Video behavior recognition method based on local spatio-temporal feature description and pyramid vocabulary tree | |
CN107767416A (en) | Pedestrian orientation recognition method in low-resolution images | |
CN105046272A (en) | Image classification method based on concise unsupervised convolutional network | |
Arora et al. | Application of statistical features in handwritten devnagari character recognition | |
CN108564111A (en) | Image classification method based on neighborhood rough set feature selection | |
CN107451594A (en) | Multi-view gait classification method based on multiple regression | |
CN103186776A (en) | Human detection method based on multiple features and depth information | |
CN104537353A (en) | Three-dimensional face age classifying device and method based on three-dimensional point cloud | |
Arora et al. | Study of different features on handwritten Devnagari character | |
CN109408655A (en) | Freehand sketch retrieval method incorporating dilated convolution and multi-scale perception network | |
Liu et al. | Image retrieval using CNN and low-level feature fusion for crime scene investigation image database |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20171114; Termination date: 20201123 |