CN103489004A - Method for achieving large category image identification of deep study network - Google Patents
Method for achieving large category image identification of deep study network
- Publication number
- CN103489004A CN103489004A CN201310461551.3A CN201310461551A CN103489004A CN 103489004 A CN103489004 A CN 103489004A CN 201310461551 A CN201310461551 A CN 201310461551A CN 103489004 A CN103489004 A CN 103489004A
- Authority
- CN
- China
- Prior art keywords
- layer
- eigenmatrix
- carries out
- local
- maximum
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method for large-category image recognition with a deep learning network. The method comprises a training process and a recognition process. In the training process, Gabor features are first extracted from each sample image and local maximum selection is applied; local linear coding is then performed with a feature codebook obtained by clustering; finally, feature vectors are derived with the spatial pyramid method and used to train a support vector machine classifier. In the recognition process, the feature vector of the image under test is classified by the trained support vector machine classifier. The method overcomes the lack of semantic information that afflicts local features extracted by traditional methods and can markedly improve the recognition rate of multi-category image recognition.
Description
Technical field
The present invention relates to the fields of pattern recognition and artificial intelligence, and in particular to a method for large-category image recognition with a deep learning network.
Background technology
Image recognition is a technique that determines whether a given or retrieved image is the required image, and it is an important research topic in pattern recognition and artificial intelligence. Many image recognition results already exist. For example, patent 200710179461.X proposes an image recognition method based on feature extraction and classifiers: a trained discriminator selects the feature categories and classifiers suited to the test picture, so that image recognition can adapt to different environments; recognition is then performed with several selected feature-extraction/classifier combinations, which effectively organizes diverse feature-extraction methods and classifiers, and the final decision is made from the recognition results of the combinations, improving the reliability of the result. Patent 201110081240.5 proposes an image classification method based on an improved sparsity-constrained bilinear model: local features are first extracted from the image; a number of parts are then densely extracted; each part is represented by a histogram of visual words, and the representations of the parts are arranged in order so that the image is represented as a matrix; finally, the improved sparsity-constrained bilinear model is used to model the visual-word-to-part and part-to-category relations, achieving image classification. Patent 201110049008.3 proposes a polarimetric SAR image classification method based on the Gaussian statistical characteristics of eigenvalues, which mainly addresses the prior art's need for manual decisions caused by insufficient knowledge of the feature distributions and of the classification decision boundary; it classifies polarimetric SAR images markedly better and can be used for target detection and recognition in polarimetric SAR images.

Current image recognition methods mainly extract local features of the image. Because local features lack semantic information, however, their performance on large-category image recognition is unsatisfactory.
Summary of the invention
To overcome the above shortcomings and deficiencies of the prior art, the object of the present invention is to provide a method for large-category image recognition with a deep learning network, which overcomes the lack of semantic information when local features are extracted by traditional methods and can markedly improve the recognition rate of multi-category image recognition.

The object of the present invention is achieved through the following technical solution:

A method for large-category image recognition with a deep learning network, which performs large-category image recognition with a deep learning network comprising a first simple layer (S1), a first complex layer (C1), a second simple layer (S2) and a second complex layer (C2).
The method comprises the following steps:
(1) Training process:
(11) Preprocess the sample pictures, the sample pictures comprising a plurality of categories;
(12) Perform Gabor feature extraction on the sample pictures to obtain the Gabor feature matrices, i.e. the feature matrices of the S1 layer;
(13) Apply local maximum selection to the S1-layer feature matrices obtained in step (12) to obtain locally optimized Gabor feature matrices, i.e. the feature matrices of the C1 layer;
(14) Use a feature codebook to apply local linear coding to the C1-layer feature matrices obtained in step (13), obtaining the feature matrices of the S2 layer;
(15) Use the spatial pyramid method to apply maximum selection to the S2-layer feature matrices obtained in step (14), obtaining the feature vectors that combine visual pattern features with local linear codes, i.e. the feature vectors of the C2 layer;
(16) Feed the C2-layer feature vectors obtained in step (15) into a support vector machine classifier for training;
(2) Recognition process:
(21) Preprocess the test picture;
(22) Perform Gabor feature extraction on the test picture to obtain the Gabor feature matrices, i.e. the feature matrices of the S1 layer;
(23) Apply local maximum selection to the S1-layer feature matrices obtained in step (22) to obtain locally optimized Gabor feature matrices, i.e. the feature matrices of the C1 layer;
(24) Use the feature codebook to apply local linear coding to the C1-layer feature matrices obtained in step (23), obtaining the feature matrices of the S2 layer;
(25) Use the spatial pyramid method to apply maximum selection to the S2-layer feature matrices obtained in step (24), obtaining the feature vectors that combine visual pattern features with local linear codes, i.e. the feature vectors of the C2 layer;
(26) Feed the C2-layer feature vectors obtained in step (25) into the support vector machine classifier trained in step (16) for recognition.
The local linear coding in step (14), in which the feature codebook is used to code the C1-layer feature matrices obtained in step (13), is specifically as follows:

In the C1 layer, each point of the sample image is taken in turn as a base point; the points inside a feature-template-sized window around the base point are extracted and converted into a one-dimensional feature vector, and this vector is coded with the feature codebook by local linear coding. After all base points of the sample image have been coded, the coding result of the C1-layer feature matrix is obtained; this result is the feature matrix of the S2 layer.

The local linear coding is implemented as follows. Let the current one-dimensional feature vector be x_i, let the feature codebook be B, and let the output of the coding be c_i. Then x_i, B and c_i must satisfy the optimization problem

min over c_i of ‖x_i − B·c_i‖² + λ·‖d_i ⊙ c_i‖²,  subject to 1ᵀc_i = 1,

where dist(x_i, B) = [dist(x_i, b_1), …, dist(x_i, b_M)]ᵀ is the vector of distances between the one-dimensional feature vector and the M codebook entries, d_i = exp(dist(x_i, B)/σ) is the locality adaptor derived from it with smoothing parameter σ, and ⊙ denotes element-wise multiplication between vectors. This local linear coding has the analytic solution

c̃_i = (C_i + λ·diag(d_i)²)⁻¹·1,  c_i = c̃_i/(1ᵀc̃_i),  where C_i = (Bᵀ − 1·x_iᵀ)(Bᵀ − 1·x_iᵀ)ᵀ.
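The optimization above has a closed-form solution that takes only a few lines of linear algebra. The sketch below is a minimal illustration, not the patent's implementation; the function name `llc_encode` and the values of the regularization weight `lam` and the locality width `sigma` are assumptions.

```python
import numpy as np

def llc_encode(x, B, lam=1e-4, sigma=1.0):
    """Local linear coding of feature vector x (D,) against codebook B (M, D).

    Solves  min_c ||x - B^T c||^2 + lam * ||d * c||^2  s.t.  1^T c = 1
    via the analytic solution, where d is the distance-based locality adaptor.
    """
    M = B.shape[0]
    d = np.exp(np.linalg.norm(B - x, axis=1) / sigma)   # locality adaptor
    Z = B - x                                           # shifted codebook (M, D)
    C = Z @ Z.T + lam * np.diag(d ** 2)                 # regularized data covariance
    c = np.linalg.solve(C, np.ones(M))                  # solve C c~ = 1
    return c / c.sum()                                  # enforce 1^T c = 1

# usage: code a random feature vector against a random 8-entry codebook
rng = np.random.default_rng(0)
B = rng.normal(size=(8, 16))    # 8 codewords, 16-dimensional features
x = rng.normal(size=16)
c = llc_encode(x, B)
```

The sum-to-one constraint makes the code shift-invariant, and the distance penalty keeps most weight on codewords near x, which is what gives the coding its local character.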
The feature codebook is extracted as follows:

From the C1 layer of every sample picture, 20 blocks of 4x4, 15 blocks of 8x8, 10 blocks of 12x12 and 5 blocks of 16x16 are chosen at random. Each block is converted into a one-dimensional vector in spatial order; the blocks are then clustered by size with the k-means method, and the cluster centers are taken as the feature codebook.
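The codebook extraction above (random blocks, flattened in spatial order, clustered with k-means) can be sketched as follows. This is a minimal illustration: the helper names `kmeans` and `build_codebook`, the codebook size `k`, and the use of a single block size are assumptions; the patent samples four block sizes and clusters each size separately.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means: returns k cluster centers for the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def build_codebook(c1_maps, k=32, block=4, n_blocks=20, seed=0):
    """Sample n_blocks random block x block patches from each C1 map,
    flatten them in spatial (row-major) order, and cluster into k entries."""
    rng = np.random.default_rng(seed)
    patches = []
    for m in c1_maps:
        h, w = m.shape
        for _ in range(n_blocks):
            y = rng.integers(0, h - block + 1)
            x = rng.integers(0, w - block + 1)
            patches.append(m[y:y + block, x:x + block].ravel())
    return kmeans(np.asarray(patches, dtype=float), k)

# usage: five toy 32x32 C1 maps standing in for real filter responses
maps = [np.random.default_rng(i).normal(size=(32, 32)) for i in range(5)]
codebook = build_codebook(maps, k=8)
```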
The local linear coding of the C1-layer feature matrices in step (24) is specifically as follows:

The feature codebook obtained in step (14) is used to perform local linear coding on the C1-layer feature matrices obtained in step (23).
The maximum selection with the spatial pyramid method in step (15), which yields the feature vectors combining visual pattern features with local linear codes, is specifically as follows:

First, a global maximum selection is performed on the S2-layer feature matrix of the sample picture. The sample picture is then divided into 2x2 sub-regions, and maximum selection is performed separately on the S2-layer feature matrix of each sub-region. The global result and the results of the sub-regions are concatenated and converted into a one-dimensional feature vector, which is the feature vector combining visual pattern features with local linear codes.
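The two-level maximum selection just described (one global pass plus 2x2 sub-regions) can be sketched as follows; the function name and the (H, W, M) array layout for the S2 codes are illustrative assumptions.

```python
import numpy as np

def spatial_pyramid_max(s2, levels=(1, 2)):
    """Two-level spatial-pyramid max pooling of an S2 feature map.

    s2: array (H, W, M) of M-dimensional local codes. For each pyramid
    level L the map is cut into L x L cells, each cell is max-pooled per
    dimension, and the pooled vectors are concatenated into one vector
    of length M * sum(L * L for L in levels).
    """
    H, W, M = s2.shape
    pooled = []
    for L in levels:
        ys = np.linspace(0, H, L + 1, dtype=int)
        xs = np.linspace(0, W, L + 1, dtype=int)
        for i in range(L):
            for j in range(L):
                cell = s2[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(0, 1)))    # per-dimension max
    return np.concatenate(pooled)

# usage: a toy 16x16 map of 8-dim codes -> (1 + 4) cells * 8 dims = 40 values
s2 = np.random.default_rng(0).random((16, 16, 8))
v = spatial_pyramid_max(s2)
```

The first M entries of `v` are the global maximum, so the vector keeps both a whole-image summary and the coarse 2x2 geometry of the responses.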
The maximum selection with the spatial pyramid method in step (25), which yields the feature vectors combining visual pattern features with local linear codes, is specifically as follows:

First, a global maximum selection is performed on the S2-layer feature matrix of the test picture. The test picture is then divided into 2x2 sub-regions, and maximum selection is performed separately on the S2-layer feature matrix of each sub-region. The global result and the results of the sub-regions are concatenated and converted into a one-dimensional feature vector, which is the feature vector combining visual pattern features with local linear codes.
The preprocessing of the sample pictures in step (11) is specifically as follows:

Each sample picture is converted to grayscale and resized to 140x140; it is then downscaled N−1 times by a factor of 2^(1/4), yielding N image layers, N ≥ 2.
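A minimal sketch of this preprocessing step, assuming nearest-neighbour interpolation (the patent does not specify the interpolation method) and a simple channel-mean grayscale conversion:

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D array (illustrative choice)."""
    h, w = img.shape
    rows = (np.arange(out_h) * h / out_h).astype(int)
    cols = (np.arange(out_w) * w / out_w).astype(int)
    return img[rows][:, cols]

def preprocess(rgb, n_layers=9, base=140, ratio=2 ** 0.25):
    """Grayscale, resize to base x base, then shrink N-1 times by 2^(1/4)."""
    gray = rgb.mean(axis=2)                  # channel-mean grayscale
    layers = [resize_nn(gray, base, base)]
    size = float(base)
    for _ in range(n_layers - 1):
        size /= ratio                        # each layer is 2^(1/4) smaller
        s = int(round(size))
        layers.append(resize_nn(gray, s, s))
    return layers

# usage: a toy 200x300 RGB picture -> 9-layer pyramid from 140x140 down to 35x35
img = np.random.default_rng(0).random((200, 300, 3))
pyr = preprocess(img, n_layers=9)
```

Eight shrinks by 2^(1/4) halve the side length twice, so the last layer is a quarter of the base size (140 → 35).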
The preprocessing of the test picture in step (21) is specifically as follows:

The test picture is converted to grayscale and resized to 140x140; it is then downscaled N−1 times by a factor of 2^(1/4), yielding N image layers, N ≥ 2.
Compared with the prior art, the present invention has the following advantages and beneficial effects:

(1) The present invention first extracts Gabor features of the picture in imitation of the visual characteristics of the human eye, then uses the local linear coding method to imitate neurons and combine the local visual features, and finally extracts semantically meaningful feature vectors with the spatial pyramid method; the extracted feature vectors are classified by a support vector machine. Because the recognition process resembles that of the human brain, its performance on large-category image classification surpasses traditional methods based on local features.

(2) The present invention uses the feature codebook to apply local linear coding to the locally maximized Gabor features, so that even when a large number of blocks are sampled from the sample image library, the S2 features keep a small dimensionality. Because the limit on the number of sampled blocks is relaxed, the diversity of the feature templates is guaranteed.

(3) The present invention adopts the spatial pyramid method, so the feature vectors retain some geometric information; this geometric information helps improve the recognition performance when the support vector machine classifier performs classification.
Brief description of the drawings

Fig. 1 is a flowchart of the method for large-category image recognition with a deep learning network according to an embodiment of the present invention.
Embodiment
The present invention is described in further detail below in conjunction with an embodiment, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, the method of this embodiment performs large-category image recognition with a deep learning network comprising a first simple layer (S1), a first complex layer (C1), a second simple layer (S2) and a second complex layer (C2).
The method comprises the following steps:

(1) Training process:

(11) Preprocess the sample pictures; the samples comprise 1000 categories with 40 pictures per category:

Each sample picture is converted to grayscale and resized to 140x140, then downscaled N−1 times by a factor of 2^(1/4), yielding N image layers (N ≥ 2; in this embodiment N = 9);

(12) Perform Gabor feature extraction on the sample pictures to obtain the Gabor feature matrices:

The N image layers obtained in step (11) are filtered with Gabor filters, yielding the Gabor feature matrices, i.e. the feature matrices of the S1 layer;
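The S1-layer filtering can be sketched as below: build an oriented Gabor kernel and take the rectified valid-convolution response at several orientations. The filter parameters (kernel size, wavelength, bandwidth, four orientations) are illustrative assumptions; the patent does not publish its filter bank.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=11, theta=0.0, lam=5.6, sigma=4.5, gamma=0.3):
    """Gabor kernel at orientation theta (parameter values are assumed)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    x0 = x * np.cos(theta) + y * np.sin(theta)
    y0 = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(x0 ** 2 + (gamma * y0) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * x0 / lam)
    k -= k.mean()                        # zero mean, so flat regions respond 0
    return k / np.linalg.norm(k)         # unit energy

def gabor_response(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack of rectified Gabor responses (valid convolution), one per theta."""
    out = []
    for t in thetas:
        k = gabor_kernel(theta=t)
        win = sliding_window_view(img, k.shape)          # all 11x11 windows
        out.append(np.abs((win * k).sum(axis=(-1, -2))))  # rectified response
    return np.stack(out, axis=-1)

# usage: one 40x40 image layer -> a (30, 30, 4) S1 feature matrix
img = np.random.default_rng(0).random((40, 40))
s1 = gabor_response(img)
```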
(13) Apply local maximum selection to the S1-layer feature matrices obtained in step (12) to obtain locally optimized Gabor feature matrices, i.e. the feature matrices of the C1 layer;
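Local maximum selection over non-overlapping spatial neighbourhoods can be sketched as follows; the neighbourhood size `cell` is an assumed parameter, and edge rows/columns that do not fill a whole cell are simply trimmed.

```python
import numpy as np

def local_max(s1, cell=4):
    """Local maximum selection: keep the max of each cell x cell
    neighbourhood of a 2-D response map (cell size is illustrative)."""
    h, w = s1.shape
    h, w = h - h % cell, w - w % cell           # trim to a multiple of cell
    blocks = s1[:h, :w].reshape(h // cell, cell, w // cell, cell)
    return blocks.max(axis=(1, 3))

# usage: a 30x30 S1 response map pooled into a 7x7 C1 map
s1 = np.random.default_rng(0).random((30, 30))
c1 = local_max(s1, cell=4)
```

Keeping only the strongest response in each neighbourhood makes the C1 features tolerant to small shifts, at the cost of spatial resolution.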
(14) Use the feature codebook to apply local linear coding to the C1-layer feature matrices obtained in step (13), obtaining the feature matrices of the S2 layer. Specifically:

In the C1 layer, each point of the sample image is taken in turn as a base point; the points inside a feature-template-sized window around the base point are extracted and converted into a one-dimensional feature vector, and this vector is coded with the feature codebook by local linear coding. After all base points of the sample image have been coded, the coding result of the C1-layer feature matrix is obtained; this result is the feature matrix of the S2 layer.
The local linear coding is implemented as follows. Let the current one-dimensional feature vector be x_i, let the feature codebook be B, and let the output of the coding be c_i. Then x_i, B and c_i must satisfy the optimization problem

min over c_i of ‖x_i − B·c_i‖² + λ·‖d_i ⊙ c_i‖²,  subject to 1ᵀc_i = 1,

where dist(x_i, B) = [dist(x_i, b_1), …, dist(x_i, b_M)]ᵀ is the vector of distances between the one-dimensional feature vector and the M codebook entries, d_i = exp(dist(x_i, B)/σ) is the locality adaptor derived from it with smoothing parameter σ, and ⊙ denotes element-wise multiplication between vectors. This local linear coding has the analytic solution

c̃_i = (C_i + λ·diag(d_i)²)⁻¹·1,  c_i = c̃_i/(1ᵀc̃_i),  where C_i = (Bᵀ − 1·x_iᵀ)(Bᵀ − 1·x_iᵀ)ᵀ.
The feature codebook is extracted as follows:

From the C1 layer of every sample picture, 20 blocks of 4x4, 15 blocks of 8x8, 10 blocks of 12x12 and 5 blocks of 16x16 are chosen at random. Each block is converted into a one-dimensional vector in spatial order; the blocks are then clustered by size with the k-means method, and the cluster centers are taken as the feature codebook B.
(15) Use the spatial pyramid method to apply maximum selection to the S2-layer feature matrices obtained in step (14), obtaining the feature vectors that combine visual pattern features with local linear codes, i.e. the feature vectors of the C2 layer:

The spatial pyramid method layers the image: the first layer is the sample image itself, and the second layer divides the sample picture into 2x2 sub-regions. Accordingly, the algorithm first performs a global maximum selection on the S2-layer feature matrix of the sample picture; it then divides the sample picture into 2x2 sub-regions and performs maximum selection separately on the S2-layer feature matrix of each sub-region. The global result and the results of the 2x2 sub-regions are concatenated and converted into a one-dimensional feature vector, yielding the C2-layer feature vector that combines visual pattern features with local linear codes.

(16) Feed the feature vectors obtained in step (15) into a support vector machine (SVM) classifier for training.
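As a stand-in for the classifier-training step, the sketch below trains a Pegasos-style linear SVM by stochastic subgradient descent on toy two-class data. The patent does not specify the SVM kernel, solver, or multi-class scheme, so everything here (function name, hinge-loss subgradient updates, the bias heuristic, toy data) is an illustrative assumption.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal Pegasos-style linear SVM; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1:         # margin violated: hinge step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                 # only shrink (regularizer)
                w = (1 - eta * lam) * w
    return w, b

# toy separable data standing in for C2-layer feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

For the 1000-category setting of the embodiment one would wrap such a binary trainer in a one-vs-rest (or one-vs-one) scheme, or use an off-the-shelf multi-class SVM library.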
(2) Recognition process:

(21) Preprocess the test picture:

The test picture is converted to grayscale and resized to 140x140, then downscaled N−1 times by a factor of 2^(1/4), yielding N image layers;

(22) Perform Gabor feature extraction on the test picture to obtain the Gabor feature matrices, i.e. the feature matrices of the S1 layer;

(23) Apply local maximum selection to the S1-layer feature matrices obtained in step (22) to obtain locally optimized Gabor feature matrices, i.e. the feature matrices of the C1 layer;

(24) Apply local linear coding to the C1-layer feature matrices obtained in step (23):

The feature codebook obtained in step (14) is used to perform local linear coding on the C1-layer feature matrices obtained in step (23), yielding the feature matrices of the S2 layer;

(25) Use the spatial pyramid method to apply maximum selection to the S2-layer feature matrices obtained in step (24), obtaining the feature vectors that combine visual pattern features with local linear codes:

First, a global maximum selection is performed on the S2-layer feature matrix of the test picture; the test picture is then divided into 2x2 sub-regions, and maximum selection is performed separately on the S2-layer feature matrix of each sub-region. The global result and the results of the 2x2 sub-regions are concatenated and converted into a one-dimensional feature vector, yielding the C2-layer feature vector that combines visual pattern features with local linear codes;

(26) Feed the feature vectors obtained in step (25) into the support vector machine classifier trained in step (16) for recognition.

The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be deemed an equivalent replacement and fall within the protection scope of the present invention.
Claims (7)
1. A method for large-category image recognition with a deep learning network, characterized in that large-category image recognition is performed with a deep learning network comprising a first simple layer (S1), a first complex layer (C1), a second simple layer (S2) and a second complex layer (C2);
the method comprises the following steps:
(1) Training process:
(11) Preprocess the sample pictures, the sample pictures comprising a plurality of categories;
(12) Perform Gabor feature extraction on the sample pictures to obtain the Gabor feature matrices, i.e. the feature matrices of the S1 layer;
(13) Apply local maximum selection to the S1-layer feature matrices obtained in step (12) to obtain locally optimized Gabor feature matrices, i.e. the feature matrices of the C1 layer;
(14) Use a feature codebook to apply local linear coding to the C1-layer feature matrices obtained in step (13), obtaining the feature matrices of the S2 layer;
(15) Use the spatial pyramid method to apply maximum selection to the S2-layer feature matrices obtained in step (14), obtaining the feature vectors that combine visual pattern features with local linear codes, i.e. the feature vectors of the C2 layer;
(16) Feed the C2-layer feature vectors obtained in step (15) into a support vector machine classifier for training;
(2) Recognition process:
(21) Preprocess the test picture;
(22) Perform Gabor feature extraction on the test picture to obtain the Gabor feature matrices, i.e. the feature matrices of the S1 layer;
(23) Apply local maximum selection to the S1-layer feature matrices obtained in step (22) to obtain locally optimized Gabor feature matrices, i.e. the feature matrices of the C1 layer;
(24) Use the feature codebook to apply local linear coding to the C1-layer feature matrices obtained in step (23), obtaining the feature matrices of the S2 layer;
(25) Use the spatial pyramid method to apply maximum selection to the S2-layer feature matrices obtained in step (24), obtaining the feature vectors that combine visual pattern features with local linear codes, i.e. the feature vectors of the C2 layer;
(26) Feed the C2-layer feature vectors obtained in step (25) into the support vector machine classifier trained in step (16) for recognition.
2. The method for large-category image recognition with a deep learning network according to claim 1, characterized in that the local linear coding in step (14), in which the feature codebook is used to code the C1-layer feature matrices obtained in step (13), is specifically:

In the C1 layer, each point of the sample image is taken in turn as a base point; the points inside a feature-template-sized window around the base point are extracted and converted into a one-dimensional feature vector, and this vector is coded with the feature codebook by local linear coding. After all base points of the sample image have been coded, the coding result of the C1-layer feature matrix is obtained; this result is the feature matrix of the S2 layer.

The local linear coding is implemented as follows. Let the current one-dimensional feature vector be x_i, let the feature codebook be B, and let the output of the coding be c_i. Then x_i, B and c_i must satisfy the optimization problem

min over c_i of ‖x_i − B·c_i‖² + λ·‖d_i ⊙ c_i‖²,  subject to 1ᵀc_i = 1,

where dist(x_i, B) = [dist(x_i, b_1), …, dist(x_i, b_M)]ᵀ is the vector of distances between the one-dimensional feature vector and the M codebook entries, d_i = exp(dist(x_i, B)/σ) is the locality adaptor derived from it with smoothing parameter σ, and ⊙ denotes element-wise multiplication between vectors. This local linear coding has the analytic solution

c̃_i = (C_i + λ·diag(d_i)²)⁻¹·1,  c_i = c̃_i/(1ᵀc̃_i),  where C_i = (Bᵀ − 1·x_iᵀ)(Bᵀ − 1·x_iᵀ)ᵀ.

The feature codebook is extracted as follows: from the C1 layer of every sample picture, 20 blocks of 4x4, 15 blocks of 8x8, 10 blocks of 12x12 and 5 blocks of 16x16 are chosen at random; each block is converted into a one-dimensional vector in spatial order, the blocks are clustered by size with the k-means method, and the cluster centers are taken as the feature codebook.
3. The method for large-category image recognition with a deep learning network according to claim 2, characterized in that the local linear coding of the C1-layer feature matrices in step (24) is specifically:

The feature codebook obtained in step (14) is used to perform local linear coding on the C1-layer feature matrices obtained in step (23).
4. The method for large-category image recognition with a deep learning network according to claim 1, characterized in that the maximum selection with the spatial pyramid method in step (15), which yields the feature vectors combining visual pattern features with local linear codes, is specifically:

First, a global maximum selection is performed on the S2-layer feature matrix of the sample picture. The sample picture is then divided into 2x2 sub-regions, and maximum selection is performed separately on the S2-layer feature matrix of each sub-region. The global result and the results of the sub-regions are concatenated and converted into a one-dimensional feature vector, which is the feature vector combining visual pattern features with local linear codes.
5. The method for large-category image recognition with a deep learning network according to claim 1, characterized in that the maximum selection with the spatial pyramid method in step (25), which yields the feature vectors combining visual pattern features with local linear codes, is specifically:

First, a global maximum selection is performed on the S2-layer feature matrix of the test picture. The test picture is then divided into 2x2 sub-regions, and maximum selection is performed separately on the S2-layer feature matrix of each sub-region. The global result and the results of the sub-regions are concatenated and converted into a one-dimensional feature vector, which is the feature vector combining visual pattern features with local linear codes.
6. The method for large-category image recognition with a deep learning network according to claim 1, characterized in that the preprocessing of the sample pictures in step (11) is specifically:

Each sample picture is converted to grayscale and resized to 140x140; it is then downscaled N−1 times by a factor of 2^(1/4), yielding N image layers, N ≥ 2.
7. The method for large-category image recognition with a deep learning network according to claim 1, characterized in that the preprocessing of the test picture in step (21) is specifically:

The test picture is converted to grayscale and resized to 140x140; it is then downscaled N−1 times by a factor of 2^(1/4), yielding N image layers, N ≥ 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310461551.3A CN103489004A (en) | 2013-09-30 | 2013-09-30 | Method for achieving large category image identification of deep study network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310461551.3A CN103489004A (en) | 2013-09-30 | 2013-09-30 | Method for achieving large category image identification of deep study network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103489004A true CN103489004A (en) | 2014-01-01 |
Family
ID=49829211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310461551.3A Pending CN103489004A (en) | 2013-09-30 | 2013-09-30 | Method for achieving large category image identification of deep study network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103489004A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104166819A (en) * | 2014-07-25 | 2014-11-26 | 小米科技有限责任公司 | Identity verification method, device and terminal |
CN104537647A (en) * | 2014-12-12 | 2015-04-22 | 中安消技术有限公司 | Target detection method and device |
CN104881671A (en) * | 2015-05-21 | 2015-09-02 | 电子科技大学 | High resolution remote sensing image local feature extraction method based on 2D-Gabor |
CN105960647A (en) * | 2014-05-29 | 2016-09-21 | 北京旷视科技有限公司 | Compact face representation |
CN106407369A (en) * | 2016-09-09 | 2017-02-15 | 华南理工大学 | Photo management method and system based on deep learning face recognition |
CN107451604A (en) * | 2017-07-12 | 2017-12-08 | 河海大学 | A kind of image classification method based on K means |
CN107844758A (en) * | 2017-10-24 | 2018-03-27 | 量子云未来(北京)信息科技有限公司 | Intelligence pre- film examination method, computer equipment and readable storage medium storing program for executing |
WO2018107760A1 (en) * | 2016-12-16 | 2018-06-21 | 北京大学深圳研究生院 | Collaborative deep network model method for pedestrian detection |
CN108304885A (en) * | 2018-02-28 | 2018-07-20 | 宜宾学院 | A kind of Gabor wavelet CNN image classification methods |
CN109345531A (en) * | 2018-10-10 | 2019-02-15 | 四川新网银行股份有限公司 | A kind of method and system based on picture recognition user's shooting distance |
CN109490814A (en) * | 2018-09-07 | 2019-03-19 | 广西电网有限责任公司电力科学研究院 | Metering automation terminal fault diagnostic method based on deep learning and Support Vector data description |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1858773A (en) * | 2005-04-30 | 2006-11-08 | 中国科学院计算技术研究所 | Image identifying method based on Gabor phase mode |
US20100310157A1 (en) * | 2009-06-05 | 2010-12-09 | Samsung Electronics Co., Ltd. | Apparatus and method for video sensor-based human activity and facial expression modeling and recognition |
CN101968846A (en) * | 2010-07-27 | 2011-02-09 | 上海摩比源软件技术有限公司 | Face tracking method |
CN102194108A (en) * | 2011-05-13 | 2011-09-21 | 华南理工大学 | Smiley face expression recognition method based on clustering linear discriminant analysis of feature selection |
CN102609732A (en) * | 2012-01-31 | 2012-07-25 | 中国科学院自动化研究所 | Object recognition method based on generalization visual dictionary diagram |
CN102622607A (en) * | 2012-02-24 | 2012-08-01 | 河海大学 | Remote sensing image classification method based on multi-feature fusion |
- 2013-09-30: CN CN201310461551.3A patent/CN103489004A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1858773A (en) * | 2005-04-30 | 2006-11-08 | 中国科学院计算技术研究所 | Image identifying method based on Gabor phase mode |
US20100310157A1 (en) * | 2009-06-05 | 2010-12-09 | Samsung Electronics Co., Ltd. | Apparatus and method for video sensor-based human activity and facial expression modeling and recognition |
CN101968846A (en) * | 2010-07-27 | 2011-02-09 | 上海摩比源软件技术有限公司 | Face tracking method |
CN102194108A (en) * | 2011-05-13 | 2011-09-21 | 华南理工大学 | Smiley face expression recognition method based on clustering linear discriminant analysis of feature selection |
CN102609732A (en) * | 2012-01-31 | 2012-07-25 | 中国科学院自动化研究所 | Object recognition method based on generalization visual dictionary diagram |
CN102622607A (en) * | 2012-02-24 | 2012-08-01 | 河海大学 | Remote sensing image classification method based on multi-feature fusion |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105960647A (en) * | 2014-05-29 | 2016-09-21 | Beijing Megvii Technology Co., Ltd. | Compact face representation |
CN104166819A (en) * | 2014-07-25 | 2014-11-26 | Xiaomi Inc. | Identity verification method, device and terminal |
CN104537647A (en) * | 2014-12-12 | 2015-04-22 | China Security & Surveillance Technology Co., Ltd. | Target detection method and device |
CN104537647B (en) * | 2014-12-12 | 2017-10-20 | China Security & Surveillance Technology Co., Ltd. | Target detection method and device |
CN104881671B (en) * | 2015-05-21 | 2018-01-19 | University of Electronic Science and Technology of China | High-resolution remote sensing image local feature extraction method based on 2D-Gabor |
CN104881671A (en) * | 2015-05-21 | 2015-09-02 | University of Electronic Science and Technology of China | High-resolution remote sensing image local feature extraction method based on 2D-Gabor |
CN106407369A (en) * | 2016-09-09 | 2017-02-15 | South China University of Technology | Photo management method and system based on deep learning face recognition |
WO2018107760A1 (en) * | 2016-12-16 | 2018-06-21 | Peking University Shenzhen Graduate School | Collaborative deep network model method for pedestrian detection |
CN107451604A (en) * | 2017-07-12 | 2017-12-08 | Hohai University | Image classification method based on K-means |
CN107844758A (en) * | 2017-10-24 | 2018-03-27 | Quantum Cloud Future (Beijing) Information Technology Co., Ltd. | Intelligent film pre-review method, computer device and readable storage medium |
CN108304885A (en) * | 2018-02-28 | 2018-07-20 | Yibin University | Gabor wavelet CNN image classification method |
CN109490814A (en) * | 2018-09-07 | 2019-03-19 | Electric Power Research Institute of Guangxi Power Grid Co., Ltd. | Metering automation terminal fault diagnosis method based on deep learning and support vector data description |
CN109490814B (en) * | 2018-09-07 | 2021-02-26 | Electric Power Research Institute of Guangxi Power Grid Co., Ltd. | Metering automation terminal fault diagnosis method based on deep learning and support vector data description |
CN109345531A (en) * | 2018-10-10 | 2019-02-15 | Sichuan XW Bank Co., Ltd. | Method and system for recognizing a user's shooting distance based on picture recognition |
CN109345531B (en) * | 2018-10-10 | 2019-07-30 | Sichuan XW Bank Co., Ltd. | Method and system for recognizing a user's shooting distance based on picture recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103489004A (en) | | Method for achieving large-category image recognition with a deep learning network |
CN105956560B (en) | | Vehicle model recognition method based on multi-scale pooled deep convolutional features |
CN105894045B (en) | | Vehicle model recognition method using a deep network model based on spatial pyramid pooling |
CN106815604B (en) | | Viewpoint detection method based on multi-layer information fusion |
CN102314614B (en) | | Image semantics classification method based on class-shared multiple kernel learning (MKL) |
CN105069481B (en) | | Multi-label natural scene classification method based on spatial pyramid sparse coding |
CN103971097B (en) | | Vehicle license plate recognition method and system based on multi-scale stroke models |
CN106022300A (en) | | Traffic sign recognition method and system based on cascaded deep learning |
CN105956626A (en) | | Deep-learning-based license plate recognition method insensitive to license plate position |
CN109102014A (en) | | Image classification method for class-imbalanced data based on deep convolutional neural networks |
CN104866810A (en) | | Face recognition method using a deep convolutional neural network |
CN104657748A (en) | | Vehicle type recognition method based on a convolutional neural network |
CN102915453B (en) | | Vehicle detection method with real-time feedback and updating |
CN104268552B (en) | | Fine-grained classification method based on part polygons |
CN109271991A (en) | | License plate detection method based on deep learning |
CN107480620A (en) | | Automatic target recognition method for remote sensing images based on heterogeneous feature fusion |
CN103218831A (en) | | Video moving target classification and recognition method based on contour constraints |
CN106228166B (en) | | Character image recognition method |
CN103902968A (en) | | Pedestrian detection model training method based on the AdaBoost classifier |
CN105938565A (en) | | Color image emotion classification method based on multi-layer classifiers and Internet-image-aided training |
CN106156798B (en) | | Scene image classification method based on annular spatial pyramid and multiple kernel learning |
CN105760858A (en) | | Pedestrian detection method and apparatus based on Haar-like intermediate-layer filtering features |
CN104200228A (en) | | Seat belt recognition method and system |
CN103049760B (en) | | Target recognition method based on sparse representation of image blocks with position weighting |
CN103077399B (en) | | Biological microscopic image classification method based on an integrated cascade |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20140101 |