CN106446951A - Singular value selection-based integrated learning device - Google Patents


Info

Publication number
CN106446951A
CN106446951A (application CN201610856560.6A)
Authority
CN
China
Prior art keywords
singular
sample
sigma
alpha
svm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610856560.6A
Other languages
Chinese (zh)
Inventor
吴定雄
秦小林
张国华
张力戈
王文彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Information Technology Co Ltd of CAS
Original Assignee
Chengdu Information Technology Co Ltd of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Information Technology Co Ltd of CAS filed Critical Chengdu Information Technology Co Ltd of CAS
Priority to CN201610856560.6A priority Critical patent/CN106446951A/en
Publication of CN106446951A publication Critical patent/CN106446951A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a singular value selection-based integrated learning device (ensemble learner). The learner is realized through the following steps: normalization preprocessing is performed on the training sample set; sampling with replacement using the Bootstrap random sampling method generates M new sample sets; a partial singular value decomposition (SVD) is performed on each sample in the M new sample sets to obtain each sample's singular values and left and right singular vectors; k singular values and their corresponding left and right singular vectors are randomly extracted each time to generate a 2D SVM (support vector machine) base learner, and the M new sample sets are trained to obtain M 2D SVM base learners; and the base learners are combined according to the relative majority voting criterion to obtain the ensemble learner, which is then used to classify and identify the samples to be classified. The invention addresses the problems that arise when existing classifiers stretch matrix object data into high-dimensional vectors: large computation cost, the curse of dimensionality, loss of the data's structural information, and destruction of the data's inherent correlations.

Description

Integrated learning device based on singular value selection
Technical Field
The invention relates to the technical field of machine learning and image processing, in particular to a classification method taking tensor data of two dimensions or more as input samples, which can be used for target detection, pattern recognition and behavior recognition.
Background
With the rapid development of the internet and computer technology, the volume of information humanity has produced in recent decades rivals the sum of all information produced before. The continued growth of data has profoundly changed human work, life and thinking, and this growth shows in two ways: first, the scale of data keeps increasing; second, the structure of data grows ever more complex. Compared with traditional paper text, formats such as web pages, black-and-white images, color images, medical images, satellite remote-sensing images and videos cannot be represented by simple structures such as vectors; more dimensions are needed to represent the characteristics of these data objects, so the dimensionality of the data increases. It can therefore be said that "big data" is the keyword of the information age.
Different classification algorithms may yield different classification performance, but no single one gives good results in all applications. For classifier design, researchers in data mining, machine learning, statistics, pattern recognition and neurobiology have proposed many classification methods, such as expert systems, association rules, decision trees, Bayesian classifiers, support vector machines, neural networks and genetic algorithms; these methods have been applied in different fields and have advanced both research and industry.
Although these classification methods have achieved a measure of success in some fields, most of the learning methods mentioned represent data as vectors; for a vector-based learning algorithm to learn from tensor data of two or more dimensions, the tensor data must generally first be vectorized (flattened) and then learned with a conventional algorithm. Taking a black-and-white image as an example, simply stretching the image into a vector for processing ignores inherent structural information such as the relative positions of pixels in the original image, damages the spatio-temporal structure of the original data, and loses correlation information between data structures. If the original data is large, converting it to a vector increases the dimensionality, possibly causing the curse of dimensionality or the high-dimension small-sample problem, so the resulting classifier performs poorly.
Therefore, to solve the above problems, it is necessary to provide a 2D SVM ensemble learning method that exploits the advantages of ensemble learning to improve classifier accuracy without damaging the spatio-temporal structure of the original data.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides the integrated learner based on singular value selection, which improves the diversity among the base classifiers and obtains the integrated effect with strong generalization capability by randomly selecting partial singular values of the samples.
The technical scheme adopted by the invention for solving the technical problems is as follows: an ensemble learner based on singular value selection comprising the steps of:
step one, perform normalization preprocessing on the training sample set;
step two, sample the normalization-preprocessed training sample set with replacement using the Bootstrap random sampling method, generating M new sample sets;
step three, perform a partial SVD on each sample in the M new sample sets to obtain each sample's singular values and left and right singular vectors;
step four, randomly extract k singular values and their corresponding left and right singular vectors each time to generate a 2D SVM base learner, and train on the M new sample sets respectively to obtain M 2D SVM base classifiers;
step five, merge the base classifiers according to the relative majority voting criterion to obtain the ensemble learner, and use the obtained ensemble learner to classify and identify the samples to be classified.
Compared with the prior art, the invention has the following positive effects:
(1) The invention solves the problems caused when existing classifiers stretch matrix object data (such as images or EEG) into high-dimensional vectors: huge computation cost, the curse of dimensionality, loss of data structure information, and destruction of internal correlations.
(2) By performing a partial singular value decomposition of each sample and randomly selecting a certain number of the resulting singular values and singular vectors, the method compresses and denoises the samples to a certain degree.
(3) The invention constructs a base classifier with larger diversity through singular value selection, thereby generating integration with strong generalization capability.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
An ensemble learner based on singular value selection, as shown in fig. 1, comprising the steps of:
Step one, perform normalization preprocessing on the training sample set D = {(X_i, y_i), i = 1, …, n} to obtain the preprocessed set D'.
The preprocessing uses 0-1 normalization, a linear transformation of the original sample data that maps every element into the interval [0, 1]. The transfer function is:

$$X_i' = \frac{X_i - \mathrm{repmat}\{\min(X_i)\}}{\max(X_i) - \min(X_i)}, \qquad y_i' = y_i$$

where X_i, X_i' ∈ R^{p×q} is the i-th sample and y_i ∈ Y, Y = {C_1, C_2, …, C_N} is the class label of samples X_i and X_i'; the samples X_i and X_i' are thus represented as two-dimensional matrices. max(X_i) denotes the maximum element of training sample X_i, min(X_i) the minimum element, and repmat{min(X_i)} ∈ R^{p×q} is a matrix whose elements all equal min(X_i). Finally, all preprocessed training samples X_i' and their labels y_i are assembled into the preprocessed training sample set D'.
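A minimal NumPy sketch of this 0-1 normalization (the function name and sample values are illustrative, not from the patent):

```python
import numpy as np

def normalize_01(X):
    """0-1 normalize a 2D sample matrix X:
    X' = (X - repmat{min(X)}) / (max(X) - min(X))."""
    lo, hi = X.min(), X.max()
    if hi == lo:  # constant matrix: avoid division by zero
        return np.zeros_like(X, dtype=float)
    return (X - lo) / (hi - lo)

X = np.array([[2.0, 4.0], [6.0, 10.0]])
Xn = normalize_01(X)  # all elements now lie in [0, 1]
```

Note that the min and max are taken per sample matrix, matching the per-sample repmat{min(X_i)} in the transfer function above.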
Step two, sample the normalized training sample set with replacement using the Bootstrap random sampling method, finally generating M new sample sets {D_1, D_2, …, D_M}.
The training sample set is sampled uniformly at random with replacement to obtain new sample sets of the same size as the original. Because sampling is uniform and with replacement, the probability that a given sample is never selected in n draws is

$$p = \left(1 - \frac{1}{n}\right)^{n} \;\xrightarrow{\,n \to \infty\,}\; \frac{1}{e} \approx 0.368,$$

so each base learner uses only about 63.2% of the samples in the initial training set. The remaining roughly 36.8% can serve as a validation set for an "out-of-bag" (OOB) estimate of the learner's generalization performance. OOB estimation has been proven unbiased, so the ensemble learning algorithm needs no further cross-validation or separate test set to obtain an unbiased estimate of the test error.
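The (1 - 1/n)^n → 1/e ≈ 0.368 claim and the bootstrap step can be checked numerically; a sketch under illustrative settings (n, M and the seed are arbitrary choices):

```python
import math
import random

n = 1000                      # training-set size (illustrative)
p_never = (1 - 1 / n) ** n    # probability a given sample is never drawn
# p_never approaches 1/e ≈ 0.368 as n grows

random.seed(0)
M = 5                         # number of bootstrap sample sets (illustrative)
datasets = []
for _ in range(M):
    idx = [random.randrange(n) for _ in range(n)]  # sample with replacement
    datasets.append(idx)

# Empirical out-of-bag fraction for each bootstrap set: close to 0.368
oob_fracs = [1 - len(set(idx)) / n for idx in datasets]
```

Each `idx` lists which original samples a base learner trains on; the indices absent from it form that learner's out-of-bag validation set.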
Step three, perform a partial SVD on each sample in the sample sets to obtain each sample's singular values and left and right singular vectors:
(1) First, compute the full SVD of sample X_i:

$$X_i = U \Sigma V^{T},$$

where X_i ∈ R^{p×q} is a two-dimensional matrix, U ∈ R^{p×p} is the matrix of left singular vectors of X_i, Σ ∈ R^{p×q} is the matrix of singular values of X_i, and V^T ∈ R^{q×q} is the matrix of right singular vectors of X_i;
(2) In most cases the larger singular values of a matrix capture its essential information well, so the first r (largest) singular values are used to approximately describe sample X_i, which compresses the matrix to some extent. The partial singular value decomposition is:

$$X_i \approx \sum_{p=1}^{r} \sigma_{ip}\, u_{ip}\, v_{ip}^{T},$$

where σ_{ip}, u_{ip}, v_{ip} are the p-th singular value of X_i and its corresponding left and right singular vectors.
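A sketch of this rank-r partial SVD with NumPy; the matrix size and truncation level r are illustrative choices:

```python
import numpy as np

def partial_svd(X, r):
    """Return the r largest singular values, their left/right singular
    vectors, and the rank-r approximation of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # s sorted descending
    U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
    # Rank-r reconstruction: sum over p of sigma_p * u_p v_p^T
    X_approx = U_r @ np.diag(s_r) @ Vt_r
    return s_r, U_r, Vt_r, X_approx

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
s_r, U_r, Vt_r, X_approx = partial_svd(X, r=3)
```

With r equal to the full rank the reconstruction is exact; smaller r trades fidelity for compression, as the description notes.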
Step four, randomly extract k singular values and their corresponding left and right singular vectors each time to generate a 2D SVM base learner:
4.1 The binary classification problem
Given a training data set D = {(X_i, y_i), i = 1, …, n}, where X_i ∈ R^{p×q} is the i-th input sample and y_i ∈ {-1, 1} is the class label of X_i; the input sample X_i is represented as a matrix.
4.1.1 The 2D SVM (two-dimensional support vector machine) is defined as follows:

$$\min_{W, b, \xi_i} J(W, b, \xi_i) = \frac{1}{2}\|W\|_F^2 + C\sum_{i=1}^{n}\xi_i \quad (3)$$
$$\text{s.t.}\; y_i(\langle W, X_i\rangle + b) \ge 1 - \xi_i,\; i = 1, \ldots, n \quad (4)$$
$$\xi_i \ge 0,\; i = 1, \ldots, n \quad (5)$$

where the normal matrix W determines the direction of the classification hyperplane and b is the displacement (bias) term.
4.1.2 By the Lagrange multiplier method, the Lagrangian function of (3)-(5) is:

$$L(W, b, \xi, \alpha, \beta) = \frac{1}{2}\|W\|_F^2 + C\sum_{i=1}^{n}\xi_i - \sum_{i=1}^{n}\alpha_i\big(y_i(\langle W, X_i\rangle + b) - 1 + \xi_i\big) - \sum_{i=1}^{n}\beta_i\xi_i \quad (6)$$

where α_i ≥ 0, β_i ≥ 0 are the Lagrange multipliers.
Setting the partial derivatives of L(W, b, ξ, α, β) with respect to W, b and ξ_i to zero gives:

$$W = \sum_{i=1}^{n}\alpha_i y_i X_i \quad (7)$$
$$\sum_{i=1}^{n}\alpha_i y_i = 0 \quad (8)$$
$$C = \alpha_i + \beta_i,\; i = 1, \ldots, n \quad (9)$$

Substituting (7)-(9) into (6) yields the dual problem of (3)-(5):

$$\max_{\alpha} \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j}^{n}\alpha_i\alpha_j y_i y_j\langle X_i, X_j\rangle \quad (10)$$
$$\text{s.t.}\; \sum_{i=1}^{n}\alpha_i y_i = 0 \quad (11)$$
$$0 \le \alpha_i \le C,\; i = 1, \ldots, n \quad (12)$$

where ⟨X_i, X_j⟩ is the inner product of X_i and X_j.
4.1.3 When the input samples X_i are vectors, the optimization model (3)-(5) degenerates to the standard support vector machine. If ⟨X_i, X_j⟩ is computed from the raw form of the input samples, the optimal solution of (3)-(5) coincides with that of the linear support vector machine. Because of the "curse of dimensionality" and the small-sample problem, the support vector machine cannot handle matrix samples effectively, and the optimization model (3)-(5) suffers from the same issue. Specifically, the dual of (3)-(5) depends only on inner products between sample data, and the inner product ⟨X_i, X_j⟩ in (10) does not exploit the structural information of the sample data.
Considering that the SVD of a matrix better reflects the structural information and intrinsic correlations of matrix data, the SVD is used in place of the raw matrix input, improving the computation of the matrix inner product. On the one hand, this can improve the recognition ability of the learning machine; on the other hand, it can speed up its learning.
4.1.4 Following step three, a partial SVD of each sample yields r singular values and their corresponding left and right singular vectors. From these, k singular values and their corresponding left and right singular vectors, {σ_{ip}}, {u_{ip}} and {v_{ip}} for p = 1, …, k, are selected at random. The inner product of matrices X_i and X_j is then computed as:

$$\langle X_i, X_j\rangle \approx \sum_{p=1}^{k}\sum_{q=1}^{k}\sigma_{ip}\sigma_{jq}\langle u_{ip}, u_{jq}\rangle\langle v_{ip}, v_{jq}\rangle \quad (13)$$

Substituting (13) into (10) gives:

$$\max_{\alpha} \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j}^{n}\sum_{p=1}^{k}\sum_{q=1}^{k}\alpha_i\alpha_j y_i y_j\sigma_{ip}\sigma_{jq}\langle u_{ip}, u_{jq}\rangle\langle v_{ip}, v_{jq}\rangle \quad (14)$$
$$\text{s.t.}\; \sum_{i=1}^{n}\alpha_i y_i = 0 \quad (15)$$
$$0 \le \alpha_i \le C,\; i = 1, \ldots, n \quad (16)$$

As (7) shows, the weight matrix W of the classification hyperplane can be expressed as a linear combination of training samples in two-dimensional space. The optimization model (14)-(16) is the 2D-SVM, which can be viewed as the extension of the linear support vector machine to two-dimensional matrices, so (14)-(16) can be solved with the SMO algorithm.
The classification decision function f(X) of the 2D SVM base learner is:

$$h(X) = f(X) = \mathrm{sign}\left(\sum_{i=1}^{n}\sum_{p=1}^{k}\sum_{q=1}^{k}\alpha_i y_i\,\sigma_{ip}\sigma_{q}\,\langle u_{ip}, u_{q}\rangle\langle v_{ip}, v_{q}\rangle + b\right)$$

where σ_{ip}, u_{ip}, v_{ip} are the singular values and corresponding left and right singular vectors of X_i, and σ_q, u_q, v_q those of the test sample X.
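Equation (13) replaces the raw Frobenius inner product ⟨X_i, X_j⟩ = tr(X_i^T X_j) with a double sum over singular triplets; with all triplets kept the two are equal. A NumPy sketch verifying this identity (matrix sizes and the seed are illustrative):

```python
import numpy as np

def svd_inner_product(si, Ui, Vti, sj, Uj, Vtj):
    """<Xi, Xj> = sum_{p,q} s_ip s_jq <u_ip, u_jq> <v_ip, v_jq>  (eq. 13),
    given singular values s and matrices of left (U) / right (Vt) vectors."""
    total = 0.0
    for p in range(len(si)):
        for q in range(len(sj)):
            total += (si[p] * sj[q]
                      * (Ui[:, p] @ Uj[:, q])      # <u_ip, u_jq>
                      * (Vti[p, :] @ Vtj[q, :]))   # <v_ip, v_jq>
    return total

rng = np.random.default_rng(1)
Xi = rng.standard_normal((5, 4))
Xj = rng.standard_normal((5, 4))
Ui, si, Vti = np.linalg.svd(Xi, full_matrices=False)
Uj, sj, Vtj = np.linalg.svd(Xj, full_matrices=False)

full = svd_inner_product(si, Ui, Vti, sj, Uj, Vtj)  # all triplets kept
exact = np.trace(Xi.T @ Xj)                         # Frobenius inner product
```

Keeping only k < r randomly chosen triplets turns this exact identity into the randomized approximation the patent uses to diversify its base learners.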
4.2 The multi-class problem for the 2D SVM
A "one-versus-one" (OvO) strategy is used, as follows:
Given a data set D = {(X_i, y_i), i = 1, …, n} with y_i ∈ {C_1, C_2, …, C_N}, OvO pairs the N classes two by two, producing N(N-1)/2 binary classification tasks. For example, to discriminate classes C_i and C_j, OvO trains a classifier that treats the C_i samples in the data set as positive examples and the C_j samples as negative examples. In the testing stage, a new sample is submitted to all classifiers simultaneously, yielding N(N-1)/2 classification results, and the final result is produced by voting: the most frequently predicted class is taken as the final classification.
The above describes the 2D SVM; its definition and derivation parallel those of the standard support vector machine (SVM).
The difference lies in the dimensionality of the data handled by ⟨X_i, X_j⟩: the SVM computes inner products of vector samples, whereas the 2D SVM proposed by the invention computes inner products of matrix samples (such as a matrix of image pixels).
However, two matrices cannot be given an inner product directly, so before the inner product is computed each matrix undergoes SVD and k of the first r singular values are randomly selected (analogous to a random forest, this yields base classifiers with good diversity); the selected singular values and singular vectors then replace the original matrix in the inner product. This (1) avoids the structural damage caused by vectorizing the matrix (for example, two pixels that are vertically adjacent in an image lose that positional relationship after vectorization) and (2) speeds up the inner product computation.
Finally, for the M new sample sets {D_1, …, D_M} obtained by the Bootstrap random sampling of step two, the above training method is applied to each sample set, yielding M 2D SVM base classifiers {h_1, h_2, …, h_M}.
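A sketch of the one-versus-one voting described in section 4.2, with stand-in pairwise classifiers in place of trained 2D SVMs (all names and the toy nearest-center rule are illustrative):

```python
from itertools import combinations
from collections import Counter

def ovo_predict(pairwise, classes, x):
    """pairwise[(Ci, Cj)] is a binary classifier returning Ci or Cj;
    the class predicted most often across all N(N-1)/2 pairs wins."""
    votes = Counter(pairwise[(ci, cj)](x) for ci, cj in combinations(classes, 2))
    return votes.most_common(1)[0][0]

classes = ["C1", "C2", "C3"]
# Toy pairwise classifiers: each pair votes for the class whose center
# (a scalar here) is nearer to the input x.
centers = {"C1": 0.0, "C2": 5.0, "C3": 10.0}
pairwise = {
    (ci, cj): (lambda x, a=ci, b=cj:
               a if abs(x - centers[a]) <= abs(x - centers[b]) else b)
    for ci, cj in combinations(classes, 2)
}
pred = ovo_predict(pairwise, classes, 4.2)
```

With N = 3 classes, three pairwise classifiers vote; in a real deployment each would be a 2D SVM trained on the two classes of its pair.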
Step five, merge the base classifiers according to the relative majority voting criterion to obtain the ensemble learner:
In this step the base classifiers are merged into a stronger classifier, the ensemble learner. The relative majority voting criterion combines the base classifiers as follows:

$$H(X) = \underset{y \in Y}{\arg\max}\; \sum_{t=1}^{M} \mathbb{I}\big(y = h_t(X)\big)$$

where I(·) is the indicator function. Finally, the obtained ensemble learner is used to classify and identify the samples to be classified.

Claims (6)

1. An ensemble learner based on singular value selection, characterized by comprising the following steps:
step one, perform normalization preprocessing on the training sample set;
step two, sample the normalization-preprocessed training sample set with replacement using the Bootstrap random sampling method, generating M new sample sets;
step three, perform a partial SVD on each sample in the M new sample sets to obtain each sample's singular values and left and right singular vectors;
step four, randomly extract k singular values and their corresponding left and right singular vectors each time to generate a 2D SVM base learner, and train on the M new sample sets respectively to obtain M 2D SVM base classifiers;
step five, merge the base classifiers according to the relative majority voting criterion to obtain the ensemble learner, and use the obtained ensemble learner to classify and identify the samples to be classified.
2. The singular value selection-based ensemble learner of claim 1, wherein: the normalization preprocessing of the training sample set in step one comprises the following steps:
S101, preprocessing each sample in the training sample set separately to obtain the normalized training sample X_i' and its class label y_i':

$$X_i' = \frac{X_i - \mathrm{repmat}\{\min(X_i)\}}{\max(X_i) - \min(X_i)}, \qquad y_i' = y_i$$

where max(X_i) denotes the maximum element of training sample X_i, min(X_i) the minimum element, and repmat{min(X_i)} ∈ R^{p×q} is a matrix whose elements are all min(X_i);
S102, constructing the preprocessed training sample set from all preprocessed training samples X_i' and their class labels y_i'.
3. The singular value selection-based ensemble learner of claim 1, wherein: the partial SVD of each sample in the M new sample sets in step three is performed as follows:
S301, the SVD of sample X_i has the form

$$X_i = U \Sigma V^{T},$$

where X_i ∈ R^{p×q} is a two-dimensional matrix, U ∈ R^{p×p} is the matrix of left singular vectors of X_i, Σ ∈ R^{p×q} is the matrix of singular values of X_i, and V^T ∈ R^{q×q} is the matrix of right singular vectors of X_i;
S302, the first r largest singular values are used to approximately describe sample X_i; the partial singular value decomposition has the form

$$X_i \approx \sum_{p=1}^{r} \sigma_{ip}\, u_{ip}\, v_{ip}^{T},$$

where σ_{ip}, u_{ip}, v_{ip} are the p-th singular value of X_i and its corresponding left and right singular vectors.
4. The singular value selection-based ensemble learner of claim 3, wherein: the 2D SVM base learner in step four is generated as follows:
S401, for the binary classification task, a training data set D = {(X_i, y_i), i = 1, …, n} is given, where X_i ∈ R^{p×q} is the i-th input sample and y_i ∈ {-1, 1} is the class label of X_i;
S402, the 2D SVM is defined as:

$$\min_{W, b, \xi_i} J(W, b, \xi_i) = \frac{1}{2}\|W\|_F^2 + C\sum_{i=1}^{n}\xi_i$$
$$\text{s.t.}\; y_i(\langle W, X_i\rangle + b) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1, \ldots, n$$

where W is the normal matrix and b is the displacement term;
the dual problem of the 2D SVM, obtained by the Lagrange multiplier method, is:

$$\max_{\alpha} \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j}^{n}\alpha_i\alpha_j y_i y_j\langle X_i, X_j\rangle$$
$$\text{s.t.}\; \sum_{i=1}^{n}\alpha_i y_i = 0,\quad 0 \le \alpha_i \le C,\; i = 1, \ldots, n \quad (1)$$

where ⟨X_i, X_j⟩ is the inner product of X_i ∈ R^{p×q} and X_j ∈ R^{p×q}, C = α_i + β_i, i = 1, …, n, and α_i ≥ 0, β_i ≥ 0 are the Lagrange multipliers;
S403, k singular values and their corresponding left and right singular vectors, {σ_{ip}}, {u_{ip}} and {v_{ip}} for p = 1, …, k, are randomly selected from the r singular values and corresponding left and right singular vectors of each sample; the inner product of matrices X_i and X_j is then computed as:

$$\langle X_i, X_j\rangle \approx \sum_{p=1}^{k}\sum_{q=1}^{k}\sigma_{ip}\sigma_{jq}\langle u_{ip}, u_{jq}\rangle\langle v_{ip}, v_{jq}\rangle \quad (2)$$

substituting formula (2) into formula (1) gives the final form of the 2D SVM:

$$\max_{\alpha} \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j}^{n}\sum_{p=1}^{k}\sum_{q=1}^{k}\alpha_i\alpha_j y_i y_j\sigma_{ip}\sigma_{jq}\langle u_{ip}, u_{jq}\rangle\langle v_{ip}, v_{jq}\rangle$$
$$\text{s.t.}\; \sum_{i=1}^{n}\alpha_i y_i = 0,\quad 0 \le \alpha_i \le C,\; i = 1, \ldots, n$$

the classification decision function f(X) of the 2D SVM base learner is:

$$h(X) = f(X) = \mathrm{sign}\left(\sum_{i=1}^{n}\sum_{p=1}^{k}\sum_{q=1}^{k}\alpha_i y_i\,\sigma_{ip}\sigma_{q}\,\langle u_{ip}, u_{q}\rangle\langle v_{ip}, v_{q}\rangle + b\right)$$

where σ_{ip}, u_{ip}, v_{ip} and σ_q, u_q, v_q are respectively the singular values of X_i and of X with their corresponding left and right singular vectors;
S404, for the 2D SVM multi-class task, a one-versus-one strategy is used for the classification decision.
5. The singular value selection-based ensemble learner of claim 4, wherein: in step S404 the one-versus-one classification decision is made as follows: given a data set D = {(X_i, y_i), i = 1, …, n}, X_i ∈ R^{p×q}, y_i ∈ {C_1, C_2, …, C_N}, the N classes are paired two by two, producing N(N-1)/2 binary classification tasks.
6. The singular value selection-based ensemble learner of claim 4, wherein: the base classifiers are merged in step five according to the relative majority voting criterion as follows:

$$y(X) = H(X) = \underset{y \in Y}{\arg\max}\; \sum_{t=1}^{M} \mathbb{I}\big(y = h_t(X)\big).$$
CN201610856560.6A 2016-09-28 2016-09-28 Singular value selection-based integrated learning device Pending CN106446951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610856560.6A CN106446951A (en) 2016-09-28 2016-09-28 Singular value selection-based integrated learning device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610856560.6A CN106446951A (en) 2016-09-28 2016-09-28 Singular value selection-based integrated learning device

Publications (1)

Publication Number Publication Date
CN106446951A 2017-02-22

Family

ID=58169509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610856560.6A Pending CN106446951A (en) 2016-09-28 2016-09-28 Singular value selection-based integrated learning device

Country Status (1)

Country Link
CN (1) CN106446951A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220346A (en) * 2017-05-27 2017-09-29 荣科科技股份有限公司 A kind of higher-dimension deficiency of data feature selection approach
CN107220346B (en) * 2017-05-27 2021-04-30 荣科科技股份有限公司 High-dimensional incomplete data feature selection method
CN108615024A (en) * 2018-05-03 2018-10-02 厦门大学 A kind of EEG signal disaggregated model based on genetic algorithm and random forest
WO2020056621A1 (en) * 2018-09-19 2020-03-26 华为技术有限公司 Learning method and apparatus for intention recognition model, and device
US12079579B2 (en) 2018-09-19 2024-09-03 Huawei Technologies Co., Ltd. Intention identification model learning method, apparatus, and device
CN110163252A (en) * 2019-04-17 2019-08-23 平安科技(深圳)有限公司 Data classification method and device, electronic equipment, storage medium
CN110163252B (en) * 2019-04-17 2023-11-24 平安科技(深圳)有限公司 Data classification method and device, electronic equipment and storage medium
CN111767803A (en) * 2020-06-08 2020-10-13 北京理工大学 Identification method for anti-target attitude sensitivity of synthetic extremely-narrow pulse radar
CN111767803B (en) * 2020-06-08 2022-02-08 北京理工大学 Identification method for anti-target attitude sensitivity of synthetic extremely-narrow pulse radar
CN113095994A (en) * 2021-04-15 2021-07-09 河北工程大学 Robust digital image blind watermarking method based on support vector machine and singular value decomposition

Similar Documents

Publication Publication Date Title
Boughida et al. A novel approach for facial expression recognition based on Gabor filters and genetic algorithm
CN106446951A (en) Singular value selection-based integrated learning device
Basly et al. CNN-SVM learning approach based human activity recognition
Zheng et al. A support vector machine classifier with automatic confidence and its application to gender classification
US8849790B2 (en) Rapid iterative development of classifiers
Ma et al. Discriminant analysis in correlation similarity measure space
Sun et al. Facial expression recognition based on a hybrid model combining deep and shallow features
Liu et al. Hypergraph with sampling for image retrieval
Hossain et al. Fine-grained image analysis for facial expression recognition using deep convolutional neural networks with bilinear pooling
Bargshady et al. The modeling of human facial pain intensity based on Temporal Convolutional Networks trained with video frames in HSV color space
Soula et al. A novel incremental Kernel Nonparametric SVM model (iKN-SVM) for data classification: An application to face detection
Noroozi et al. Semi-supervised deep representation learning for multi-view problems
Simao et al. Improving novelty detection with generative adversarial networks on hand gesture data
Bose et al. In-situ recognition of hand gesture via Enhanced Xception based single-stage deep convolutional neural network
Zhao et al. Classification and saliency detection by semi-supervised low-rank representation
CN104376308A (en) Human action recognition method based on multitask learning
Fan et al. Fast recognition of multi-view faces with feature selection
Zeng et al. Maximum margin classification based on flexible convex hulls
Mehta et al. Deep convolutional neural network-based effective model for 2D ear recognition using data augmentation
Zilu et al. Facial expression recognition based on NMF and SVM
Alenazy et al. An automatic facial expression recognition system employing convolutional neural network with multi-strategy gravitational search algorithm
Le et al. On incrementally using a small portion of strong unlabeled data for semi-supervised learning algorithms
Horn et al. Predicting pairwise relations with neural similarity encoders
Anil kumar et al. Emotion Recognition from Facial Biometric System Using Deep Convolution Neural Network (D-CNN)
Abuhammad et al. Emotional faces in the wild: Feature descriptors for emotion classification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170222

RJ01 Rejection of invention patent application after publication