KR102043960B1 - Method and systems of face expression features classification robust to variety of face image appearance - Google Patents


Info

Publication number
KR102043960B1
Authority
KR
South Korea
Prior art keywords
facial
image
facial expression
expression
class
Prior art date
Application number
KR1020150046083A
Other languages
Korean (ko)
Other versions
KR20160053749A (en)
Inventor
노용만
이승호
Original Assignee
한국과학기술원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국과학기술원
Publication of KR20160053749A
Application granted
Publication of KR102043960B1

Classifications

    • G06K9/00221
    • G06K9/00268
    • G06K9/00288

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a method and system for classifying facial expression features. The facial expression feature classification method, performed in a facial expression feature classification system for facial expression recognition, comprises: generating, for a query face image, a change image in each facial expression class corresponding to the intra-class variation components, using training face images organized by facial expression class; calculating the image difference between the change image in the facial expression class and the query face image and defining it as a facial feature; and classifying the facial feature into a specific facial expression class by applying sparse representation, thereby determining the facial expression class corresponding to the query face image. By using a facial feature extraction and classification method that removes intra-class variation components unrelated to the facial expression appearing in the query face image, stable facial expression recognition performance is achieved even when the query face image contains diverse variations.

Description

METHOD AND SYSTEMS OF FACE EXPRESSION FEATURES CLASSIFICATION ROBUST TO VARIETY OF FACE IMAGE APPEARANCE

The present invention relates to a method and system for classifying facial expression features, and more particularly, to a method and a system for feature extraction and classification of facial features for facial expression recognition.

Facial expression recognition extracts facial features related to facial expressions from a query face image and classifies these features to determine which facial expression (e.g., neutral, smile, or surprise) the query face image corresponds to. Many facial expression recognition methods use a facial feature extraction method that extracts texture information corresponding to contours or wrinkles, either over the entire face region or locally.

In this case, facial expression recognition is performed by measuring the similarity between the query face image and registered face images in units of blocks, comparing the face of the user input through a camera with pre-registered face images. In this regard, Korean Patent Laid-Open Publication No. 2009-0021279 (published on September 27, 2010), which is a prior art, discloses a method that can improve face recognition performance even when the number of registered face images is small, as in a robot environment, by judging similarity of the input face image in predetermined block units and thereby increasing the diversity of registered face images through various combinations.

However, if the person corresponding to the query face image does not exist in the training face images, confusion occurs between the facial identity of the person and the facial expression, and the accuracy of facial expression recognition deteriorates drastically.

To address this drawback, facial expression recognition methods have been studied that remove person-specific information by calculating the image difference between the query face image and the expressionless (neutral) face image of the same person.

However, this facial expression recognition method of removing person-specific information has two problems. First, in practical facial expression recognition, the person corresponding to the query face image frequently does not exist in the training face images, in which case the method is not applicable. Second, when the lighting conditions differ between the query face image and the expressionless face image, recognition performance deteriorates due to the illumination change when the image difference is calculated.

Korean Patent Publication No. 2009-0021279 (published Sep. 27, 2010) discloses a block-based face recognition method and apparatus thereof.

An embodiment of the present invention provides a method and system for classifying facial expression features that is robust to variations in the face image unrelated to the facial expression present in the face image. However, the technical problem to be achieved by the present embodiment is not limited to the above, and other technical problems may exist.

As a technical means for achieving the above technical problem, according to an aspect of the present invention, a facial expression feature classification method performed in a facial expression feature classification system for facial expression recognition comprises: generating, for a query face image, a change image in each facial expression class corresponding to the intra-class variation components, using training face images organized by facial expression class; calculating the image difference between the change image in the facial expression class and the query face image and defining it as a facial feature; and classifying the facial feature into a specific facial expression class by applying sparse representation to determine the facial expression class corresponding to the query face image.

Here, the change image in the facial expression class may be obtained through approximation of the query face image using a linear combination of the training face images.

In addition, the change image in the facial expression class may be obtained by applying a regularized least square method to obtain each weight vector representing the weights of the linear combination of each class's training face images, and by using each weight vector and the corresponding training face images.

The determining of the facial expression class may include: obtaining a plurality of sparse coefficient vectors for expressing the facial features of the query face image by using a dictionary composed of facial features of the training face images; fusing the plurality of sparse coefficient vectors to obtain a fused sparse coefficient vector; and finding the facial expression class in which the sparse coefficients are most concentrated in the fused sparse coefficient vector, thereby determining the facial expression class of the query face image.

In addition, the dictionary may define as many facial features as the number of expressions, using the image difference between the training face image and the change image in the expression class.

According to another aspect of the present invention, a facial expression feature classification system for facial expression recognition comprises: a change image generator that generates, for a query face image, a change image in each facial expression class corresponding to the intra-class variation component, using training face images organized by facial expression class; an image difference calculator that calculates the image difference between the change image in the expression class and the query face image and defines it as a facial feature; and an image classification unit that classifies the facial feature into a specific facial expression class by applying sparse representation to determine the facial expression class corresponding to the query face image.

Here, the change image in the facial expression class may be obtained through approximation of the query face image using a linear combination of the training face images.

In addition, the change image in the facial expression class may be obtained by applying a regularized least square method to obtain each weight vector representing the weights of the linear combination of each class's training face images, and by using each weight vector and the corresponding training face images.

The image classification unit may include: a sparse representation applying unit that obtains a plurality of sparse coefficient vectors for expressing the facial features of the query face image using a dictionary composed of facial features of the training face images; a fusion unit that fuses the plurality of sparse coefficient vectors to obtain a fused sparse coefficient vector; and a classification unit that finds the expression class in which the sparse coefficients are most concentrated in the fused sparse coefficient vector and determines the expression class of the query face image.

In addition, the dictionary may define as many facial features as the number of expressions, using the image difference between the training face image and the change image in the expression class.

According to an embodiment of the present invention, in order to achieve facial expression recognition robust to variations in the face image that are unrelated to the facial expression, person-specific information is removed even when an expressionless face image of the person in the query face image does not exist among the training face images, and the influence of illumination changes in the query face image is reduced. This overcomes the problem of the prior art, in which recognition performance is degraded by the diversity of query face images.

As such, by using a facial feature extraction and classification method that removes intra-class variation components irrelevant to the facial expression appearing in the query face image, stable facial expression recognition performance is exhibited even when various changes exist in the query face image.

FIG. 1 is a block diagram of a facial expression feature classification system according to an embodiment of the present invention.
FIG. 2 is a detailed block diagram of an image classification unit constituting the facial expression feature classification system according to an exemplary embodiment of the present invention.
FIG. 3 is a flowchart illustrating a facial expression feature classification method performed by the facial expression feature classification system according to an exemplary embodiment of the present invention.
FIG. 4 exemplarily illustrates the process of extracting facial features in the facial expression feature classification method performed by the facial expression feature classification system according to an embodiment of the present invention.
FIGS. 5 and 6 exemplarily illustrate change images in facial expression classes generated using a query face image and training face images by the facial expression feature classification method performed by the facial expression feature classification system according to an exemplary embodiment of the present invention.
FIG. 7 exemplarily illustrates the query face image $q$, the change images $h_i^q$ in the facial expression classes, and the facial features $y_i^q$ used in the facial expression feature classification method performed by the facial expression feature classification system according to an embodiment of the present invention.

DETAILED DESCRIPTION Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement the present invention. As those skilled in the art will realize, the described embodiments may be modified in various ways without departing from the spirit or scope of the present invention. In the drawings, parts irrelevant to the description are omitted in order to describe the present invention clearly, and like reference numerals designate like parts throughout the specification.

Throughout the specification, when a part is said to be "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. In addition, when a part is said to "include" a certain component, this means that it may further include other components rather than excluding them, unless specifically stated otherwise, and it does not exclude the possibility of the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of a facial expression feature classification system (apparatus) according to an embodiment of the present invention.

As shown in FIG. 1, the facial expression feature classification system 100 according to the embodiment includes a change image generator 110, an image difference calculator 120, and an image classification unit 130.

The change image generator 110 generates, for the query face image, a change image in each expression class corresponding to the intra-class variation component, using the training face images organized by expression class.

Here, the change image in the facial expression class may be obtained by approximating the query face image with a linear combination of the training face images. Specifically, a regularized least square method is applied to obtain each weight vector representing the weights of the linear combination of each class's training face images, and the change image in the facial expression class is generated using each weight vector and the corresponding training face images.

The image difference calculator 120 calculates an image difference between the change image in the expression class and the query face image and defines it as a facial feature.

The image classification unit 130 classifies the facial feature into a specific facial expression class by applying sparse representation to determine the facial expression class to which the query face image corresponds. The detailed configuration of the image classification unit 130 is illustrated in FIG. 2.

FIG. 2 is a detailed block diagram of the image classification unit 130 constituting the facial expression feature classification system 100 according to an exemplary embodiment of the present invention.

As shown therein, the image classification unit 130 includes a sparse representation applying unit 131, a fusion unit 133, and a classification unit 135.

The sparse representation applying unit 131 obtains a plurality of sparse coefficient vectors for expressing the facial features of the query face image by using a dictionary composed of the facial features of the training face images.

Here, the dictionary defines as many facial features as the number of facial expressions, using the image difference between the training face image and the change image in the facial expression class.

The fusion unit 133 fuses a plurality of sparse coefficient vectors to obtain a fused sparse coefficient vector.

The classifier 135 finds an expression class in which the sparse coefficient is most concentrated in the fused sparse coefficient vector and determines the expression class of the query face image.

FIG. 3 is a flowchart illustrating a facial expression feature classification method performed by the facial expression feature classification system (apparatus) according to an embodiment of the present invention.

As shown in FIG. 3, the facial expression feature classification method according to the embodiment includes generating change images in the expression classes, corresponding to the intra-class variation components, by using training face images organized by expression class with respect to the input query face image (S201 to S203).

Subsequently, the method may further include calculating an image difference between the change image in the expression class and the query face image and defining it as a facial feature (S205).

In operations S207 to S211, the facial feature is classified into a specific facial expression class by applying sparse representation to determine the facial expression class corresponding to the query face image.

Finally, the method may further include outputting a label of the determined facial expression class (S213).

Hereinafter, a method of classifying facial expression features by the facial expression feature classification system (apparatus) according to an embodiment of the present invention will be described in more detail with reference to FIGS. 1 to 7.

First, when the query face image is input to the facial expression feature classification system 100 (S201), the change image generator 110 generates one intra-class variation image for each $i$-th ($i = 1, \ldots, C$) facial expression class, using the training face images organized by expression class with respect to the query face image. Through this process, $C$ change images corresponding to the total of $C$ facial expression classes are obtained.

For this purpose, the query face image is defined as $q \in \mathbb{R}^N$, and the entire training face image set is defined as $\Phi = [\Phi_1, \Phi_2, \ldots, \Phi_C] \in \mathbb{R}^{N \times M}$, where $N$ and $M$ are the dimensionality of a vectorized face image and the total number of training face images, respectively. Also, $\Phi_i = [t_{i,1}, t_{i,2}, \ldots, t_{i,M_i}] \in \mathbb{R}^{N \times M_i}$ is the training face image set of the $i$-th facial expression class, where $t_{i,j} \in \mathbb{R}^N$ is the $j$-th training face image in the $i$-th facial expression class.

A change image $h_i^q$ in the facial expression class is generated using $\Phi_i$ to extract the facial features of the query face image $q$ (S203). $h_i^q$ is obtained by approximating $q$ with a linear combination of the training face images contained in $\Phi_i$. To obtain the weight vector $w_i^q = [w_1, w_2, \ldots, w_{M_i}]^T \in \mathbb{R}^{M_i}$, which holds the weight of each training face image, the regularized least square method is applied, i.e., the optimization problem defined in Equation 1 must be solved:

$$\hat{w}_i^q = \arg\min_{w_i^q} \|q - \Phi_i w_i^q\|_2^2 + \lambda \|w_i^q\|_2^2 \quad \text{(Equation 1)}$$

where $\|\cdot\|_2$ is the L2 norm of a vector, $\|q - \Phi_i w_i^q\|_2^2$ is the reconstruction error, and $\lambda \|w_i^q\|_2^2$ is a regularization term that stabilizes the solution $\hat{w}_i^q$.

Using the weight vector $\hat{w}_i^q$ obtained from Equation 1, the change image $h_i^q$ in the facial expression class is expressed as in Equation 2:

$$h_i^q = \Phi_i \hat{w}_i^q \quad \text{(Equation 2)}$$

Here, each element of $\hat{w}_i^q$ represents the weight of the corresponding training face image in the linear combination. Since $h_i^q$ is generated using the training face image set $\Phi_i$ of the $i$-th facial expression class, it represents a facial expression corresponding to the $i$-th facial expression class.
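As a concrete illustration of Equations 1 and 2, the regularized least squares problem admits the closed-form ridge solution $\hat{w}_i^q = (\Phi_i^T \Phi_i + \lambda I)^{-1} \Phi_i^T q$. The NumPy sketch below is a minimal rendering under that assumption; the regularization weight `lam` is illustrative, as the patent does not specify a value for $\lambda$.

```python
import numpy as np

def intra_class_variation_image(q, Phi_i, lam=0.01):
    """Approximate the query image q (shape (N,)) by a linear combination of
    the columns of Phi_i (shape (N, M_i)), i.e. solve Equation 1 in closed
    form, then synthesize the change image of Equation 2.
    lam is an assumed regularization weight; the patent gives no value."""
    M_i = Phi_i.shape[1]
    # Ridge solution of Equation 1: w = (Phi^T Phi + lam I)^(-1) Phi^T q
    w = np.linalg.solve(Phi_i.T @ Phi_i + lam * np.eye(M_i), Phi_i.T @ q)
    # Equation 2: the intra-class variation image is the weighted combination
    h = Phi_i @ w
    return h, w
```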

Subsequently, the image difference calculator 120 calculates the image difference between the change image in the $i$-th ($i = 1, \ldots, C$) facial expression class and the query face image and defines it as a facial feature. Through this process, $C$ facial features corresponding to the total of $C$ facial expression classes are obtained (S205).

FIGS. 5 and 6 show, for i = Neutral and i = Surprise respectively, the query face image $q$ (leftmost column), the three training face images given the highest weights when generating $h_i^q$ (middle three columns), and the generated change image in the facial expression class (rightmost column). Comparing the generated $h_i^q$ with the query face image $q$, it can be seen that the illumination and the unique facial identity of the person are similar to each other.

Using the change images $h_i^q$ ($i = 1, \ldots, C$) in the $C$ facial expression classes generated from the total training face image set $\Phi$, the facial feature $y_i^q$ is defined as in Equation 3:

$$y_i^q = q - h_i^q \quad \text{(Equation 3)}$$

FIG. 7 illustrates the facial features $y_i^q$ obtained using Equation 3 for i = Neutral, i = Smile, and i = Surprise. Because $h_i^q$ and $q$ are similar in illumination and in the unique facial identity of the person, the facial feature $y_i^q$ reduces the influence of components unrelated to facial expression and emphasizes the difference between the expression in $q$ and the reference expression represented by $h_i^q$.
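Continuing the NumPy sketch, the $C$ facial features of Equation 3 follow directly; the list of per-class training matrices `Phi_by_class` is an assumed data layout, not something the patent prescribes.

```python
def facial_features(q, Phi_by_class, lam=0.01):
    """Equation 3: one facial feature y_i = q - h_i per expression class.
    Phi_by_class is a list of C arrays, each of shape (N, M_i)."""
    feats = []
    for Phi_i in Phi_by_class:
        h_i, _ = intra_class_variation_image(q, Phi_i, lam)
        feats.append(q - h_i)   # image difference emphasizing the expression
    return feats
```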

Next, the image classification unit 130 classifies the facial features into a specific facial expression class by applying sparse representation to determine the facial expression class to which the query face image corresponds. That is, the facial feature and the dictionary obtained from the change image in the $i$-th facial expression class are taken as input, and sparse representation is applied. In this process, $C$ sparse coefficient vectors corresponding to the total of $C$ facial expression classes are obtained.

To this end, the sparse representation applying unit 131 obtains a plurality of sparse coefficient vectors for expressing the facial features of the query face image using a dictionary composed of the facial features of the training face images. Here, the dictionary defines as many facial features as the number of facial expressions, using the image difference between the training face image and the change image in the facial expression class (S207).

The classification process using sparse representation classification will now be described using the facial features $y_i^q \in \mathbb{R}^N$ ($i = 1, \ldots, C$) of the $C$ facial expression classes obtained by Equation 3.

First, a dictionary of training facial expression features must be defined for sparse representation classification. Training facial features are extracted from each training face image in the training face image set $\Phi$ by applying Equations 1, 2, and 3, in exactly the same way facial features are extracted from the query face image. Specifically, when extracting facial features from one training face image $s$, a change image $h_i^s$ in each facial expression class is generated, and the facial feature $y_i^s$ is defined by the image difference between $s$ and $h_i^s$, so that as many features as the number of expressions (= C) are defined. Referring to FIG. 4, let $A_i \in \mathbb{R}^{N \times M}$ be the dictionary consisting of the $M$ training facial features $y_i^s$ obtained using the change image $h_i^s$ in the $i$-th facial expression class.
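A sketch of this dictionary construction, continuing the code above: every training image is processed exactly like a query, and its $i$-th feature becomes one column of $A_i$. The column ordering by training image index is an assumption for illustration.

```python
def build_dictionaries(Phi_all, Phi_by_class, lam=0.01):
    """Build the C dictionaries A_i of shape (N, M): column j of A_i is the
    i-th facial feature y_i^s of the j-th training face image s."""
    N, M = Phi_all.shape
    C = len(Phi_by_class)
    A = [np.zeros((N, M)) for _ in range(C)]
    for j in range(M):
        s = Phi_all[:, j]                     # treat training image like a query
        for i, y_is in enumerate(facial_features(s, Phi_by_class, lam)):
            A[i][:, j] = y_is
    return A
```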

To determine the facial expression class of the query face image $q$, a sparse coefficient vector $\hat{x}_i$ expressing the query facial feature $y_i^q$ over the dictionary $A_i$ must be obtained by solving the L1-norm minimization problem defined in Equation 4:

$$\hat{x}_i = \arg\min_{x_i} \|x_i\|_1 \ \ \text{subject to} \ \ \|y_i^q - A_i x_i\|_2 \le \varepsilon \quad \text{(Equation 4)}$$

In this equation, $\varepsilon$ is a noise term with a small amount of energy. By Equation 4, the $C$ sparse coefficient vectors $\hat{x}_1, \ldots, \hat{x}_C$ corresponding to the change images in the $C$ facial expression classes are obtained.
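Equation 4 is the standard basis-pursuit-denoising problem of sparse representation classification; the patent does not name a solver, so the sketch below uses plain ISTA (iterative shrinkage-thresholding) on the equivalent Lasso form $\min_x \|y - A x\|_2^2 + \alpha \|x\|_1$, with an illustrative `alpha` and iteration count.

```python
def sparse_code_ista(A, y, alpha=0.1, n_iter=500):
    """Approximate the L1 minimization of Equation 4 via ISTA on the Lasso
    form min_x ||y - A x||_2^2 + alpha * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L         # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / (2 * L), 0.0)  # soft threshold
    return x
```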

Next, the fusion unit 133 fuses the plurality of sparse coefficient vectors to obtain a fused sparse coefficient vector (S209).

To utilize the complementary information of the sparse coefficient vectors $\hat{x}_i$ obtained using the change images in the different facial expression classes, the $C$ sparse coefficient vectors are fused into the fused sparse coefficient vector $x^{com}$, as shown in Equation 5:

$$x^{com} = \sum_{i=1}^{C} \frac{\hat{x}_i}{\|\hat{x}_i\|_2} \quad \text{(Equation 5)}$$

The normalization term $\|\hat{x}_i\|_2$ in this expression makes each $\hat{x}_i$ contribute equally when generating $x^{com}$.
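The fusion of Equation 5 as reconstructed above is a one-liner, continuing the sketch; the small `eps` guards against an all-zero coefficient vector and is purely defensive.

```python
def fuse(x_list, eps=1e-12):
    """Equation 5: L2-normalize each sparse coefficient vector so that all C
    vectors contribute equally, then sum them into the fused vector."""
    return sum(x / (np.linalg.norm(x) + eps) for x in x_list)
```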

Next, the classification unit 135 finds the facial expression class in which the sparse coefficient is most concentrated in the fused sparse coefficient vector and determines the facial expression class of the query face image (S211).

As in Equation 6, the facial expression class of the query face image $q$ is determined by finding the expression class in which the sparse coefficients of the fused sparse coefficient vector $x^{com}$ are most concentrated:

$$\text{class}(q) = \arg\max_{i} \sum_{j} x^{com}_{i,j} \quad \text{(Equation 6)}$$

where $x^{com}_{i,j}$ is the sparse coefficient value in the fused sparse coefficient vector $x^{com}$ associated with the $j$-th training face image in the $i$-th facial expression class.
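Equation 6 then reduces to a per-class sum over the fused coefficients. The `labels` array mapping each training image to its expression class is an assumed bookkeeping detail the patent leaves implicit.

```python
def classify(x_com, labels, C):
    """Equation 6: sum the fused sparse coefficients per expression class and
    return the class where they are most concentrated.
    labels[j] is the expression class index of the j-th training image."""
    scores = [x_com[labels == i].sum() for i in range(C)]
    return int(np.argmax(scores))
```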

Finally, the classification unit 135 outputs the label of the determined facial expression class (S213).

As described above, by using the facial feature extraction and classification method that removes intra-class variation components unrelated to the facial expression appearing in the query face image, stable facial expression recognition performance is exhibited even when various changes exist in the query face image.
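Tying the sketches together, a hypothetical end-to-end run on random stand-in data (all shapes and values illustrative) would look like this:

```python
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, C, M_i = 1024, 3, 20                     # assumed: 32x32 images, 3 classes
    Phi_by_class = [rng.random((N, M_i)) for _ in range(C)]
    Phi_all = np.hstack(Phi_by_class)           # full training set, shape (N, C*M_i)
    labels = np.repeat(np.arange(C), M_i)       # class index of each training image
    q = rng.random(N)                           # stand-in for a vectorized query face

    A = build_dictionaries(Phi_all, Phi_by_class)
    feats = facial_features(q, Phi_by_class)
    x_list = [sparse_code_ista(A[i], feats[i]) for i in range(C)]
    x_com = fuse(x_list)
    print("predicted expression class:", classify(x_com, labels, C))
```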

Combinations of the blocks of the block diagrams and the steps of the flowcharts attached to the present invention may be performed by computer program instructions. These computer program instructions may be loaded onto a processor of a general-purpose computer, special-purpose computer, or other programmable data processing equipment, so that the instructions executed through the processor create means for performing the functions described in each block of the block diagrams or each step of the flowcharts. These computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing equipment to function in a particular manner, so that the instructions stored in the memory produce an article of manufacture containing instruction means that perform the functions described in each block or step. The computer program instructions may also be loaded onto a computer or other programmable data processing equipment so that a series of operational steps are performed on the computer or other programmable equipment to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable equipment provide steps for performing the functions described in each block of the block diagrams and each step of the flowcharts.

In addition, each block or step may represent a part of a module, segment, or code that includes one or more executable instructions for executing the specified logical function(s). It should also be noted that, in some alternative embodiments, the functions noted in the blocks or steps may occur out of order. For example, two blocks or steps shown in succession may in fact be executed substantially concurrently, or may sometimes be performed in reverse order, depending on the functionality involved.

The above description is merely illustrative of the technical idea of the present invention, and those skilled in the art to which the present invention pertains may make various modifications and changes without departing from its essential characteristics. Therefore, the embodiments disclosed herein are intended not to limit but to describe the technical idea of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be interpreted according to the following claims, and all technical ideas within their equivalent scope should be construed as being included in the scope of the present invention.

Claims (10)

An expression feature classification method performed in an expression feature classification system for facial expression recognition, comprising:
generating a change image in each expression class, corresponding to a variation component in that expression class unrelated to the expression appearing in the query face image, by using training face images organized by facial expression class with respect to the query face image;
calculating an image difference between the change image in the expression class and the query face image and defining it as a facial feature; and
classifying the facial feature into a specific facial expression class by applying sparse representation to determine the facial expression class corresponding to the query face image.
Facial expression feature classification method for facial expression recognition.
The method of claim 1,
The change image in the facial expression class is obtained through approximation of the query face image using a linear combination of the training face images.
Facial expression feature classification method for facial expression recognition.
The method of claim 2,
The change image in the facial expression class is obtained by applying a regularized least square method to obtain each weight vector representing the weights of the linear combination of each training face image, and by using each weight vector and each training face image.
Facial expression feature classification method for facial expression recognition.
The method of claim 1,
Determining the facial expression class comprises:
obtaining a plurality of sparse coefficient vectors for expressing the facial features of the query face image using a dictionary composed of facial features of the training face images;
fusing the plurality of sparse coefficient vectors to obtain a fused sparse coefficient vector; and
finding the expression class in which the sparse coefficients are most concentrated in the fused sparse coefficient vector and determining the expression class of the query face image.
Facial expression feature classification method for facial expression recognition.
The method of claim 4, wherein
the dictionary defines as many facial features as the number of facial expressions, using the image difference between the training face image and the change image in the facial expression class.
Facial expression feature classification method for facial expression recognition.
A facial expression feature classification system for facial expression recognition, comprising:
a change image generator that generates a change image in each expression class, corresponding to a variation component in that expression class unrelated to the expression appearing in the query face image, by using training face images organized by facial expression class with respect to the query face image;
an image difference calculator that calculates an image difference between the change image in the facial expression class and the query face image and defines it as a facial feature; and
an image classification unit that classifies the facial feature into a specific facial expression class by applying sparse representation to determine the facial expression class corresponding to the query face image.
Facial expression feature classification system for facial expression recognition.
The system of claim 6,
The change image in the facial expression class is obtained through approximation of the query face image using a linear combination of the training face images.
Facial expression feature classification system for facial expression recognition.
The system of claim 7, wherein
the change image in the facial expression class is obtained by applying a regularized least square method to obtain each weight vector representing the weights of the linear combination of each training face image, and by using each weight vector and each training face image.
Facial expression feature classification system for facial expression recognition.
The system of claim 6,
The image classification unit comprises:
a sparse representation applying unit that obtains a plurality of sparse coefficient vectors for expressing the facial features of the query face image by using a dictionary composed of facial features of the training face images;
a fusion unit that fuses the plurality of sparse coefficient vectors to obtain a fused sparse coefficient vector; and
a classification unit that finds the expression class in which the sparse coefficients are most concentrated in the fused sparse coefficient vector and determines the expression class of the query face image.
Facial expression feature classification system for facial expression recognition.
The system of claim 9,
The dictionary defines as many facial features as the number of facial expressions, using the image difference between the training face image and the change image in the facial expression class.
Facial expression feature classification system for facial expression recognition.

KR1020150046083A 2014-11-05 2015-04-01 Method and systems of face expression features classification robust to variety of face image appearance KR102043960B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20140152665 2014-11-05
KR1020140152665 2014-11-05

Publications (2)

Publication Number Publication Date
KR20160053749A KR20160053749A (en) 2016-05-13
KR102043960B1 (en) 2019-11-12

Family

Family ID: 56023544

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150046083A KR102043960B1 (en) 2014-11-05 2015-04-01 Method and systems of face expression features classification robust to variety of face image appearance

Country Status (1)

Country Link
KR (1) KR102043960B1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3571627A2 (en) 2017-01-19 2019-11-27 Mindmaze Holding S.A. Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location including for at least one of a virtual and augmented reality system
US10943100B2 (en) 2017-01-19 2021-03-09 Mindmaze Holding Sa Systems, methods, devices and apparatuses for detecting facial expression
US10515474B2 (en) 2017-01-19 2019-12-24 Mindmaze Holding Sa System, method and apparatus for detecting facial expression in a virtual reality system
EP3568804A2 (en) 2017-02-07 2019-11-20 Mindmaze Holding S.A. Systems, methods and apparatuses for stereo vision and tracking
CN106980815A (en) * 2017-02-07 2017-07-25 王俊 Facial paralysis objective evaluation method under being supervised based on H B rank scores
US11328533B1 (en) 2018-01-09 2022-05-10 Mindmaze Holding Sa System, method and apparatus for detecting facial expression for motion capture
CN109886149B (en) * 2019-01-29 2021-09-14 中国人民解放军空军预警学院 Double-dictionary and multi-feature fusion decision-making facial expression recognition method based on sparse representation
CN113361307A (en) * 2020-03-06 2021-09-07 上海卓繁信息技术股份有限公司 Facial expression classification method and device and storage equipment
CN113343885A (en) * 2021-06-23 2021-09-03 杭州天翼智慧城市科技有限公司 Feature point reconstruction method for complex human face posture
CN113554073B (en) * 2021-07-09 2024-03-15 常州大学 Emotion state feature selection optimization method integrating sparse learning and dichotomy

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100695136B1 (en) * 2005-01-04 2007-03-14 삼성전자주식회사 Face detection method and apparatus in image
US7257283B1 (en) 2006-06-30 2007-08-14 Intel Corporation Transmitter-receiver with integrated modulator array and hybrid bonded multi-wavelength laser array
KR100988323B1 (en) * 2008-07-24 2010-10-18 포항공과대학교 산학협력단 Method and apparatus of recognizing detailed facial expression using facial expression information amplification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Jun et al., "Sparse presentation based classification with position-weighted block dictionary," Image Processing: Algorithms and Systems XII, 90190X (Feb. 25, 2014)*
Rui Min et al., "Improved combination of LBP and sparse representation based classification (SRC) for face recognition," 2011 IEEE International Conference on Multimedia and Expo (2011)*

Also Published As

Publication number Publication date
KR20160053749A (en) 2016-05-13


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application
AMND Amendment
E902 Notification of reason for refusal
J201 Request for trial against refusal decision
J301 Trial decision

Free format text: TRIAL NUMBER: 2017101005395; TRIAL DECISION FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20171109

Effective date: 20190716

S901 Examination by remand of revocation
GRNO Decision to grant (after opposition)
GRNT Written decision to grant