CN104077579B - Facial expression recognition method based on expert system - Google Patents
Facial expression recognition method based on expert system
- Publication number
- CN104077579B CN104077579B CN201410333366.0A CN201410333366A CN104077579B CN 104077579 B CN104077579 B CN 104077579B CN 201410333366 A CN201410333366 A CN 201410333366A CN 104077579 B CN104077579 B CN 104077579B
- Authority
- CN
- China
- Prior art keywords
- image
- expression
- facial expression
- expert system
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The present invention relates to a facial expression recognition method based on an expert system. The method builds an expert system on the basis of expression-image processing methods and conventional computer program functions, and recognizes facial expressions by reasoning over preprocessed images. The method comprises the following steps: 1) capture images from video, obtain the user information in the video, then perform image processing and image feature extraction, carry out identity verification, obtain the user's expression-image characteristic parameters, determine the user's expression library, and build the facial expression recognition expert system; 2) perform image processing and image feature extraction on images captured from video, obtain the characteristic parameters at the moment the user's expression is most pronounced, compare them with the parameters of the expression training samples in the user's expression library determined in step 1), and output the expression recognition result through the reasoning of the expert system's inference engine. Compared with the prior art, the present invention has the advantage of fast recognition speed.
Description
Technical field
The present invention relates to an expert system application technology, and more particularly to a facial expression image recognition method based on an expert system.
Background technology
An expert system is a class of intelligent computer program with specialist knowledge and experience. By modeling the problem-solving ability of human experts, it uses the knowledge representation and knowledge reasoning techniques of artificial intelligence to simulate complex problems that would normally require an expert to solve, reaching a level of problem-solving ability comparable to that of an expert. This knowledge-based system design method is organized around a knowledge base and an inference engine, with the knowledge kept separate from the rest of the system. An expert system emphasizes knowledge rather than method: many problems have no algorithmic solution, or the algorithm would be too complicated, whereas an expert system can draw on the rich knowledge of human experts. For this reason an expert system is also called a knowledge-based system.
At present, with the development of science and technology, intelligent rehabilitation nursing beds are becoming increasingly popular. However, a large proportion of patients cannot make themselves accurately understood through limb movement or language. Different expressions represent different states for each person. When language and limb movement fail, the patient's intention can still be recognized through facial expressions and the corresponding operation completed. Nursing beds on the market are operated by nursing staff, which ignores the wishes of the patient. In line with a people-oriented design philosophy, the present invention uses an expert system for expression recognition so that patients with limited mobility can operate the nursing bed themselves.
The content of the invention
The purpose of the present invention is to overcome the defects of the above-mentioned prior art and to provide an expert-system-based facial expression recognition method that has a learning function and can recognize facial expressions quickly and accurately.
The purpose of the present invention can be achieved through the following technical solutions:
A facial expression recognition method based on an expert system, in which an expert system for expression recognition is built on the basis of expression-image processing methods and conventional computer program functions, and facial expressions are recognized by reasoning over preprocessed images. The method comprises the following steps:
1) Capture images from video, obtain the user information in the video, then perform image processing and image feature extraction, carry out identity verification, obtain the user's expression-image characteristic parameters, determine the user's expression library, and build the facial expression recognition expert system;
2) Perform image processing and image feature extraction on the images captured from video, obtain the characteristic parameters at the moment the user's expression is most pronounced, compare them with the parameters of the expression training samples in the user's expression library determined in step 1), and output the expression recognition result through the reasoning of the expert system's inference engine.
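The two phases above can be sketched as follows. This is an illustrative outline only, not the patented implementation: the function names and the simple stand-ins for preprocessing, feature extraction and matching are all assumptions.

```python
# Illustrative two-phase outline; all names and stand-ins are assumptions.

def preprocess(image):
    # stand-in for denoising / normalization / binarization on a flat pixel list
    return [p / 255.0 for p in image]

def extract_features(pixels):
    # stand-in for the eye/mouth geometric features: mean level and spread
    mean = sum(pixels) / len(pixels)
    return (mean, max(pixels) - min(pixels))

def build_expert_system(training_images):
    """Phase 1: build the rule base, one feature vector per expression label."""
    return {label: extract_features(preprocess(img))
            for label, img in training_images.items()}

def recognize(image, rule_base):
    """Phase 2: compare a new image's features against the rule base entries."""
    f = extract_features(preprocess(image))
    return min(rule_base,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(f, rule_base[lbl])))
```

In phase 1 each labeled training image contributes one entry to the rule base; in phase 2 the closest stored entry decides the output label, standing in for the inference engine's reasoning.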
Building the expression-image recognition expert system in step 1) specifically includes the steps:
11) acquire the facial expression image;
12) preprocess the facial expression image;
13) extract the facial expression image features;
14) establish the facial-expression expert system rule base and store the characteristic parameters extracted in step 13) into the rule base.
Obtaining the facial expression recognition result in step 2) specifically includes the steps:
21) acquire the user's expression image: after video information is received, capture images from it to obtain the user's expression image;
22) perform image preprocessing on the facial expression image obtained in step 21);
23) perform feature extraction on the eye and mouth images obtained in step 22);
24) expression recognition: input the characteristic parameters obtained in step 23) into the expression-image recognition expert system, compare them with the expression characteristic parameters stored in the expert system rule base, and output the reasoning result through the inference engine of the expert system.
The facial expression image preprocessing in steps 12) and 22) specifically includes image denoising, scale normalization, gray normalization, image segmentation and image binarization.
The denoised image g(i, j) obtained by the image denoising is:
g(i, j) = ∑ f(i, j)/N, (i, j) ∈ M
where f(i, j) is the given noisy image, M is the set of coordinates of the pixels in the chosen neighborhood, and N is the number of pixels contained in the neighborhood;
The target image g(x, y) obtained after the scale normalization is:
g(x, y) = f(x/a + x0, y/a + y0)
where f(x, y) is the image before normalization, (x0, y0) is the centroid of the target region, and a is the scale factor:
a = sqrt(T/m)
where m is the area of the target region and T is the fixed target area after normalization;
The gray normalization uses a piecewise linear gray transform;
The image segmentation uses a serial region decomposition technique, segmenting the image by directly detecting the mouth and eye target regions;
The image binarization uses the classical Otsu algorithm, dividing the gray image into two classes, a target part and a background part, by a gray threshold.
The facial expression image features in steps 13) and 23) include left-eye features, right-eye features and mouth features, and the facial expression image feature extraction algorithm specifically includes the following steps:
201) compute the autocorrelation matrix M for each pixel:
M = w(x, y) ⊗ [Ix² IxIy; IxIy Iy²] = [A C; D B]
where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x, y) is the Gaussian function, namely w(x, y) = exp(−(x² + y²)/(2σ²));
202) compute the Harris corner response of each pixel: R = (AB − CD) − k(A + B)²;
203) search for maxima within a w*w window; if the Harris corner response exceeds the threshold, the threshold being the maximum within the w*w window, the point is regarded as a corner, thereby extracting the eye and mouth features.
After step 24) ends, the system adds the data of this task to the rule base.
The image binarization specifically includes the steps:
101) compute the average gray u of the image:
u = ∑ i·n(i)/(M*N)
where M*N is the number of pixels of the image and n(i) is the number of pixels with gray level i;
102) determine the gray threshold t, the segmentation threshold between foreground and background, chosen as the value that maximizes the between-class variance G:
G = w1·(u1 − u)² + w2·(u2 − u)²
When G is maximal, the difference between foreground and background is greatest, and the corresponding gray level is the optimal threshold.
Here w1 is the proportion of target pixels in the image, w2 is the proportion of background pixels, u1 is the average gray of the target pixels, and u2 is the average gray of the background pixels:
w1 = W1/(M*N), u1 = ∑ i·n(i)/W1, i > t
where W1 is the count of pixels with gray value greater than t;
w2 = W2/(M*N), u2 = ∑ i·n(i)/W2, i < t
where W2 is the count of pixels with gray value less than t;
103) binarize the image by taking pixels with gray value greater than t as target pixels and pixels with gray value less than t as background pixels.
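Steps 101) to 103) amount to the classical Otsu threshold search, which can be sketched as follows. This is a minimal reference sketch, assuming 8-bit gray levels; the function name and the exhaustive search over t are illustrative choices, not the patent's C# code.

```python
def otsu_threshold(pixels):
    """Return the gray threshold t maximizing the between-class variance
    G = w1*(u1-u)^2 + w2*(u2-u)^2, with foreground = gray > t."""
    n = [0] * 256                      # n(i): pixel count at gray level i
    for p in pixels:
        n[p] += 1
    total = len(pixels)
    u = sum(i * n[i] for i in range(256)) / total   # global average gray
    best_t, best_g = 0, -1.0
    for t in range(256):
        W1 = sum(n[i] for i in range(t + 1, 256))   # count of gray > t
        W2 = total - W1                             # count of gray <= t
        if W1 == 0 or W2 == 0:
            continue                                # one class empty: skip
        u1 = sum(i * n[i] for i in range(t + 1, 256)) / W1
        u2 = sum(i * n[i] for i in range(0, t + 1)) / W2
        w1, w2 = W1 / total, W2 / total
        g = w1 * (u1 - u) ** 2 + w2 * (u2 - u) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

Pixels above the returned t become target pixels and the rest background, as in step 103).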
Compared with the prior art, the present invention has the following advantages:
1) The facial expression recognition expert system proposed by the present invention is a program system with a large amount of specialist knowledge and experience. It takes full account of the training samples and their class information, obtains good recognition results, and provides an effective approach to facial expression recognition.
2) The present invention proposes a fast facial expression recognition method for video environments, a new method that offers both higher speed and a higher recognition rate.
3) The expert system is built on the basis of expression-image processing methods and conventional computer program functions. It is not intended to replace classical methods and traditional programs that are already powerful and mature; rather, while making full use of their achievements, it concentrates on the parts of the problem that remain difficult to solve.
Brief description of the drawings
Fig. 1 is an architecture diagram of the expression-image recognition expert system provided by the present invention;
Fig. 2 is a diagram of the three-segment piecewise linear transform function used in gray normalization;
Fig. 3 is a schematic diagram of the scale sensitivity of the Harris operator.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and a concrete operating process, but the protection scope of the present invention is not limited to the following embodiment.
A facial expression recognition method based on an expert system comprises the following steps:
1) Capture images from video, obtain the user information in the video, then perform image processing and image feature extraction, carry out identity verification, obtain the user's expression-image characteristic parameters, determine the user's expression library, and build the facial expression recognition expert system;
2) Perform image processing and image feature extraction on the images captured from video, obtain the characteristic parameters at the moment the user's expression is most pronounced, compare them with the parameters of the expression training samples in the user's expression library determined in step 1), and output the expression recognition result through the reasoning of the expert system's inference engine.
Building the expression-image recognition expert system in step 1) specifically includes the steps:
11) acquire the facial expression image;
12) preprocess the facial expression image;
13) extract the facial expression image features;
14) establish the facial-expression expert system rule base and store the characteristic parameters extracted in step 13) into the rule base.
Obtaining the facial expression recognition result in step 2) specifically includes the steps:
21) acquire the user's expression image: after video information is received, capture images from it to obtain the user's expression image;
22) perform image preprocessing on the facial expression image obtained in step 21);
23) perform feature extraction on the eye and mouth images obtained in step 22);
24) expression recognition: input the characteristic parameters obtained in step 23) into the expression-image recognition expert system, compare them with the expression characteristic parameters stored in the expert system rule base, and output the reasoning result through the inference engine of the expert system.
Fig. 1 shows the working architecture of the present invention. The expert system of the present invention has a learning function: after step 24) ends, the data of the current task is added to the rule base, further improving the operating efficiency and recognition accuracy of the invention.
The facial expression image preprocessing in steps 12) and 22) specifically includes image denoising, scale normalization, gray normalization, image segmentation and image binarization.
The facial expression image features in steps 13) and 23) include left-eye features, right-eye features and mouth features.
The invention is further described below with reference to an example.
1) Build the expression-image expert system
After video information is received, images are captured from it and the user information of the video is obtained. Through image preprocessing, image feature extraction and identity verification, the user's expression library is determined and the facial expression recognition expert system is built, to be consulted at recognition time.
11) Acquire the facial expression image
A still image of a facial expression is obtained by a camera image capture tool.
12) Facial expression image preprocessing
A. Image denoising
The purpose of image denoising is to improve the given image and counteract the quality degradation caused by noise. Denoising effectively improves image quality, increases the signal-to-noise ratio and better preserves the information carried by the original image. Based on the C# programming language, the denoising algorithm of the present invention uses a traditional spatial-domain filtering method, which operates directly on the original image data and processes the gray values of the pixels. The spatial-domain denoising algorithm used in the present invention is neighborhood averaging. Expressed mathematically: if f(i, j) is the given noisy image, the image after neighborhood averaging is g(i, j) = ∑ f(i, j)/N, (i, j) ∈ M, where M is the set of coordinates of the pixels in the chosen neighborhood and N is the number of pixels contained in the neighborhood. Neighborhood averaging reduces noise at the cost of blurring the image, and the larger the template, the stronger the noise reduction. If f(i, j) is a noise point, its gray level differs greatly from that of its neighbors; neighborhood averaging replaces it with the mean of its neighborhood, which markedly weakens the noise point and smooths the gray levels within the neighborhood toward uniformity.
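Neighborhood averaging as just described can be sketched as a uniform k*k mean filter. This is a minimal sketch, not the patent's C# implementation; leaving border pixels untouched is one of several possible border policies and is an assumption here.

```python
def mean_filter(img, k=3):
    """Neighborhood averaging: g(i,j) = sum of the k*k neighborhood of f / N.
    Border pixels (without a full neighborhood) are copied unchanged."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]           # start from a copy of f
    for i in range(r, h - r):
        for j in range(r, w - r):
            s = sum(img[i + di][j + dj]
                    for di in range(-r, r + 1)
                    for dj in range(-r, r + 1))
            out[i][j] = s / (k * k)         # divide by N = k*k neighbors
    return out
```

A single bright noise pixel surrounded by zeros is pulled down to the neighborhood mean, which is exactly the smoothing effect described above.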
B. Scale normalization
Image translation and scale normalization eliminates the influence of translation and proportional scaling on the image through a transform. Based on the C# programming language, the scale normalization of the present invention uses standard moments. First the origin of coordinates is moved to the centroid of the image: the centroid (x0, y0) of the target is obtained from the standard moments. Because the centroid of the target is invariant to translation, scale and rotation, placing the image origin at the target centroid solves the translation problem. A scale factor a is then defined to solve the scale problem: a = sqrt(T/m). On the binary image the target has pixel value 1 and the background pixel value 0, so m is the area of the target; making the target a fixed area T, the scale-normalized target is obtained by the transform g(x, y) = f(x/a + x0, y/a + y0). The face region is thereby scaled to a standard size.
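The centroid-plus-scale-factor transform can be sketched as below. Note one assumption: the patent's formula indexes output coordinates from the centroid, whereas this sketch re-centers the output on the middle of the array so that the scaled target stays inside the image; everything else follows the formulas above.

```python
import math

def scale_normalize(img, T):
    """Scale the binary target (pixel value 1) to a fixed area T about its
    centroid, via g(x,y) = f(x/a + x0, y/a + y0) with a = sqrt(T/m).
    Output coordinates are re-centered on the image middle (an assumption)."""
    h, w = len(img), len(img[0])
    pts = [(x, y) for x in range(h) for y in range(w) if img[x][y] == 1]
    m = len(pts)                                   # current target area m
    x0 = sum(p[0] for p in pts) / m                # centroid from first-order moments
    y0 = sum(p[1] for p in pts) / m
    a = math.sqrt(T / m)                           # scale factor a = sqrt(T/m)
    out = [[0] * w for _ in range(h)]
    for x in range(h):
        for y in range(w):
            sx = int((x - h / 2) / a + x0)         # sample the source pixel
            sy = int((y - w / 2) / a + y0)
            if 0 <= sx < h and 0 <= sy < w:
                out[x][y] = img[sx][sy]
    return out
```

With T larger than the current area m, the target is enlarged, matching the remark that the face region is amplified.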
C. Gray normalization
Gray normalization improves the quality of the image. Based on the C# programming language, the present invention uses the most basic piecewise linear gray transform; a piecewise linear transform, also called gray linear stretching, is a three-segment non-uniform linear transform. As shown in Fig. 2, the gray interval [a, b] is stretched, while the intervals [0, a] and [b, c] are compressed. By carefully adjusting the positions of the knee points of the broken line and controlling the slopes of the segments, any gray interval can be stretched or compressed, realizing gray normalization. The color facial expression image is also converted to a gray image.
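The three-segment transform of Fig. 2 can be written directly from the knee points. In the sketch below, (a, ya) and (b, yb) are the two knee points of the broken line and c is the maximum gray level; the parameter names are illustrative.

```python
def piecewise_stretch(g, a, b, c, ya, yb):
    """Three-segment linear gray transform:
    [0,a] -> [0,ya] (compressed), [a,b] -> [ya,yb] (stretched),
    [b,c] -> [yb,c] (compressed)."""
    if g < a:
        return ya / a * g                        # first segment
    if g <= b:
        return ya + (yb - ya) / (b - a) * (g - a)  # stretched middle segment
    return yb + (c - yb) / (c - b) * (g - b)     # last segment
```

Choosing yb - ya larger than b - a stretches the middle interval, which is the normalization effect described above.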
D. Image segmentation
The parts of the face that best reflect expression changes are the mouth and eyes, so in the present invention the positions extracted after image segmentation are the eyes and the mouth. Based on the C# programming language, we use serial region segmentation, a technique that segments the image by directly detecting the target regions in a serial fashion. Its characteristic is that the whole process is decomposed into a sequence of steps carried out in order, each subsequent step depending on the result of the previous one; starting from the full image, the required segmentation regions are obtained step by step.
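One very simple form of such a serial decomposition, going from the full image to a face box and then to eye and mouth sub-regions, is sketched below. The proportional ratios are purely illustrative assumptions; the patent does not disclose them.

```python
def crop_regions(face_box):
    """Illustrative serial decomposition: split a detected face box (x, y, w, h)
    into eye and mouth sub-regions using rough facial proportions.
    The ratios (20%, 25%, 65% ...) are assumptions, not values from the patent."""
    x, y, w, h = face_box
    eyes = (x, y + h * 20 // 100, w, h * 25 // 100)                       # upper band
    mouth = (x + w * 25 // 100, y + h * 65 // 100, w * 50 // 100, h * 25 // 100)
    return eyes, mouth
```

Each step only needs the output of the previous one (full image, then face box, then sub-regions), which is the defining property of the serial approach.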
E. Image binarization
Based on the C# programming language, the binarization method of the present invention uses the classical Otsu algorithm. The basic idea of the algorithm is: if a threshold divides the gray image by gray level into a target part and a background part, then the threshold at which the within-class variance is smallest and the between-class variance is largest is the optimal binarization threshold. For an image of N*M pixels, first compute the average gray u of the image: count the number of pixels n(i) at each gray level i, so that u = ∑ i·n(i)/(M*N). Then list the variables needed to solve for the optimal threshold t, where t is the segmentation threshold between target and background: the proportion of target pixels (gray level greater than t) is w1 = W1/(M*N), where W1 is the count of pixels with gray value greater than t; the average gray of the target pixels is u1 = ∑ i·n(i)/W1, i > t; similarly, the proportion w2 of background pixels and their average gray u2 are obtained. Finally, the optimal threshold t is the one that maximizes the between-class variance G = w1·(u1 − u)² + w2·(u2 − u)²; when G is largest, the optimal threshold is obtained.
13) Facial expression image feature extraction
The eye and mouth images obtained by the segmentation of step E are subjected to feature extraction using geometric features: the positions of the eyes and mouth are located and measured, and their size, distance, shape and mutual ratios are determined. The expression recognition algorithm of the present invention requires the extraction of geometric features, for which we use the Harris corner detection algorithm, based on the C# programming language. These detection algorithms require threshold settings, and the number of detected corner points differs from image to image; the corners of the eyes are extracted and the same N points (the left and right eye corners and the upper and lower eyelids, four points in total) are fixed manually:
In the first step, the autocorrelation matrix M is computed for each pixel. The Harris operator replaces the binary window function with a Gaussian, giving larger weights to pixels closer to the center in order to reduce the influence of noise:
M = w(x, y) ⊗ [Ix² IxIy; IxIy Iy²] = [A C; D B]
where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x, y) is the Gaussian function, namely w(x, y) = exp(−(x² + y²)/(2σ²)).
In the second step, the Harris corner response of each pixel is computed:
R = (AB − CD) − k(A + B)²
In the third step, maxima are searched within a w*w window; as shown in Fig. 3, if the Harris corner response exceeds the threshold, taken as the maximum within the w*w window, the point is regarded as a corner. In this way the features of the eyes and mouth are extracted.
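The first two steps can be sketched as below. This is a minimal sketch with two simplifying assumptions: a uniform 3x3 window stands in for the Gaussian w(x, y), and k = 0.04 is the conventional Harris constant (the patent does not give a value). With A, B, C the window sums of Ix², Iy² and Ix·Iy, the response is R = (AB − C²) − k(A + B)², the symmetric case (C = D) of the formula above.

```python
def harris_response(img, k=0.04):
    """Harris response R = (A*B - C*C) - k*(A+B)^2 per interior pixel,
    where A, B, C are 3x3 window sums of Ix^2, Iy^2, Ix*Iy
    (uniform window standing in for the Gaussian w(x,y))."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            Ix[i][j] = (img[i][j + 1] - img[i][j - 1]) / 2.0  # x difference
            Iy[i][j] = (img[i + 1][j] - img[i - 1][j]) / 2.0  # y difference
    R = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            A = B = C = 0.0
            for di in (-1, 0, 1):                 # 3x3 window sums
                for dj in (-1, 0, 1):
                    gx, gy = Ix[i + di][j + dj], Iy[i + di][j + dj]
                    A += gx * gx
                    B += gy * gy
                    C += gx * gy
            R[i][j] = (A * B - C * C) - k * (A + B) ** 2
    return R
```

On a synthetic image with one bright quadrant, the response is large and positive at the quadrant corner, negative along its edges, and zero in flat regions, which is what the third step's maximum search exploits.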
Human beings have six principal emotions, each reflecting a unique psychological state with a unique expression. These six, called the basic emotions, are anger, happiness (smile), sadness, surprise, disgust and fear. Facial expression images representing these six expressions are collected and, after the processing of steps 12) and 13), the characteristic parameters representing the different expressions are stored in the rule base. The characteristic parameters of each expression are labeled with a corresponding tag; for example, the parameters representing happiness are labeled smile.
Through the above steps, the facial expression recognition expert system is established.
(2) Expression recognition
21) Acquire the user's expression image:
After video information is received, images are captured from it to obtain the user's expression image.
22) Image preprocessing:
The facial expression image obtained in step 21) is first preprocessed as in step 12): the face is detected and located, and the image is then segmented, finally yielding the eye and mouth images.
23) Image feature extraction:
Feature extraction is performed on the eye and mouth images obtained in step 22), using the geometric method of step 13) to extract the characteristic parameters of the eyes and mouth.
24) Expression recognition:
The characteristic parameters obtained in the previous step are input into the expression-image recognition expert system and compared with the expression characteristic parameters stored in the expert system rule base; through the reasoning of the inference engine, the expert system outputs the reasoning result. For example, if the input is a happy facial expression image, the output of the expert system is smile.
As shown in Fig. 1, the expert system of the present invention has a learning function: after recognition ends, the data of the current task is also added to the rule base, and an expert can additionally update the rule base and the program of the expert system manually.
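The combination of inference and the learning function can be sketched as follows. The patent does not disclose the rule syntax of the inference engine, so the nearest-stored-template matching below is an assumption standing in for it; the feature vectors are placeholders.

```python
def infer(features, rule_base):
    """Stand-in for the inference engine: the nearest stored sample decides the
    label, then the sample is folded back into the rule base (learning step)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label = min(rule_base,
                key=lambda lbl: min(d2(features, s) for s in rule_base[lbl]))
    rule_base[label].append(features)   # learning: this task's data joins the rules
    return label
```

Each call both answers the query and grows the rule base, mirroring the behavior in which recognized task data is added back after step 24).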
Claims (4)
1. A facial expression recognition method based on an expert system, characterized in that an expert system for expression recognition is built on the basis of expression-image processing methods and conventional computer program functions, and facial expressions are recognized by reasoning over preprocessed images, the method comprising the following steps:
1) capturing images from video, obtaining the user information in the video, then performing image processing and image feature extraction, carrying out identity verification, obtaining the user's expression-image characteristic parameters, determining the user's expression library, and building the facial expression recognition expert system;
2) performing image processing and image feature extraction on the images captured from video, obtaining the characteristic parameters at the moment the user's expression is most pronounced, comparing them with the parameters of the expression training samples in the user's expression library determined in step 1), and outputting the expression recognition result through the reasoning of the expert system's inference engine;
wherein building the expression-image recognition expert system in step 1) specifically includes the steps:
11) acquiring the facial expression image,
12) preprocessing the facial expression image,
13) extracting the facial expression image features,
14) establishing the facial-expression expert system rule base and storing the characteristic parameters extracted in step 13) into the rule base;
wherein obtaining the expression recognition result in step 2) specifically includes the steps:
21) acquiring the user's expression image: after video information is received, capturing images from it to obtain the user's expression image,
22) performing image preprocessing on the facial expression image obtained in step 21),
23) performing feature extraction on the eye and mouth images obtained in step 22),
24) expression recognition: inputting the characteristic parameters obtained in step 23) into the expression-image recognition expert system, comparing them with the expression characteristic parameters stored in the expert system rule base, and outputting the reasoning result through the inference engine of the expert system;
wherein the facial expression image preprocessing in steps 12) and 22) specifically includes image denoising, scale normalization, gray normalization, image segmentation and image binarization;
the denoised image g(i, j) obtained by the image denoising is:
g(i, j) = ∑ f(i, j)/N, (i, j) ∈ M
where f(i, j) is the given noisy image, M is the set of coordinates of the pixels in the chosen neighborhood, and N is the number of pixels contained in the neighborhood;
the target image g(x, y) obtained after the scale normalization is:
g(x, y) = f(x/a + x0, y/a + y0)
where f(x, y) is the image before normalization, (x0, y0) is the centroid of the target region, and a is the scale factor:
a = sqrt(T/m)
where m is the area of the target region and T is the fixed target area after normalization;
the gray normalization uses a piecewise linear gray transform;
the image segmentation uses a serial region decomposition technique, segmenting the image by directly detecting the mouth and eye target regions;
the image binarization uses the classical Otsu algorithm, dividing the gray image into two classes, a target part and a background part, by a gray threshold.
2. The facial expression recognition method based on an expert system according to claim 1, characterized in that the facial expression image features in steps 13) and 23) include left-eye features, right-eye features and mouth features, and the facial expression image feature extraction algorithm specifically includes the following steps:
201) computing the autocorrelation matrix M for each pixel:
M = w(x, y) ⊗ [Ix² IxIy; IxIy Iy²] = [A C; D B]
where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x, y) is the Gaussian function, namely w(x, y) = exp(−(x² + y²)/(2σ²));
202) computing the Harris corner response of each pixel: R = (AB − CD) − k(A + B)²;
203) searching for maxima within a w*w window; if the Harris corner response exceeds the threshold, the threshold being the maximum within the w*w window, the point is regarded as a corner, thereby extracting the eye and mouth features.
3. The facial expression recognition method based on an expert system according to claim 1, characterized in that after step 24) ends, the system adds the data of this task to the rule base.
4. The facial expression recognition method based on an expert system according to claim 1, characterized in that the image binarization process specifically includes the steps:
101) computing the average gray u of the image:
u = ∑ i·n(i)/(M*N)
where M*N is the number of pixels of the image and n(i) is the number of pixels with gray level i;
102) determining the gray threshold t, the gray threshold t being the value that maximizes the between-class variance G:
G = w1·(u1 − u)² + w2·(u2 − u)²
where w1 is the proportion of target pixels in the image, w2 is the proportion of background pixels, u1 is the average gray of the target pixels, and u2 is the average gray of the background pixels:
w1 = W1/(M*N), u1 = ∑ i·n(i)/W1, i > t
where W1 is the count of pixels with gray value greater than t;
w2 = W2/(M*N), u2 = ∑ i·n(i)/W2, i < t
where W2 is the count of pixels with gray value less than t;
103) binarizing the image by taking pixels with gray value greater than t as target pixels and pixels with gray value less than t as background pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410333366.0A CN104077579B (en) | 2014-07-14 | 2014-07-14 | Facial expression recognition method based on expert system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410333366.0A CN104077579B (en) | 2014-07-14 | 2014-07-14 | Facial expression recognition method based on expert system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104077579A CN104077579A (en) | 2014-10-01 |
CN104077579B true CN104077579B (en) | 2017-07-04 |
Family
ID=51598827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410333366.0A Active CN104077579B (en) | 2014-07-14 | 2014-07-14 | Facial expression recognition method based on expert system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104077579B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318221A (en) * | 2014-11-05 | 2015-01-28 | 中南大学 | Facial expression recognition method based on ELM |
CN104634105B (en) * | 2014-12-25 | 2016-09-28 | 贵州永兴科技有限公司 | A kind of flip-shell universal electric furnace switch with counting and face identification functions |
CN104596250B (en) * | 2014-12-25 | 2016-08-17 | 贵州永兴科技有限公司 | A kind of information-based universal electric furnace with counting and face identification functions |
CN104596266B (en) * | 2014-12-25 | 2016-08-24 | 贵州永兴科技有限公司 | A kind of information-based universal electric furnace with counting and face identification functions |
JP2016161830A (en) * | 2015-03-03 | 2016-09-05 | カシオ計算機株式会社 | Content output device, content output method, and program |
CN104794444A (en) * | 2015-04-16 | 2015-07-22 | 美国掌赢信息科技有限公司 | Facial expression recognition method in instant video and electronic equipment |
CN104899255B (en) * | 2015-05-15 | 2018-06-26 | 浙江大学 | Suitable for the construction method of the image data base of training depth convolutional neural networks |
CN104951778A (en) * | 2015-07-24 | 2015-09-30 | 上海华旌科技有限公司 | Face recognition expert system based on semantic network |
CN106778679B (en) * | 2017-01-05 | 2020-10-30 | 唐常芳 | Specific crowd video identification method based on big data machine learning |
TWI731920B (en) * | 2017-01-19 | 2021-07-01 | 香港商斑馬智行網絡(香港)有限公司 | Image feature extraction method, device, terminal equipment and system |
CN106919923A (en) * | 2017-03-07 | 2017-07-04 | 佛山市融信通企业咨询服务有限公司 | A kind of mood analysis method based on the identification of people face |
CN106919924A (en) * | 2017-03-07 | 2017-07-04 | 佛山市融信通企业咨询服务有限公司 | A kind of mood analysis system based on the identification of people face |
US11042729B2 (en) * | 2017-05-01 | 2021-06-22 | Google Llc | Classifying facial expressions using eye-tracking cameras |
CN107945848A (en) | 2017-11-16 | 2018-04-20 | 百度在线网络技术(北京)有限公司 | A kind of exercise guide implementation method, device, equipment and medium |
CN109034079B (en) * | 2018-08-01 | 2022-03-11 | 中国科学院合肥物质科学研究院 | Facial expression recognition method for non-standard posture of human face |
CN109159129A (en) * | 2018-08-03 | 2019-01-08 | 深圳市益鑫智能科技有限公司 | A kind of intelligence company robot based on facial expression recognition |
CN112968999B (en) * | 2021-02-25 | 2021-11-12 | 上海吉盛网络技术有限公司 | Digital-analog mixed elevator multi-party call device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102880855A (en) * | 2011-08-16 | 2013-01-16 | 武汉大学 | Cloud-model-based facial expression recognition method |
CN103268150A (en) * | 2013-05-13 | 2013-08-28 | 苏州福丰科技有限公司 | Intelligent robot management and control system and intelligent robot management and control method on basis of facial expression recognition |
CN103514441A (en) * | 2013-09-21 | 2014-01-15 | 南京信息工程大学 | Facial feature point locating tracking method based on mobile platform |
CN103824059A (en) * | 2014-02-28 | 2014-05-28 | 东南大学 | Facial expression recognition method based on video image sequence |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1723467A (en) * | 2002-12-13 | 2006-01-18 | 皇家飞利浦电子股份有限公司 | Expression invariant face recognition |
Also Published As
Publication number | Publication date |
---|---|
CN104077579A (en) | 2014-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104077579B (en) | Facial expression recognition method based on expert system | |
CN107491726B (en) | Real-time expression recognition method based on multichannel parallel convolutional neural network | |
CN105976809B (en) | Identification method and system based on speech and facial expression bimodal emotion fusion | |
CN107358180B (en) | Pain assessment method for facial expression | |
Dabre et al. | Machine learning model for sign language interpretation using webcam images | |
CN106529504B (en) | A kind of bimodal video feeling recognition methods of compound space-time characteristic | |
CN107358949A (en) | Robot sounding automatic adjustment system | |
CN111666845B (en) | Small sample deep learning multi-mode sign language recognition method based on key frame sampling | |
Zhao et al. | Applying contrast-limited adaptive histogram equalization and integral projection for facial feature enhancement and detection | |
CN110909680A (en) | Facial expression recognition method and device, electronic equipment and storage medium | |
Thongtawee et al. | A novel feature extraction for American sign language recognition using webcam | |
CN104809450B (en) | Wrist vena identification system based on online extreme learning machine | |
Vishwakarma et al. | Simple and intelligent system to recognize the expression of speech-disabled person | |
CN109325472B (en) | Face living body detection method based on depth information | |
Das et al. | Sign language recognition using facial expression | |
CN113343860A (en) | Bimodal fusion emotion recognition method based on video image and voice | |
Bhavanam et al. | On the classification of kathakali hand gestures using support vector machines and convolutional neural networks | |
CN111079465A (en) | Emotional state comprehensive judgment method based on three-dimensional imaging analysis | |
Spivak et al. | Approach to Recognizing of Visualized Human Emotions for Marketing Decision Making Systems. | |
CN117198468A (en) | Intervention scheme intelligent management system based on behavior recognition and data analysis | |
Wati et al. | Real time face expression classification using convolutional neural network algorithm | |
Shah et al. | Facial expression recognition for color images using Gabor, log Gabor filters and PCA | |
Kumar et al. | Emotion recognition using anatomical information in facial expressions | |
CN113642446A (en) | Detection method and device based on face dynamic emotion recognition | |
Talea et al. | Automatic combined lip segmentation in color images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||