CN104077579A - Facial expression image recognition method based on expert system - Google Patents
Facial expression image recognition method based on expert system
- Publication number
- CN104077579A CN104077579A CN201410333366.0A CN201410333366A CN104077579A CN 104077579 A CN104077579 A CN 104077579A CN 201410333366 A CN201410333366 A CN 201410333366A CN 104077579 A CN104077579 A CN 104077579A
- Authority
- CN
- China
- Prior art keywords
- image
- facial expression
- expert system
- gray
- expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a facial expression image recognition method based on an expert system. In the method, an expert system built on facial expression image processing methods and conventional computer program functions performs inference and recognition of the facial expression in a preprocessed image. The method comprises the following steps: (1) capturing an image from a video, acquiring the user information in the video, performing identity verification through image processing and image feature extraction, acquiring the characteristic parameters of the user's expression images, determining the user expression library, and establishing the expert system for facial expression recognition; (2) performing image processing and image feature extraction on an image captured from the video, acquiring the characteristic parameters produced when the user's expression is at its strongest, comparing these parameters with the training-sample parameters of the user expression library determined in step (1), and finally outputting the statistical result of facial expression recognition through the inference engine of the expert system. Compared with the prior art, the method has advantages such as high recognition speed.
Description
Technical field
The present invention relates to expert system application technology, and in particular to a facial expression recognition method based on an expert system.
Background technology
An expert system is a class of intelligent computer program system with specialized knowledge and experience. By modeling the problem-solving ability of human experts and applying the knowledge representation and knowledge reasoning techniques of artificial intelligence, it simulates the solution of difficult problems that would normally require an expert, reaching a level of problem-solving ability comparable to that of an expert. This knowledge-based system design method is centered on the knowledge base and the inference engine, and separates knowledge from the other parts of the system. An expert system emphasizes knowledge rather than method. Many problems have no algorithmic solution, or the algorithm would be too complex; an expert system can exploit the rich knowledge of human experts, which is why expert systems are also called knowledge-based systems.
At present, with the development of science and technology, intelligent rehabilitation nursing beds are becoming increasingly common. However, many patients cannot make themselves accurately understood through limbs or language. For each person, different expressions represent different states. When language and limbs cannot convey the message, the patient's intention can be recognized from the expression and the corresponding operation completed. Nursing beds on the market are operated by caregivers, which ignores the patient's own wishes. In line with a people-oriented design concept, an expert system based on expression recognition technology allows patients with limited mobility to operate the nursing bed themselves.
Summary of the invention
The object of the present invention is to overcome the defects of the prior art described above by providing a facial expression recognition method based on an expert system that has a learning function and can recognize facial expressions quickly and accurately.
The object of the present invention can be achieved through the following technical solution:
A facial expression recognition method based on an expert system, in which an expert system for facial expression recognition, built on facial expression image processing methods and conventional computer program functions, performs inference on the preprocessed image to recognize the facial expression. The method comprises the following steps:
1) capture an image from a video, obtain the user information in the video, then perform identity verification through image processing and image feature extraction, obtain the characteristic parameters of the user's facial expression images, determine the user expression library, and establish the facial expression recognition expert system;
2) perform image processing and image feature extraction on the image captured from the video, obtain the characteristic parameters when the user's expression degree is maximal, compare these characteristic parameters with the training-sample parameters of the user expression library determined in step 1), and finally output the statistical result of expression recognition through the inference engine of the expert system.
In step 1), establishing the facial expression recognition expert system specifically comprises the steps:
11) obtain a facial expression image;
12) preprocess the facial expression image;
13) extract the facial expression image features;
14) establish the rule base of the facial expression expert system and store the characteristic parameters extracted in step 13) in the rule base.
In step 2), obtaining the facial expression recognition result specifically comprises the steps:
21) obtain the user's facial expression image: after the video information is received, capture an image from it to obtain the user's facial expression image;
22) preprocess the facial expression image obtained in step 21);
23) perform feature extraction on the eye and mouth images obtained in step 22);
24) expression recognition: input the characteristic parameters obtained in step 23) into the facial expression recognition expert system, compare them with the facial expression characteristic parameters stored in the expert system rule base, perform reasoning with the inference engine of the expert system, and output the reasoning result.
In steps 12) and 22), the facial expression image preprocessing specifically comprises image denoising, scale normalization, gray-scale normalization, image segmentation and image binarization;
The denoised image g(i,j) obtained after image denoising is:
g(i,j) = ∑f(i,j)/N, (i,j) ∈ M
where f(i,j) is the given noisy image, M is the set of coordinates of the pixels in the chosen neighborhood, and N is the number of pixels contained in the neighborhood;
The target image g(x,y) obtained after scale normalization is:
g(x,y) = f(x/a + x0, y/a + y0)
where f(x,y) is the image before normalization, (x0, y0) is the center of gravity of the target area of the image, and a is the scale factor:
a = sqrt(T/m)
where m is the area of the target image and T is the area of the image before normalization;
The gray-scale normalization is a piecewise linear gray transformation;
The image segmentation uses serial region segmentation, which segments the image by directly detecting the face and eye target regions;
The image binarization uses the classical Otsu algorithm, which divides the gray-scale image into a target part and a background part using a gray threshold.
The facial expression image features in steps 13) and 23) comprise left-eye features, right-eye features and mouth features, and the facial expression image feature extraction algorithm specifically comprises the following steps:
201) compute the autocorrelation matrix M for each pixel:
M = ∑ w(x,y) · [Ix²  Ix·Iy; Ix·Iy  Iy²]
where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x,y) is a Gaussian function:
w(x,y) = exp(-(x² + y²)/(2σ²))
202) compute the Harris corner response of each pixel: R = (AB - CD)² - k(A + B)²;
203) find the maximum points within a w*w window; if the Harris corner response is greater than the threshold, the threshold being the maximum value within the w*w window, the point is regarded as a corner, and in this way the eye and mouth features are extracted.
After step 24) is finished, the system adds the data of this task to the rule base.
The image binarization process specifically comprises the steps:
101) compute the average gray level u of the image:
u = ∑i*n(i)/(M*N)
where M*N is the number of pixels in the image and n(i) is the number of pixels with gray level i;
102) determine the gray threshold t, the segmentation threshold between foreground and background (the gray threshold t is the value that maximizes the between-class variance G), where the between-class variance G is:
G = w1*(u1 - u)*(u1 - u) + w2*(u2 - u)*(u2 - u)
When G is maximal, the difference between foreground and background is greatest, and the corresponding gray level is the optimal threshold.
Here w1 is the proportion of target pixels in the image, w2 is the proportion of background pixels in the image, u1 is the average gray level of the target pixels, and u2 is the average gray level of the background pixels. The proportion w1 and average gray level u1 of the target pixels are:
w1 = W1/(M*N), u1 = ∑i*n(i)/W1, i > t
where W1 is the number of pixels with gray value greater than t.
The proportion w2 and average gray level u2 of the background pixels are:
w2 = W2/(M*N), u2 = ∑i*n(i)/W2, i < t
where W2 is the number of pixels with gray value less than t;
103) binarize the image by taking pixels with gray value greater than t as target pixels and pixels with gray value less than t as background pixels.
Compared with the prior art, the present invention has the following advantages:
1) The facial expression recognition expert system established by the present invention is a program system with a large amount of specialized knowledge and experience; it fully takes into account the training samples and their class information, obtains good recognition results, and provides an effective approach for face recognition.
2) The present invention proposes a fast facial expression recognition method: a new method for recognizing facial expressions in a video environment that has both a high recognition speed and a high recognition rate.
3) The expert system is built on facial expression image processing methods and conventional computer program functions. The facial expression image processing expert system does not replace powerful, mature classical methods and conventional programs; rather, it makes full use of their results and focuses on the problems that are still difficult to solve at present.
Brief description of the drawings
Fig. 1 is an architecture diagram of the expert system for recognizing facial expression images provided by the invention;
Fig. 2 is a diagram of the three-segment piecewise linear transformation function used in gray-scale normalization;
Fig. 3 is a schematic diagram of the sensitivity of the Harris operator to scale.
Embodiment
The present invention is described in detail below with reference to the drawings and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and a specific operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
A facial expression recognition method based on an expert system comprises the following steps:
1) capture an image from a video, obtain the user information in the video, then perform identity verification through image processing and image feature extraction, obtain the characteristic parameters of the user's facial expression images, determine the user expression library, and establish the facial expression recognition expert system;
2) perform image processing and image feature extraction on the image captured from the video, obtain the characteristic parameters when the user's expression degree is maximal, compare these characteristic parameters with the training-sample parameters of the user expression library determined in step 1), and finally output the statistical result of expression recognition through the inference engine of the expert system.
In step 1), establishing the facial expression recognition expert system specifically comprises the steps:
11) obtain a facial expression image;
12) preprocess the facial expression image;
13) extract the facial expression image features;
14) establish the rule base of the facial expression expert system and store the characteristic parameters extracted in step 13) in the rule base.
In step 2), obtaining the facial expression recognition result specifically comprises the steps:
21) obtain the user's facial expression image: after the video information is received, capture an image from it to obtain the user's facial expression image;
22) preprocess the facial expression image obtained in step 21);
23) perform feature extraction on the eye and mouth images obtained in step 22);
24) expression recognition: input the characteristic parameters obtained in step 23) into the facial expression recognition expert system, compare them with the facial expression characteristic parameters stored in the expert system rule base, perform reasoning with the inference engine of the expert system, and output the reasoning result.
Fig. 1 shows the working architecture of the present invention. The expert system of the present invention has a learning function: after step 24) finishes, the data of that recognition is added to the rule base, further increasing the efficiency and recognition accuracy of the invention.
The facial expression image preprocessing in steps 12) and 22) specifically comprises image denoising, scale normalization, gray-scale normalization, image segmentation and image binarization.
The facial expression image features in steps 13) and 23) comprise left-eye features, right-eye features and mouth features.
The invention is further described below with reference to an example.
1) Establish the facial expression image expert system
After the video information is received, an image is captured from it and the user information of the video is obtained. Identity verification is performed through image preprocessing and image feature extraction, the user's expression library is determined, and the facial expression recognition expert system is established for use during expression recognition.
11) Facial expression image acquisition
A still image of a facial expression is obtained with a camera image capture tool.
12) Facial expression image preprocessing
A. Image denoising
The purpose of image denoising is to improve the given image and correct the degradation in image quality caused by noise. Denoising can effectively improve image quality, increase the signal-to-noise ratio, and better preserve the information carried by the original image. The image denoising algorithm in the present invention, implemented in the C# programming language, is a traditional spatial-domain filtering method. Spatial filtering operates directly on the data of the original image and processes the gray values of the pixels. The spatial-domain denoising algorithm adopted in the present invention is neighborhood averaging. Expressed mathematically: let f(i,j) be the given noisy image; the image after neighborhood averaging is g(i,j) = ∑f(i,j)/N, (i,j) ∈ M, where M is the set of coordinates of the pixels in the chosen neighborhood and N is the number of pixels contained in the neighborhood. Neighborhood averaging reduces noise at the cost of blurring the image: the larger the template, the more pronounced the noise reduction. If f(i,j) is a noise point, its gray level differs greatly from that of its neighboring pixels; neighborhood averaging replaces it with the mean of the neighborhood pixels, which clearly weakens the noise point, makes the gray levels in the neighborhood more uniform, and produces a smoothing effect.
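A minimal C# sketch of this neighborhood averaging, assuming the gray image is held as a two-dimensional array of gray values; the window radius and the border handling are illustrative choices not fixed by the text.

```csharp
using System;

static class Denoise
{
    // Neighborhood averaging: replace each pixel with the mean of the pixels in a
    // (2*radius+1) x (2*radius+1) window around it, i.e. g(i,j) = sum f(i,j) / N over M.
    public static double[,] NeighborhoodAverage(double[,] f, int radius = 1)
    {
        int rows = f.GetLength(0), cols = f.GetLength(1);
        var g = new double[rows, cols];
        for (int i = 0; i < rows; i++)
        {
            for (int j = 0; j < cols; j++)
            {
                double sum = 0.0;
                int n = 0;
                // M: coordinates of the neighborhood pixels, clipped at the image border.
                for (int di = -radius; di <= radius; di++)
                {
                    for (int dj = -radius; dj <= radius; dj++)
                    {
                        int ii = i + di, jj = j + dj;
                        if (ii >= 0 && ii < rows && jj >= 0 && jj < cols)
                        {
                            sum += f[ii, jj];
                            n++;
                        }
                    }
                }
                g[i, j] = sum / n;   // N: number of pixels actually inside the neighborhood
            }
        }
        return g;
    }
}
```

A larger radius corresponds to a larger template: noise is suppressed more strongly, at the cost of more blurring, as noted above.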
B. Scale normalization
Image translation and scale normalization eliminates the influence of translation and proportional scaling on the image through a transformation. Implemented in the C# programming language, the scale normalization in the present invention uses the method of standard moments: the origin of coordinates is first moved to the center of gravity of the image, the center of gravity (x0, y0) of the target being obtained from the standard moments. Because the center of gravity of the target is invariant to translation, scale and rotation, placing the origin of the image at the target's center of gravity solves the translation problem. A scale factor a = sqrt(T/m) is then defined to solve the scale problem. In practice, if the target pixel value on the binary image is 1 and the background pixel value is 0, then m is the area of the target; to give the target a fixed size, the scale-normalized target is obtained by the transformation g(x,y) = f(x/a + x0, y/a + y0). In this way the face region is enlarged.
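A C# sketch of this scale normalization, assuming a binary input image in which target pixels are 1 and background pixels are 0. The fixed target area T, the output size, nearest-neighbor sampling and centering the target in the output image are illustrative assumptions, not details given in the text.

```csharp
using System;

static class ScaleNormalize
{
    // Scale normalization by standard moments: move the origin to the target's
    // center of gravity (x0, y0) and rescale by a = sqrt(T / m), so that
    // g(x, y) = f(x / a + x0, y / a + y0).
    public static double[,] Normalize(double[,] f, double fixedArea, int outSize)
    {
        int rows = f.GetLength(0), cols = f.GetLength(1);

        // Zeroth- and first-order moments of the binary target (pixel value 1).
        double m = 0, sumX = 0, sumY = 0;
        for (int x = 0; x < rows; x++)
            for (int y = 0; y < cols; y++)
                if (f[x, y] > 0) { m++; sumX += x; sumY += y; }

        if (m == 0) return new double[outSize, outSize];   // no target pixels found

        double x0 = sumX / m, y0 = sumY / m;      // center of gravity of the target
        double a = Math.Sqrt(fixedArea / m);      // scale factor a = sqrt(T / m)

        var g = new double[outSize, outSize];
        int half = outSize / 2;
        for (int x = 0; x < outSize; x++)
        {
            for (int y = 0; y < outSize; y++)
            {
                // Output coordinates are taken relative to the output center so that
                // the target's center of gravity lands in the middle of the output.
                int srcX = (int)Math.Round((x - half) / a + x0);
                int srcY = (int)Math.Round((y - half) / a + y0);
                if (srcX >= 0 && srcX < rows && srcY >= 0 && srcY < cols)
                    g[x, y] = f[srcX, srcY];
            }
        }
        return g;
    }
}
```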
C. Gray-scale normalization
Gray-scale normalization is used to improve the quality of the image. Implemented in the C# programming language, the present invention adopts the most basic piecewise linear gray transformation, also called gray-level linear stretching; a three-segment piecewise linear transformation is used. As shown in Fig. 2, the gray interval [a, b] is expanded, while the gray intervals [0, a] and [b, c] are compressed. By carefully adjusting the positions of the breakpoints and the slopes of the line segments, any gray interval can be expanded or compressed, realizing gray-scale normalization. The facial expression color image is converted into a gray-scale image.
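A C# sketch of the three-segment piecewise linear transformation, with the breakpoints a, b and the output levels c, d supplied by the caller (the text does not fix them); it stretches [a, b] and compresses the two outer intervals.

```csharp
using System;

static class GrayNormalize
{
    // Three-segment piecewise linear gray transformation (gray-level stretching):
    // the input band [a, b] is mapped onto [c, d], while the bands below a and
    // above b are compressed. Requires 0 < a < b < maxGray and 0 <= c < d <= maxGray.
    public static double[,] PiecewiseLinear(double[,] img,
        double a, double b, double c, double d, double maxGray = 255.0)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        var outImg = new double[rows, cols];
        for (int i = 0; i < rows; i++)
        {
            for (int j = 0; j < cols; j++)
            {
                double v = img[i, j];
                double r;
                if (v < a)
                    r = (c / a) * v;                                   // compress [0, a] -> [0, c]
                else if (v <= b)
                    r = (d - c) / (b - a) * (v - a) + c;               // stretch  [a, b] -> [c, d]
                else
                    r = (maxGray - d) / (maxGray - b) * (v - b) + d;   // compress [b, max] -> [d, max]
                outImg[i, j] = r;
            }
        }
        return outImg;
    }
}
```

For example, a call such as PiecewiseLinear(gray, 50, 180, 20, 230) would expand the mid-gray band [50, 180] onto [20, 230].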
D. Image segmentation
The parts of the face that reflect changes in facial expression are the mouth and the eyes, so the regions extracted after image segmentation in the present invention are the eyes and the mouth. Implemented in the C# programming language, the present invention adopts serial region segmentation: the image is segmented in a serial manner by directly detecting the target regions. Its characteristic is that the whole process is decomposed into a sequence of steps executed one after another, where the processing of each subsequent step is determined by the result of the preceding step. The segmentation starts from the full image and gradually splits it into the required regions.
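The following C# sketch illustrates this serial idea in a strongly simplified form: starting from an already detected face region, it carves out an eye band and a mouth band in two successive steps. The fixed proportions and the assumption that a face bounding box is already available are illustrative only; the patent does not specify the detector or the region ratios.

```csharp
using System;

static class RegionSegmentation
{
    // Serial region segmentation, simplified: from a detected face region, carve
    // out the eye band and then the mouth band using fixed fractions of the face
    // height and width. The fractions below are illustrative assumptions.
    public static (double[,] eyes, double[,] mouth) Split(double[,] face)
    {
        int h = face.GetLength(0), w = face.GetLength(1);
        double[,] eyes = Crop(face, (int)(0.20 * h), (int)(0.50 * h), 0, w);
        double[,] mouth = Crop(face, (int)(0.65 * h), (int)(0.90 * h),
                               (int)(0.25 * w), (int)(0.75 * w));
        return (eyes, mouth);
    }

    private static double[,] Crop(double[,] img, int top, int bottom, int left, int right)
    {
        var outImg = new double[bottom - top, right - left];
        for (int i = top; i < bottom; i++)
            for (int j = left; j < right; j++)
                outImg[i - top, j - left] = img[i, j];
        return outImg;
    }
}
```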
E. Image binarization
Implemented in the C# programming language, the binarization method adopted in the present invention is the classical Otsu algorithm. Its basic idea is: if a threshold divides the gray-scale image into a target part and a background part according to gray level, the threshold for which the within-class variance of the two classes is minimal and the between-class variance is maximal is the optimal binarization threshold. For an image of N*M pixels, first compute the average gray level u of the image: count the number of pixels n(i) with gray level i over the whole image, then the average gray level is u = ∑i*n(i)/(M*N). Next, list the variables needed to solve for the optimal threshold t. Let t be the segmentation threshold between target and background, and let w1 be the proportion of target pixels (gray level greater than t) in the image: w1 = W1/(M*N), where W1 is the number of pixels with gray value greater than t. Let u1 be the average gray level of the target pixels: u1 = ∑i*n(i)/W1, i > t. In the same way, obtain the proportion w2 of background pixels in the image and the average gray level u2 of the background pixels. Finally, the optimal threshold t is the one that maximizes the between-class variance, i.e. that maximizes G = w1*(u1 - u)*(u1 - u) + w2*(u2 - u)*(u2 - u). When G is maximal, the optimal threshold has been obtained.
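A C# sketch of this Otsu threshold selection, assuming 8-bit gray levels (0-255); the exhaustive scan over candidate thresholds is the usual way of maximizing the between-class variance criterion given above.

```csharp
using System;

static class Binarize
{
    // Otsu thresholding: scan all candidate thresholds t and keep the one that
    // maximizes the between-class variance G = w1*(u1-u)^2 + w2*(u2-u)^2.
    public static int OtsuThreshold(int[,] img)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        int total = rows * cols;

        // Gray-level histogram n(i).
        var hist = new int[256];
        foreach (int v in img) hist[v]++;

        // Global mean u = sum(i * n(i)) / (M*N).
        double u = 0;
        for (int i = 0; i < 256; i++) u += (double)i * hist[i] / total;

        int bestT = 0;
        double bestG = -1.0;
        for (int t = 0; t < 256; t++)
        {
            long countFg = 0; double sumFg = 0;          // target pixels: gray value > t
            for (int i = t + 1; i < 256; i++) { countFg += hist[i]; sumFg += (double)i * hist[i]; }
            long countBg = total - countFg;              // background pixels: gray value <= t
            if (countFg == 0 || countBg == 0) continue;

            double w1 = (double)countFg / total, u1 = sumFg / countFg;
            double w2 = (double)countBg / total, u2 = (u * total - sumFg) / countBg;
            double G = w1 * (u1 - u) * (u1 - u) + w2 * (u2 - u) * (u2 - u);
            if (G > bestG) { bestG = G; bestT = t; }
        }
        return bestT;
    }

    // Apply the threshold: target pixels (> t) become 1, background pixels become 0.
    public static int[,] Apply(int[,] img, int t)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        var outImg = new int[rows, cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                outImg[i, j] = img[i, j] > t ? 1 : 0;
        return outImg;
    }
}
```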
13) Facial expression image feature extraction
The eye and mouth images are obtained by the image segmentation of step E above, and feature extraction is performed using geometric features: the positions of the eyes and mouth are located and measured, and features such as their size, distance, shape and mutual proportions are determined. The facial expression recognition algorithm of the present invention extracts geometric features with the Harris operator corner extraction algorithm, implemented in the C# programming language. Such detection algorithms require threshold settings, and the number of corner points differs from image to image. The corners of the eyes are extracted, and the same N points are manually fixed and selected (four points in total: the left and right eye corners and the upper and lower eyelids):
In the first step, the autocorrelation matrix M is computed for each pixel. The Harris operator replaces the binary window function with a Gaussian function, giving larger weights to pixels closer to the center point in order to reduce the influence of noise:
M = ∑ w(x,y) · [Ix²  Ix·Iy; Ix·Iy  Iy²]
where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x,y) is the Gaussian function:
w(x,y) = exp(-(x² + y²)/(2σ²))
In the second step, the Harris corner response of each pixel is computed:
R = (AB - CD)² - k(A + B)²
In the third step, maximum points are found within a w*w window, as shown in Fig. 3. If the Harris corner response is greater than the threshold (generally the maximum value within the w*w window), the point is regarded as a corner. In this way, the eye and mouth features are extracted.
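A C# sketch of Harris corner detection along these lines. It computes A = ∑w·Ix², B = ∑w·Iy² and C = D = ∑w·Ix·Iy over a Gaussian-weighted window and uses the common response form R = (AB - CD) - k(A + B)², with conventional values for the window size, σ and k = 0.04; these constants, and the exact form of the response, are assumptions rather than values given in the text.

```csharp
using System;
using System.Collections.Generic;

static class HarrisCorners
{
    // Harris corner detection: A = sum(w*Ix^2), B = sum(w*Iy^2), C = D = sum(w*Ix*Iy)
    // over a Gaussian-weighted window; response R = det(M) - k*trace(M)^2.
    public static List<(int x, int y)> Detect(double[,] img, int win = 2,
        double sigma = 1.0, double k = 0.04, double thresholdRatio = 0.01)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        var R = new double[rows, cols];
        double maxR = 0;

        for (int x = win + 1; x < rows - win - 1; x++)
        {
            for (int y = win + 1; y < cols - win - 1; y++)
            {
                double A = 0, B = 0, C = 0;
                for (int dx = -win; dx <= win; dx++)
                {
                    for (int dy = -win; dy <= win; dy++)
                    {
                        int px = x + dx, py = y + dy;
                        double ix = (img[px + 1, py] - img[px - 1, py]) / 2.0;  // difference in x
                        double iy = (img[px, py + 1] - img[px, py - 1]) / 2.0;  // difference in y
                        double w = Math.Exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma)); // Gaussian weight
                        A += w * ix * ix;
                        B += w * iy * iy;
                        C += w * ix * iy;
                    }
                }
                R[x, y] = (A * B - C * C) - k * (A + B) * (A + B);
                if (R[x, y] > maxR) maxR = R[x, y];
            }
        }

        // Keep local maxima whose response exceeds a fraction of the global maximum.
        var corners = new List<(int x, int y)>();
        for (int x = win + 1; x < rows - win - 1; x++)
            for (int y = win + 1; y < cols - win - 1; y++)
                if (R[x, y] > thresholdRatio * maxR && IsLocalMax(R, x, y, win))
                    corners.Add((x, y));
        return corners;
    }

    private static bool IsLocalMax(double[,] R, int x, int y, int win)
    {
        for (int dx = -win; dx <= win; dx++)
            for (int dy = -win; dy <= win; dy++)
                if (R[x + dx, y + dy] > R[x, y]) return false;
        return true;
    }
}
```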
Human beings have six main emotions, and each emotion reflects a unique psychological state through a unique expression. These six emotions, known as the basic emotions, are anger, happiness (smile), sadness, surprise, disgust and fear. Facial expression images representing these six expressions are collected and processed through steps 12) and 13), and the characteristic parameters representing the different expressions are stored in the rule base. The characteristic parameters of each expression are labeled with a corresponding word; for example, the characteristic parameters representing happiness are labeled "happiness".
In this way, the facial expression recognition expert system is established through the above steps.
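As an illustration of how such a rule base could be represented, the following C# sketch stores one characteristic-parameter vector per basic-emotion label. The dictionary layout, the type name ExpressionRuleBase, and the flattened feature vector are illustrative assumptions; the patent does not prescribe a data structure.

```csharp
using System.Collections.Generic;

// Rule base of the expression expert system, simplified: each basic-emotion label
// maps to the characteristic-parameter vector extracted from its training image.
static class ExpressionRuleBase
{
    public static readonly Dictionary<string, double[]> Rules = new Dictionary<string, double[]>();

    public static void AddRule(string emotion, double[] featureParameters)
    {
        Rules[emotion] = featureParameters;
    }
}

// Example of populating the rule base with the six basic emotions
// (ExtractFeatures and trainingImages are hypothetical helpers):
//   foreach (var emotion in new[] { "anger", "happiness", "sadness", "surprise", "disgust", "fear" })
//       ExpressionRuleBase.AddRule(emotion, ExtractFeatures(trainingImages[emotion]));
```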
2) Expression recognition and obtaining the facial expression recognition result
21) Obtain the user's facial expression image:
After the video information is received, an image is captured from it to obtain the user's facial expression image.
22) Image preprocessing:
The facial expression image obtained in step 21) is first preprocessed as in step 12): the face is detected and located, the image is then cropped, and finally the eye and mouth image features are obtained.
23) Image feature extraction:
Feature extraction is performed on the eye and mouth images obtained in step 22), and the geometric method of step 13) is used to extract the characteristic parameters of the eyes and mouth.
24) Expression recognition:
The characteristic parameters obtained in the previous step are input into the facial expression recognition expert system and compared with the facial expression characteristic parameters stored in the expert system rule base; the inference engine of the expert system then performs reasoning, and finally the expert system outputs the reasoning result. For example, if the input is a happy facial expression image, the output of the expert system is "smile".
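The comparison and inference step can be pictured with the following minimal C# sketch, which matches the extracted parameters against every stored rule and returns the closest label. Nearest-match by Euclidean distance is an illustrative stand-in for the inference engine; the patent does not specify the comparison rule.

```csharp
using System;
using System.Collections.Generic;

static class ExpressionInference
{
    // Simplified stand-in for the inference step: compare the extracted feature
    // parameters with every stored rule and return the label of the closest one.
    public static string Recognize(double[] features, IDictionary<string, double[]> ruleBase)
    {
        string best = "unknown";
        double bestDist = double.MaxValue;
        foreach (var rule in ruleBase)
        {
            double d = 0;
            int n = Math.Min(features.Length, rule.Value.Length);
            for (int i = 0; i < n; i++)
            {
                double diff = features[i] - rule.Value[i];
                d += diff * diff;   // squared Euclidean distance to this rule's parameters
            }
            if (d < bestDist) { bestDist = d; best = rule.Key; }
        }
        return best;
    }
}
```

With the rule base sketched earlier, a call such as ExpressionInference.Recognize(features, ExpressionRuleBase.Rules) would return a label such as "happiness", corresponding to the "smile" output mentioned above.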
As shown in Fig. 1, the expert system of the present invention has a learning function: after recognition is finished, the data of that recognition is also added to the rule base, and experts can additionally update the rule base and the program of this expert system manually.
Claims (7)
1. A facial expression recognition method based on an expert system, characterized in that an expert system for facial expression recognition, built on facial expression image processing methods and conventional computer program functions, performs inference on the preprocessed image to recognize the facial expression, the method comprising the following steps:
1) capture an image from a video, obtain the user information in the video, then perform identity verification through image processing and image feature extraction, obtain the characteristic parameters of the user's facial expression images, determine the user expression library, and establish the facial expression recognition expert system;
2) perform image processing and image feature extraction on the image captured from the video, obtain the characteristic parameters when the user's expression degree is maximal, compare these characteristic parameters with the training-sample parameters of the user expression library determined in step 1), and finally output the statistical result of expression recognition through the inference engine of the expert system.
2. The facial expression recognition method based on an expert system according to claim 1, characterized in that establishing the facial expression recognition expert system in step 1) specifically comprises the steps:
11) obtain a facial expression image;
12) preprocess the facial expression image;
13) extract the facial expression image features;
14) establish the rule base of the facial expression expert system and store the characteristic parameters extracted in step 13) in the rule base.
3. The facial expression recognition method based on an expert system according to claim 1, characterized in that obtaining the facial expression recognition result in step 2) specifically comprises the steps:
21) obtain the user's facial expression image: after the video information is received, capture an image from it to obtain the user's facial expression image;
22) preprocess the facial expression image obtained in step 21);
23) perform feature extraction on the eye and mouth images obtained in step 22);
24) expression recognition: input the characteristic parameters obtained in step 23) into the facial expression recognition expert system, compare them with the facial expression characteristic parameters stored in the expert system rule base, perform reasoning with the inference engine of the expert system, and output the reasoning result.
4. The facial expression recognition method based on an expert system according to claims 2 and 3, characterized in that the facial expression image preprocessing in steps 12) and 22) specifically comprises image denoising, scale normalization, gray-scale normalization, image segmentation and image binarization;
the denoised image g(i,j) obtained after image denoising is:
g(i,j) = ∑f(i,j)/N, (i,j) ∈ M
where f(i,j) is the given noisy image, M is the set of coordinates of the pixels in the chosen neighborhood, and N is the number of pixels contained in the neighborhood;
the target image g(x,y) obtained after scale normalization is:
g(x,y) = f(x/a + x0, y/a + y0)
where f(x,y) is the image before normalization, (x0, y0) is the center of gravity of the target area of the image, and a is the scale factor:
a = sqrt(T/m)
where m is the area of the target image and T is the area of the image before normalization;
the gray-scale normalization is a piecewise linear gray transformation;
the image segmentation is serial region segmentation, which segments the image by directly detecting the face and eye target regions;
the image binarization algorithm is the classical Otsu binarization method, which divides the gray-scale image into a target part and a background part using a gray threshold.
5. The facial expression recognition method based on an expert system according to claim 2 or 3, characterized in that the facial expression image features in steps 13) and 23) comprise left-eye features, right-eye features and mouth features, and the facial expression image feature extraction algorithm specifically comprises the following steps:
201) compute the autocorrelation matrix M for each pixel:
M = ∑ w(x,y) · [Ix²  Ix·Iy; Ix·Iy  Iy²]
where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x,y) is a Gaussian function:
w(x,y) = exp(-(x² + y²)/(2σ²))
202) compute the Harris corner response of each pixel: R = (AB - CD)² - k(A + B)²;
203) find the maximum points within a w*w window; if the Harris corner response is greater than the threshold, the threshold being the maximum value within the w*w window, the point is regarded as a corner, and in this way the eye and mouth features are extracted.
6. The facial expression recognition method based on an expert system according to claim 3, characterized in that after step 24) is finished, the system adds the data of this task to the rule base.
7. The facial expression recognition method based on an expert system according to claim 4, characterized in that the image binarization process specifically comprises the steps:
101) compute the average gray level u of the image:
u = ∑i*n(i)/(M*N)
where M*N is the number of pixels in the image and n(i) is the number of pixels with gray level i;
102) determine the gray threshold t, where the gray threshold t is the value that maximizes the between-class variance G, and the between-class variance G is:
G = w1*(u1 - u)*(u1 - u) + w2*(u2 - u)*(u2 - u)
where w1 is the proportion of target pixels in the image, w2 is the proportion of background pixels in the image, u1 is the average gray level of the target pixels, and u2 is the average gray level of the background pixels; the proportion w1 and average gray level u1 of the target pixels are:
w1 = W1/(M*N), u1 = ∑i*n(i)/W1, i > t
where W1 is the number of pixels with gray value greater than t;
the proportion w2 and average gray level u2 of the background pixels are:
w2 = W2/(M*N), u2 = ∑i*n(i)/W2, i < t
where W2 is the number of pixels with gray value less than t;
103) binarize the image by taking pixels with gray value greater than t as target pixels and pixels with gray value less than t as background pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410333366.0A CN104077579B (en) | 2014-07-14 | 2014-07-14 | Facial expression recognition method based on expert system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410333366.0A CN104077579B (en) | 2014-07-14 | 2014-07-14 | Facial expression recognition method based on expert system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104077579A true CN104077579A (en) | 2014-10-01 |
CN104077579B CN104077579B (en) | 2017-07-04 |
Family
ID=51598827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410333366.0A Expired - Fee Related CN104077579B (en) | 2014-07-14 | 2014-07-14 | Facial expression recognition method based on expert system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104077579B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318221A (en) * | 2014-11-05 | 2015-01-28 | 中南大学 | Facial expression recognition method based on ELM |
CN104596266A (en) * | 2014-12-25 | 2015-05-06 | 贵州永兴科技有限公司 | Informationized universal electric furnace having counting and face recognition functions |
CN104596250A (en) * | 2014-12-25 | 2015-05-06 | 贵州永兴科技有限公司 | Informationized universal electric furnace having counting and face recognition functions |
CN104634105A (en) * | 2014-12-25 | 2015-05-20 | 贵州永兴科技有限公司 | Flip type universal electric furnace switch with counting and human face recognizing functions |
CN104794444A (en) * | 2015-04-16 | 2015-07-22 | 美国掌赢信息科技有限公司 | Facial expression recognition method in instant video and electronic equipment |
CN104899255A (en) * | 2015-05-15 | 2015-09-09 | 浙江大学 | Image database establishing method suitable for training deep convolution neural network |
CN104951778A (en) * | 2015-07-24 | 2015-09-30 | 上海华旌科技有限公司 | Face recognition expert system based on semantic network |
CN105938390A (en) * | 2015-03-03 | 2016-09-14 | 卡西欧计算机株式会社 | Content output apparatus and content output method |
CN106778679A (en) * | 2017-01-05 | 2017-05-31 | 唐常芳 | A kind of specific crowd video frequency identifying method and system based on big data machine learning |
CN106919924A (en) * | 2017-03-07 | 2017-07-04 | 佛山市融信通企业咨询服务有限公司 | A kind of mood analysis system based on the identification of people face |
CN106919923A (en) * | 2017-03-07 | 2017-07-04 | 佛山市融信通企业咨询服务有限公司 | A kind of mood analysis method based on the identification of people face |
CN107945848A (en) * | 2017-11-16 | 2018-04-20 | 百度在线网络技术(北京)有限公司 | A kind of exercise guide implementation method, device, equipment and medium |
CN109034079A (en) * | 2018-08-01 | 2018-12-18 | 中国科学院合肥物质科学研究院 | A kind of human facial expression recognition method under the non-standard posture for face |
CN109159129A (en) * | 2018-08-03 | 2019-01-08 | 深圳市益鑫智能科技有限公司 | A kind of intelligence company robot based on facial expression recognition |
CN110249337A (en) * | 2017-05-01 | 2019-09-17 | 谷歌有限责任公司 | Using eye tracks camera to facial expression classification |
CN112968999A (en) * | 2021-02-25 | 2021-06-15 | 上海吉盛网络技术有限公司 | Digital-analog mixed elevator multi-party call device |
TWI731920B (en) * | 2017-01-19 | 2021-07-01 | 香港商斑馬智行網絡(香港)有限公司 | Image feature extraction method, device, terminal equipment and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060110014A1 (en) * | 2002-12-13 | 2006-05-25 | Koninklijke Philips Electronics, N.V. | Expression invariant face recognition |
CN102880855A (en) * | 2011-08-16 | 2013-01-16 | 武汉大学 | Cloud-model-based facial expression recognition method |
CN103268150A (en) * | 2013-05-13 | 2013-08-28 | 苏州福丰科技有限公司 | Intelligent robot management and control system and intelligent robot management and control method on basis of facial expression recognition |
CN103514441A (en) * | 2013-09-21 | 2014-01-15 | 南京信息工程大学 | Facial feature point locating tracking method based on mobile platform |
CN103824059A (en) * | 2014-02-28 | 2014-05-28 | 东南大学 | Facial expression recognition method based on video image sequence |
- 2014-07-14 CN CN201410333366.0A patent/CN104077579B/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060110014A1 (en) * | 2002-12-13 | 2006-05-25 | Koninklijke Philips Electronics, N.V. | Expression invariant face recognition |
CN102880855A (en) * | 2011-08-16 | 2013-01-16 | 武汉大学 | Cloud-model-based facial expression recognition method |
CN103268150A (en) * | 2013-05-13 | 2013-08-28 | 苏州福丰科技有限公司 | Intelligent robot management and control system and intelligent robot management and control method on basis of facial expression recognition |
CN103514441A (en) * | 2013-09-21 | 2014-01-15 | 南京信息工程大学 | Facial feature point locating tracking method based on mobile platform |
CN103824059A (en) * | 2014-02-28 | 2014-05-28 | 东南大学 | Facial expression recognition method based on video image sequence |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318221A (en) * | 2014-11-05 | 2015-01-28 | 中南大学 | Facial expression recognition method based on ELM |
CN104596266A (en) * | 2014-12-25 | 2015-05-06 | 贵州永兴科技有限公司 | Informationized universal electric furnace having counting and face recognition functions |
CN104596250A (en) * | 2014-12-25 | 2015-05-06 | 贵州永兴科技有限公司 | Informationized universal electric furnace having counting and face recognition functions |
CN104634105A (en) * | 2014-12-25 | 2015-05-20 | 贵州永兴科技有限公司 | Flip type universal electric furnace switch with counting and human face recognizing functions |
CN105938390B (en) * | 2015-03-03 | 2019-02-15 | 卡西欧计算机株式会社 | Content output apparatus, content outputting method |
CN105938390A (en) * | 2015-03-03 | 2016-09-14 | 卡西欧计算机株式会社 | Content output apparatus and content output method |
CN104794444A (en) * | 2015-04-16 | 2015-07-22 | 美国掌赢信息科技有限公司 | Facial expression recognition method in instant video and electronic equipment |
WO2016165614A1 (en) * | 2015-04-16 | 2016-10-20 | 美国掌赢信息科技有限公司 | Method for expression recognition in instant video and electronic equipment |
CN104899255A (en) * | 2015-05-15 | 2015-09-09 | 浙江大学 | Image database establishing method suitable for training deep convolution neural network |
CN104899255B (en) * | 2015-05-15 | 2018-06-26 | 浙江大学 | Suitable for the construction method of the image data base of training depth convolutional neural networks |
CN104951778A (en) * | 2015-07-24 | 2015-09-30 | 上海华旌科技有限公司 | Face recognition expert system based on semantic network |
CN106778679B (en) * | 2017-01-05 | 2020-10-30 | 唐常芳 | Specific crowd video identification method based on big data machine learning |
CN106778679A (en) * | 2017-01-05 | 2017-05-31 | 唐常芳 | A kind of specific crowd video frequency identifying method and system based on big data machine learning |
TWI731920B (en) * | 2017-01-19 | 2021-07-01 | 香港商斑馬智行網絡(香港)有限公司 | Image feature extraction method, device, terminal equipment and system |
CN106919924A (en) * | 2017-03-07 | 2017-07-04 | 佛山市融信通企业咨询服务有限公司 | A kind of mood analysis system based on the identification of people face |
CN106919923A (en) * | 2017-03-07 | 2017-07-04 | 佛山市融信通企业咨询服务有限公司 | A kind of mood analysis method based on the identification of people face |
CN110249337A (en) * | 2017-05-01 | 2019-09-17 | 谷歌有限责任公司 | Using eye tracks camera to facial expression classification |
CN107945848A (en) * | 2017-11-16 | 2018-04-20 | 百度在线网络技术(北京)有限公司 | A kind of exercise guide implementation method, device, equipment and medium |
US11389711B2 (en) | 2017-11-16 | 2022-07-19 | Baidu Online Network Technology (Beijing) Co., Ltd. | Fitness guidance method, device and storage medium |
CN109034079A (en) * | 2018-08-01 | 2018-12-18 | 中国科学院合肥物质科学研究院 | A kind of human facial expression recognition method under the non-standard posture for face |
CN109159129A (en) * | 2018-08-03 | 2019-01-08 | 深圳市益鑫智能科技有限公司 | A kind of intelligence company robot based on facial expression recognition |
CN112968999A (en) * | 2021-02-25 | 2021-06-15 | 上海吉盛网络技术有限公司 | Digital-analog mixed elevator multi-party call device |
Also Published As
Publication number | Publication date |
---|---|
CN104077579B (en) | 2017-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104077579A (en) | Facial expression image recognition method based on expert system | |
CN108921100B (en) | Face recognition method and system based on visible light image and infrared image fusion | |
Yuan et al. | Fingerprint liveness detection using an improved CNN with image scale equalization | |
CN111626371B (en) | Image classification method, device, equipment and readable storage medium | |
US20210034840A1 (en) | Method for Recognzing Face from Monitoring Video Data | |
CN104851140A (en) | Face recognition-based attendance access control system | |
CN107798279B (en) | Face living body detection method and device | |
CN105976809A (en) | Voice-and-facial-expression-based identification method and system for dual-modal emotion fusion | |
CN105354527A (en) | Negative expression recognizing and encouraging system | |
CN109145817A (en) | A kind of face In vivo detection recognition methods | |
CN111666845B (en) | Small sample deep learning multi-mode sign language recognition method based on key frame sampling | |
CN105335691A (en) | Smiling face identification and encouragement system | |
CN106446753A (en) | Negative expression identifying and encouraging system | |
Anand et al. | An improved local binary patterns histograms techniques for face recognition for real time application | |
CN104751186A (en) | Iris image quality classification method based on BP (back propagation) network and wavelet transformation | |
Zhao et al. | Applying contrast-limited adaptive histogram equalization and integral projection for facial feature enhancement and detection | |
CN110796101A (en) | Face recognition method and system of embedded platform | |
CN103593648A (en) | Face recognition method for open environment | |
CN113544735A (en) | Personal authentication apparatus, control method, and program | |
CN103207995A (en) | PCA (Principal Component Analysis)-based 3D (three dimensional) face identification method | |
CN107315985B (en) | Iris identification method and terminal | |
CN106909880A (en) | Facial image preprocess method in recognition of face | |
Hosur et al. | Facial emotion detection using convolutional neural networks | |
Ganguly et al. | Depth based occlusion detection and localization from 3D face image | |
CN112487904A (en) | Video image processing method and system based on big data analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170704 |
CF01 | Termination of patent right due to non-payment of annual fee | |