CN105868694A - Dual-mode emotion identification method and system based on facial expression and eyeball movement - Google Patents

Dual-mode emotion identification method and system based on facial expression and eyeball movement

Info

Publication number
CN105868694A
CN105868694A
Authority
CN
China
Prior art keywords
eye
facial expression
vector
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610173439.3A
Other languages
Chinese (zh)
Other versions
CN105868694B (en)
Inventor
刘振焘
吴敏
曹卫华
陈略峰
丁学文
潘芳芳
张日
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN201610173439.3A priority Critical patent/CN105868694B/en
Publication of CN105868694A publication Critical patent/CN105868694A/en
Application granted granted Critical
Publication of CN105868694B publication Critical patent/CN105868694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a dual-mode emotion identification method and system based on facial expression and eyeball movement. The method comprises the steps of acquisition, extraction of a facial expression feature vector, extraction of an eyeball movement feature vector, qualitative analysis of the emotional state, matching by time and storage, fusion and classification, and comparison of emotional information. According to the method and system provided by the present invention, the facial expression information of the subject can be dynamically and accurately extracted and analyzed, and a correlation between facial expression and emotion is established; rich eye movement information can be acquired accurately and efficiently by tracking with an eye tracker, and the emotional state of the subject is analyzed from the perspective of eyeball movement; the facial expression feature vector and the eyeball movement feature vector are processed by an SVR, so that the emotional state of the subject is obtained more accurately, thus improving the accuracy and reliability of emotion identification.

Description

Dual-mode emotion recognition method and system based on facial expression and eyeball movement
Technical field
The invention belongs to the field of emotion recognition, and more particularly relates to a dual-mode emotion recognition method and system based on facial expression and eyeball movement.
Background art
With the rapid development of information technology and mankind's growing reliance on robots, human-computer interaction has received wide attention. Current research on emotion recognition, both domestic and international, falls into two broad classes: emotion recognition based on a single modality, and emotion recognition based on multiple modalities. Single-modality emotion recognition collects information from one channel to identify the emotional state of the subject. Multi-modal emotion recognition analyzes information collected from multiple channels and obtains the subject's emotional state more accurately through a series of technical means. However, existing research methods still have obvious shortcomings.
In facial expression emotion recognition, most work remains at the level of basic expression recognition. The basic expressions do not cover all human emotions, and non-basic expressions such as subtle expressions and compound expressions have rarely been studied. The accuracy of facial expression recognition is also affected by external factors such as shooting angle and lighting changes, so improving the robustness of expression recognition remains an open problem. It is therefore difficult to obtain accurate emotion information from facial expression features alone; other modal information must be fused in so that the modalities complement each other.
Wang Jun studied facial expression recognition based on prior knowledge and proposed a method that uses AUs (action units) to assist expression recognition, but the samples used were face images containing only basic expressions, so the method cannot handle the emotion recognition of non-basic expressions such as subtle or compound expressions. Han Zhiyan et al., in their serial fusion and recognition method for multi-modal emotion information, obtained emotion recognition results by analyzing speech signals and facial expression information, but the recognition rate still leaves much room for improvement.
In most existing emotion recognition methods, the detected regions do not include the eyeball. In those existing methods that do detect the eyeball, the eye contour or eyeball is used only to assist the localization of facial expression images; the eyeball has not been studied in depth as an independent modality. Wang Jingli used the human eye as a feature point to solve the localization problem in automatic face recognition, but only as an aid to locating the face image, not as an independent modality for emotion recognition. Lv Yanpeng used the pupil to locate the face image and, as an aid to emotion recognition, judged from the change of the two eye centers in consecutive frames whether the subject nodded or shook the head, i.e. whether the emotion was affirmative or negative, but did not study the movement trajectory of the eyeball any further.
Summary of the invention
The technical problem to be solved by the present invention is to provide a dual-mode emotion recognition method and system that make full use of facial expression and eye movement information for emotion recognition and improve the accuracy of human emotion recognition.
The technical scheme of the present invention is as follows:
A dual-mode emotion recognition method based on facial expression and eyeball movement, characterized by comprising the following steps:
S1. Within a lighting range suitable for facial expression recognition, acquire frontal face images and eye movement information of the subject at different times;
S2. Extract a facial expression feature vector from the facial expression image, and extract an eye movement feature vector from the eye movement information;
S3. Perform qualitative analysis on the extracted facial expression feature vector to obtain a preliminary emotional state Z1;
S4. Match the facial expression feature vector and the eye movement feature vector by time and store them;
S5. Retrieve the matched facial expression feature vector and eye movement feature vector, and fuse and classify them by SVR;
S6. Compare the classified emotional feature vector with the emotional information in a pre-built emotional information database, match the closest emotional information with the classified emotional feature vector to obtain an emotional state Z2, and fuse Z2 with the preliminary emotional state Z1 according to a fusion ratio to obtain the subject's final emotional state Z.
The beneficial effects of the invention are as follows: the dual-mode emotion recognition method based on facial expression and eye movement detection dynamically and accurately extracts and analyzes the subject's facial expression features and establishes the relation between facial expression and emotion; rich eye movement information is acquired accurately and efficiently by eye tracker tracking, and the subject's emotional state is analyzed from the perspective of eye movement; processing the facial expression feature vector and the eye movement feature vector with an SVR yields the subject's emotional state more accurately, thereby improving the accuracy and reliability of emotion recognition.
On the basis of the above technical scheme, the present invention can be further improved as follows.
Further, the concrete steps of extracting the facial expression feature vector from the facial expression image in step S2 are:
S2a.1. Read the facial expression image, estimate the approximate positions of the facial features with the crown of the head as the reference point, and uniformly place mark points on the contour of each facial feature;
S2a.2. Divide the face into two symmetrical parts by the axis fitted through three points: the point between the eyebrows, the midpoint of the line connecting the two pupils, and the center of the mouth; under the conditions of no scaling, no translation and no rotation, adjust the image so that mark points symmetrical about the axis are aligned to the same horizontal line, and build the facial expression shape model;
S2a.3. Divide the facial expression shape model into different regions according to left eye/right eye, left eyebrow/right eyebrow and mouth, and define these regions as feature candidate regions;
S2a.4. For each feature candidate region, extract the feature vector by the difference image method: perform difference operations between all image sequences processed in the previous step and the neutral-expression image in the database, and extract the facial expression feature vector from the image sequence with the largest mean difference value in each feature candidate region.
The beneficial effects of this further scheme are: taking the crown of the head as the reference point allows the approximate positions of the facial features to be judged effectively, so that useful information is captured more quickly. Placing mark points on each facial feature region and normalizing the face image by aligning corresponding mark points effectively reduces the interference introduced by shooting and data processing. Dividing the facial expression shape model into candidate regions and then extracting facial expression emotion information, with the difference image method, from the image sequence with the largest difference value in each candidate region effectively reduces the amount of computation. This step thus reduces the interference of irrelevant factors on the data and applies further processing only to the most useful data, greatly increasing the processing speed and efficiency.
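As an illustration of the difference image method of step S2a.4, the following minimal Python sketch selects, for one feature candidate region, the frame whose mean difference from the neutral-expression image in the database is largest; the function name, the array shapes and the per-region cropping are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def select_expressive_frame(region_frames, neutral_region):
    """Difference image method for one feature candidate region.

    region_frames : grayscale crops (H, W) of the same candidate region
                    (e.g. the mouth) across an image sequence.
    neutral_region: grayscale crop of that region taken from the
                    neutral-expression reference image in the database.

    Returns the index of the frame whose mean absolute difference from
    the neutral image is largest (the most expressive frame) and the
    flattened difference image used as the per-region feature vector.
    """
    diffs = [np.abs(f.astype(np.int16) - neutral_region.astype(np.int16))
             for f in region_frames]
    means = [d.mean() for d in diffs]
    best = int(np.argmax(means))            # largest mean difference value
    feature_vector = diffs[best].flatten()  # per-region expression feature
    return best, feature_vector
```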
Further, the concrete steps of extracting the eye movement feature vector from the eye movement information in step S2 are:
S2b.1. Classify the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, obtaining eight classes of eye movement feature vectors;
S2b.2. Draw the eye movement information as an eye movement trajectory graph according to the eye movement trajectory, and store the trajectory graph;
S2b.3. From the fixation time and fixation count produced by the classification, obtain the eye movement heat map with the eye movement test software GazeLab, and store the heat map.
The beneficial effects of this further scheme are: the eye movement trajectory graph clearly shows how the subject's point of attention changes, from which the subject's preferences and habits can be judged, and the eye movement heat map reflects changes in the subject's attention in greater depth. Together, the trajectory graph and the heat map let a user of the invention learn, while obtaining the subject's emotion information, how the subject's attention and points of interest change, and they also provide supporting data for subsequent, deeper emotion recognition research.
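The eight classes of eye movement features of step S2b.1 can be pictured as a simple record, sketched below; the field names and units are assumptions, since the patent names the eight quantities but not a data layout.

```python
from dataclasses import dataclass

@dataclass
class EyeMovementFeatures:
    """The eight classes of eye movement features listed in S2b.1."""
    trajectory: list        # sampled gaze points [(x, y), ...]
    movement_time: float    # eye movement time (s)
    direction: float        # eye movement direction (rad)
    distance: float         # eye movement distance (px)
    fixation_time: float    # total fixation time (s)
    fixation_count: int     # number of fixations
    pupil_diameter: float   # pupil diameter (mm)
    blink_count: int        # number of blinks

    def as_vector(self):
        # Flatten the scalar features into the eye movement feature
        # vector n that is later time-matched and fused by the SVR.
        return [self.movement_time, self.direction, self.distance,
                self.fixation_time, self.fixation_count,
                self.pupil_diameter, self.blink_count]
```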
Further, the concrete steps of step S3 are:
S3.1. Collect in advance multiple groups of standard reference facial expression features; each group of standard reference facial expression features consists of five kinds: left eye/left eyebrow, right eye/right eyebrow and mouth;
S3.2. Extract the facial expression feature vector obtained from the facial expression image in step S2 according to the same five kinds as in the previous step;
S3.3. Taking the five reference facial expression features of step S3.1 as the benchmark, perform qualitative analysis on the facial expression features of step S3.2 and determine the preliminary emotional state by voting. If a conflicting vote of 2:2:1 occurs, the feature that cast the single vote re-votes after qualitative analysis, choosing whichever of the other two emotional states it is closest to, thereby obtaining the preliminary emotional state Z1 (and assigning a value to Z1 according to the preliminary emotional state): if the preliminary emotional state is positive, Z1 = 1; if neutral, Z1 = 0; if negative, Z1 = -1.
The beneficial effects of this further scheme are: by collecting multiple groups of standard reference facial expression features in advance and comparing the subject's facial expression features against them, this step derives the preliminary emotional state Z1, providing data support for the later fusion with the emotional state Z2.
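A minimal sketch of the voting rule of step S3.3, including the 2:2:1 conflict resolution, is given below; how each of the five facial features scores its closeness to the three emotional states is an assumption (the patent derives it from comparison against the reference features).

```python
from collections import Counter

def preliminary_state(votes):
    """Voting of S3.3 over the five facial features.

    votes: list of five (state, similarities) pairs, one per feature
           (left eye, left eyebrow, right eye, right eyebrow, mouth);
           state is 'positive' | 'neutral' | 'negative', and
           similarities maps each state to a closeness score (assumed).
    """
    tally = Counter(state for state, _ in votes)
    if sorted(tally.values()) == [1, 2, 2]:          # the 2:2:1 conflict
        minority = min(tally, key=tally.get)
        for i, (state, sims) in enumerate(votes):
            if state == minority:                    # the single-vote feature
                others = [s for s in tally if s != minority]
                # re-vote for whichever of the other two states is closest
                votes[i] = (max(others, key=sims.get), sims)
        tally = Counter(state for state, _ in votes)
    winner = tally.most_common(1)[0][0]
    return {'positive': 1, 'neutral': 0, 'negative': -1}[winner]  # Z1
```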
Further, the concrete steps of step S5 are:
Let the facial expression feature vector and the eye movement feature vector be $m$ and $n$ respectively. Within one sampling period, the set of facial expression feature vectors is $X=[m_1,m_2,\ldots,m_{i-1},m_i]$ and the set of eye movement feature vectors is $Y=[n_1,n_2,\ldots,n_{i-1},n_i]$, where $i$ is the time series index, $i\ge 1$. Taking $X$ and $Y$ as input, the output $f(x)$ is obtained by SVR. The input is described as the sample set $\{(x_j,y_j)\}$, $j\ge 1$, and the linear relationship is described as $f(x)=\langle v,x\rangle+b$, $v\in R^k$, $b\in R$, $k\ge 1$, where $v$ and $b$ are the parameters of the hyperplane and $\langle\cdot,\cdot\rangle$ denotes the inner product. The minimal error $\min\frac{1}{2}\|v\|^2$ is sought, subject to the constraint that the deviation between the actual and the target values be at most $\varepsilon$: $y_j-\langle v,x_j\rangle-b\le\varepsilon$ and $\langle v,x_j\rangle+b-y_j\le\varepsilon$, $j\ge 1$, $\varepsilon\ge 0$. To eliminate the influence of abnormal samples, the slack variables $\xi_j,\xi_j^*$ and a parameter $C$ are introduced, so that the minimal error becomes $\min\frac{1}{2}\|v\|^2+C\sum_j(\xi_j+\xi_j^*)$, subject to $y_j-\langle v,x_j\rangle-b\le\varepsilon+\xi_j^*$, $\langle v,x_j\rangle+b-y_j\le\varepsilon+\xi_j$, $j\ge 1$, $\varepsilon,\xi_j,\xi_j^*\ge 0$, and a Lagrangian function is introduced. The regression function of $f(x)$ is finally obtained by calculation: $f(x)=\sum_j(\alpha_j-\alpha_j^*)\langle x_j,x\rangle+b$, where $\alpha_j,\alpha_j^*$ are the Lagrange multipliers, $0\le\alpha_j,\alpha_j^*\le C$.
The beneficial effects of this further scheme are: this step innovates by fusing and classifying the eye movement feature vector and the facial expression feature vector with an SVR classifier, and the result serves as the data for the next processing step. Eye movement information plays an important role in emotion recognition; fusing the eye movement feature vector with the facial expression feature vector reveals subtle emotional changes of the subject that earlier emotion recognition methods could not detect, thereby strengthening the emotion recognition capability and improving its accuracy and reliability.
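For concreteness, the fusion and classification step can be sketched with scikit-learn's epsilon-SVR as below; the concatenation of the time-matched vectors, the linear kernel and the placeholder data are assumptions, since the patent specifies an SVR with parameters C and ε but no particular library.

```python
import numpy as np
from sklearn.svm import SVR

def fuse_features(X, Y):
    """Concatenate time-matched facial (X) and eye movement (Y)
    feature vectors into one joint sample per time step."""
    return np.hstack([X, Y])

# X: facial expression feature vectors m_1..m_i within one sampling period
# Y: eye movement feature vectors n_1..n_i, matched by time in step S4
X = np.random.rand(50, 12)               # placeholder facial features
Y = np.random.rand(50, 7)                # placeholder eye movement features
targets = np.random.uniform(-1, 1, 50)   # placeholder emotion targets

svr = SVR(kernel="linear", C=1.0, epsilon=0.1)   # f(x) = <v, x> + b
svr.fit(fuse_features(X, Y), targets)

print(svr.predict(fuse_features(X[:1], Y[:1])))  # fused emotion score
```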
Further, the fusion ratio in step S6 is w1:w2, where w1 and w2 are variables satisfying 0 ≤ w1, w2 ≤ 1, Z = w1Z1 + w2Z2 and w1 + w2 = 1.
The beneficial effects of this further scheme are: fusing the preliminarily obtained emotional state Z1 with the further obtained emotional state Z2 in this way, with weights determined by calculation, effectively improves the accuracy of emotion recognition.
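The decision-level fusion of step S6 then reduces to a weighted sum, as in the short sketch below; the weight values shown are illustrative, since the patent only requires 0 ≤ w1, w2 ≤ 1 and w1 + w2 = 1, with the optimal ratio obtained by training.

```python
def final_state(z1, z2, w1=0.4, w2=0.6):
    """Fusion Z = w1*Z1 + w2*Z2 of step S6 (weight values illustrative)."""
    assert 0 <= w1 <= 1 and 0 <= w2 <= 1 and abs(w1 + w2 - 1) < 1e-9
    return w1 * z1 + w2 * z2

# e.g. preliminary state Z1 = 1 (positive), matched state Z2 = 0.5
print(final_state(1, 0.5))  # 0.7, leaning positive
```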
A dual-mode emotion recognition system based on facial expression and eyeball movement comprises the following modules:
an acquisition module, configured to acquire frontal face images and eye movement information of the subject at different times within a lighting range suitable for facial expression recognition;
a feature vector extraction module, configured to extract the facial expression feature vector from the facial expression image and the eye movement feature vector from the eye movement information;
an emotion qualitative analysis module, configured to perform qualitative analysis on the extracted facial expression feature vector and obtain the preliminary emotional state Z1;
a match-by-time storage module, configured to match the facial expression feature vector and the eye movement feature vector by time and store them;
a fusion and classification module, configured to retrieve the matched facial expression feature vector and eye movement feature vector, fuse and classify them by SVR, and obtain the classified emotional feature vector;
a final emotional state confirmation module, configured to compare the classified emotional feature vector with the emotional information in the pre-built emotional information database, match the closest emotional information with the classified emotional feature vector to obtain the emotional state Z2, and fuse Z2 with the preliminary emotional state Z1 according to the fusion ratio to obtain the subject's final emotional state Z. Further, the feature vector extraction module comprises a facial expression feature vector extraction module and an eye movement feature vector extraction module; the facial expression feature vector extraction module comprises the following units:
a mark point placement unit, configured to read the facial expression image, estimate the approximate positions of the facial features with the crown of the head as the reference point, and uniformly place mark points on the contour of each facial feature;
a mark point alignment unit, configured to divide the face into two symmetrical parts by the axis fitted through the point between the eyebrows, the midpoint of the line connecting the two pupils and the center of the mouth, and, under the conditions of no scaling, no translation and no rotation, adjust the image so that mark points symmetrical about the axis are aligned to the same horizontal line, and build the facial expression shape model;
a feature candidate region division unit, configured to divide the facial expression shape model into different regions according to left eye/right eye, left eyebrow/right eyebrow and mouth, and define these regions as feature candidate regions;
an information extraction unit, configured to extract, for each feature candidate region, the feature vector by the difference image method: performing difference operations between all image sequences processed in the previous step and the neutral-expression image in the database, and extracting the facial expression feature vector from the image sequence with the largest mean difference value in each feature candidate region.
Further, the eye movement feature vector extraction module specifically comprises the following units:
an eye movement information classification unit, configured to classify the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, obtaining eight classes of eye movement feature vectors;
an eye movement trajectory graph generation unit, configured to draw the eye movement information as an eye movement trajectory graph according to the eye movement trajectory, and store it;
an eye movement heat map generation unit, configured to obtain, from the fixation time and fixation count produced by the classification, the eye movement heat map with GazeLab, and store it.
Further, the fusion and classification module is configured to perform the following by SVR: let the facial expression feature vector and the eye movement feature vector be $m$ and $n$ respectively; within one sampling period, the set of facial expression feature vectors is $X=[m_1,m_2,\ldots,m_{i-1},m_i]$ and the set of eye movement feature vectors is $Y=[n_1,n_2,\ldots,n_{i-1},n_i]$, where $i$ is the time series index, $i\ge 1$. Taking $X$ and $Y$ as input, the output $f(x)$ is obtained by SVR. The input is described as the sample set $\{(x_j,y_j)\}$, $j\ge 1$, and the linear relationship is described as $f(x)=\langle v,x\rangle+b$, $v\in R^k$, $b\in R$, $k\ge 1$, where $v$ and $b$ are the parameters of the hyperplane and $\langle\cdot,\cdot\rangle$ denotes the inner product. The minimal error $\min\frac{1}{2}\|v\|^2$ is sought, subject to the constraint that the deviation between the actual and the target values be at most $\varepsilon$: $y_j-\langle v,x_j\rangle-b\le\varepsilon$ and $\langle v,x_j\rangle+b-y_j\le\varepsilon$, $j\ge 1$, $\varepsilon\ge 0$. To eliminate the influence of abnormal samples, the slack variables $\xi_j,\xi_j^*$ and a parameter $C$ are introduced, so that the minimal error becomes $\min\frac{1}{2}\|v\|^2+C\sum_j(\xi_j+\xi_j^*)$, subject to $y_j-\langle v,x_j\rangle-b\le\varepsilon+\xi_j^*$, $\langle v,x_j\rangle+b-y_j\le\varepsilon+\xi_j$, $j\ge 1$, $\varepsilon,\xi_j,\xi_j^*\ge 0$, and a Lagrangian function is introduced. The regression function of $f(x)$ is finally obtained by calculation: $f(x)=\sum_j(\alpha_j-\alpha_j^*)\langle x_j,x\rangle+b$, where $\alpha_j,\alpha_j^*$ are the Lagrange multipliers, $0\le\alpha_j,\alpha_j^*\le C$.
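To make the division of labour among the modules concrete, a pipeline skeleton is sketched below; the class and method names and the wiring are illustrative assumptions that mirror the module list, not an API defined by the patent.

```python
class DualModeEmotionRecognizer:
    """Skeleton wiring the six system modules into the method's steps."""

    def __init__(self, acquisition, extractor, qualitative, matcher,
                 fusion_classifier, confirmer):
        self.acquisition = acquisition   # acquisition module
        self.extractor = extractor       # feature vector extraction module
        self.qualitative = qualitative   # emotion qualitative analysis module
        self.matcher = matcher           # match-by-time storage module
        self.fusion = fusion_classifier  # fusion and classification module
        self.confirmer = confirmer       # final state confirmation module

    def run(self):
        face_img, eye_info = self.acquisition.acquire()        # S1
        m, n = self.extractor.extract(face_img, eye_info)      # S2
        z1 = self.qualitative.analyze(m)                       # S3
        X, Y = self.matcher.match_and_store(m, n)              # S4
        fused = self.fusion.fuse_and_classify(X, Y)            # S5
        z2 = self.confirmer.match_database(fused)              # S6
        return self.confirmer.fuse(z1, z2)                     # final Z
```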
Brief description of the drawings
Fig. 1 is a general schematic diagram of the method of the invention;
Fig. 2 is a schematic flowchart of extracting the facial expression feature vector from the facial expression image in step S2 of the method;
Fig. 3 is a schematic flowchart of extracting the eye movement feature vector from the eye movement information in step S2 of the method;
Fig. 4 is a schematic flowchart of step S3 of the method;
Fig. 5 is a schematic flowchart of step S5 of the method;
Fig. 6 is a general schematic diagram of the system of the invention.
Detailed description of the invention
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples serve only to explain the invention and are not intended to limit its scope.
As shown in Fig. 1, the dual-mode emotion recognition method based on facial expression and eyeball movement comprises the following steps:
S1. Within a lighting range suitable for facial expression recognition, acquire frontal face images and eye movement information of the subject at different times;
S2. Extract a facial expression feature vector from the facial expression image, and extract an eye movement feature vector from the eye movement information;
S3. Perform qualitative analysis on the extracted facial expression feature vector to obtain a preliminary emotional state Z1;
S4. Match the facial expression feature vector and the eye movement feature vector by time and store them;
S5. Retrieve the matched facial expression feature vector and eye movement feature vector, and fuse and classify them by SVR;
S6. Compare the classified emotional feature vector with the emotional information in a pre-built emotional information database, match the closest emotional information with the classified emotional feature vector to obtain an emotional state Z2, and fuse Z2 with the preliminary emotional state Z1 according to the fusion ratio to obtain the subject's final emotional state Z.
As shown in Fig. 2, the concrete steps of extracting the facial expression feature vector from the facial expression image in step S2 are:
S2a.1. Read the facial expression image, estimate the approximate positions of the facial features with the crown of the head as the reference point, and uniformly place mark points on the contour of each facial feature;
S2a.2. Divide the face into two symmetrical parts by the axis fitted through the point between the eyebrows, the midpoint of the line connecting the two pupils and the center of the mouth; under the conditions of no scaling, no translation and no rotation, adjust the image so that mark points symmetrical about the axis are aligned to the same horizontal line, and build the facial expression shape model;
S2a.3. Divide the facial expression shape model into different regions according to left eye/right eye, left eyebrow/right eyebrow and mouth, and define these regions as feature candidate regions;
S2a.4. For each feature candidate region, extract the feature vector by the difference image method: perform difference operations between all image sequences processed in the previous step and the neutral-expression image in the database, and extract the facial expression feature vector from the image sequence with the largest mean difference value in each feature candidate region.
As shown in Fig. 3, the concrete steps of extracting the eye movement feature vector from the eye movement information in step S2 are:
S2b.1. Classify the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, obtaining eight classes of eye movement feature vectors;
S2b.2. Draw the eye movement information as an eye movement trajectory graph according to the eye movement trajectory, and store the trajectory graph;
S2b.3. From the fixation time and fixation count produced by the classification, obtain the eye movement heat map with the eye movement test software GazeLab, and store the heat map.
As shown in Fig. 4, the concrete steps of step S3 are:
S3.1. Collect in advance multiple groups of standard reference facial expression features; each group of standard reference facial expression features includes five kinds: left eye/right eye, left eyebrow/right eyebrow and mouth;
S3.2. Extract the facial expression feature vector obtained from the facial expression image in step S2 according to the same five kinds as in the previous step;
S3.3. Taking the five reference facial expression features of step S3.1 as the benchmark, perform qualitative analysis on the facial expression features of step S3.2 and determine the preliminary emotional state by voting. If a conflicting vote of 2:2:1 occurs, the feature that cast the single vote re-votes after qualitative analysis, choosing whichever of the other two emotional states it is closest to, thereby obtaining the preliminary emotional state Z1 (and assigning a value to Z1 according to the preliminary emotional state): if the preliminary emotional state is positive, Z1 = 1; if neutral, Z1 = 0; if negative, Z1 = -1.
As shown in Fig. 5, the concrete steps of step S5 are:
Let the facial expression feature vector and the eye movement feature vector be $m$ and $n$ respectively. Within one sampling period, the set of facial expression feature vectors is $X=[m_1,m_2,\ldots,m_{i-1},m_i]$ and the set of eye movement feature vectors is $Y=[n_1,n_2,\ldots,n_{i-1},n_i]$, where $i$ is the time series index, $i\ge 1$. Taking $X$ and $Y$ as input, the output $f(x)$ is obtained by SVR. The input is described as the sample set $\{(x_j,y_j)\}$, $j\ge 1$, and the linear relationship is described as $f(x)=\langle v,x\rangle+b$, $v\in R^k$, $b\in R$, $k\ge 1$, where $v$ and $b$ are the parameters of the hyperplane and $\langle\cdot,\cdot\rangle$ denotes the inner product. The minimal error $\min\frac{1}{2}\|v\|^2$ is sought, subject to the constraint that the deviation between the actual and the target values be at most $\varepsilon$: $y_j-\langle v,x_j\rangle-b\le\varepsilon$ and $\langle v,x_j\rangle+b-y_j\le\varepsilon$, $j\ge 1$, $\varepsilon\ge 0$. To eliminate the influence of abnormal samples, the slack variables $\xi_j,\xi_j^*$ and a parameter $C$ are introduced, so that the minimal error becomes $\min\frac{1}{2}\|v\|^2+C\sum_j(\xi_j+\xi_j^*)$, subject to $y_j-\langle v,x_j\rangle-b\le\varepsilon+\xi_j^*$, $\langle v,x_j\rangle+b-y_j\le\varepsilon+\xi_j$, $j\ge 1$, $\varepsilon,\xi_j,\xi_j^*\ge 0$, and a Lagrangian function is introduced. The regression function of $f(x)$ is finally obtained by calculation: $f(x)=\sum_j(\alpha_j-\alpha_j^*)\langle x_j,x\rangle+b$, where $\alpha_j,\alpha_j^*$ are the Lagrange multipliers, $0\le\alpha_j,\alpha_j^*\le C$.
The fusion ratio in step S6 is w1:w2, where w1 and w2 are variables satisfying 0 ≤ w1, w2 ≤ 1, Z = w1Z1 + w2Z2 and w1 + w2 = 1.
As shown in Fig. 6, the dual-mode emotion recognition system based on facial expression and eyeball movement comprises the following modules:
an acquisition module, configured to acquire frontal face images and eye movement information of the subject at different times within a lighting range suitable for facial expression recognition;
a feature vector extraction module, configured to extract the facial expression feature vector from the facial expression image and the eye movement feature vector from the eye movement information;
an emotion qualitative analysis module, configured to perform qualitative analysis on the extracted facial expression feature vector and obtain the preliminary emotional state Z1;
a match-by-time storage module, configured to match the facial expression feature vector and the eye movement feature vector by time and store them;
a fusion and classification module, configured to retrieve the matched facial expression feature vector and eye movement feature vector, fuse and classify them by SVR, and obtain the classified emotional feature vector;
a final emotional state confirmation module, configured to compare the classified emotional feature vector with the emotional information in the pre-built emotional information database, match the closest emotional information with the classified emotional feature vector to obtain the emotional state Z2, and fuse Z2 with the preliminary emotional state Z1 according to the fusion ratio to obtain the subject's final emotional state Z.
The feature vector extraction module comprises a facial expression feature vector extraction module and an eye movement feature vector extraction module; the facial expression feature vector extraction module comprises the following units:
a mark point placement unit, configured to read the facial expression image, estimate the approximate positions of the facial features with the crown of the head as the reference point, and uniformly place mark points on the contour of each facial feature;
a mark point alignment unit, configured to divide the face into two symmetrical parts by the axis fitted through the point between the eyebrows, the midpoint of the line connecting the two pupils and the center of the mouth, and, under the conditions of no scaling, no translation and no rotation, adjust the image so that mark points symmetrical about the axis are aligned to the same horizontal line, and build the facial expression shape model;
a feature candidate region division unit, configured to divide the facial expression shape model into different regions according to left eye/right eye, left eyebrow/right eyebrow and mouth, and define these regions as feature candidate regions;
an information extraction unit, configured to extract, for each feature candidate region, the feature vector by the difference image method: performing difference operations between all image sequences processed in the previous step and the neutral-expression image in the database, and extracting the facial expression feature vector from the image sequence with the largest mean difference value in each feature candidate region.
The eye movement feature vector extraction module specifically comprises the following units:
an eye movement information classification unit, configured to classify the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, obtaining eight classes of eye movement feature vectors;
an eye movement trajectory graph generation unit, configured to draw the eye movement information as an eye movement trajectory graph according to the eye movement trajectory, and store it;
an eye movement heat map generation unit, configured to obtain, from the fixation time and fixation count produced by the classification, the eye movement heat map with GazeLab, and store it.
The fusion and classification module is configured to perform the following by SVR: let the facial expression feature vector and the eye movement feature vector be $m$ and $n$ respectively; within one sampling period, the set of facial expression feature vectors is $X=[m_1,m_2,\ldots,m_{i-1},m_i]$ and the set of eye movement feature vectors is $Y=[n_1,n_2,\ldots,n_{i-1},n_i]$, where $i$ is the time series index, $i\ge 1$. Taking $X$ and $Y$ as input, the output $f(x)$ is obtained by SVR. The input is described as the sample set $\{(x_j,y_j)\}$, $j\ge 1$, and the linear relationship is described as $f(x)=\langle v,x\rangle+b$, $v\in R^k$, $b\in R$, $k\ge 1$, where $v$ and $b$ are the parameters of the hyperplane and $\langle\cdot,\cdot\rangle$ denotes the inner product. The minimal error $\min\frac{1}{2}\|v\|^2$ is sought, subject to the constraint that the deviation between the actual and the target values be at most $\varepsilon$: $y_j-\langle v,x_j\rangle-b\le\varepsilon$ and $\langle v,x_j\rangle+b-y_j\le\varepsilon$, $j\ge 1$, $\varepsilon\ge 0$. To eliminate the influence of abnormal samples, the slack variables $\xi_j,\xi_j^*$ and a parameter $C$ are introduced, so that the minimal error becomes $\min\frac{1}{2}\|v\|^2+C\sum_j(\xi_j+\xi_j^*)$, subject to $y_j-\langle v,x_j\rangle-b\le\varepsilon+\xi_j^*$, $\langle v,x_j\rangle+b-y_j\le\varepsilon+\xi_j$, $j\ge 1$, $\varepsilon,\xi_j,\xi_j^*\ge 0$, and a Lagrangian function is introduced. The regression function of $f(x)$ is finally obtained by calculation: $f(x)=\sum_j(\alpha_j-\alpha_j^*)\langle x_j,x\rangle+b$, where $\alpha_j,\alpha_j^*$ are the Lagrange multipliers, $0\le\alpha_j,\alpha_j^*\le C$.
Embodiment
The dual-mode emotion recognition method based on facial expression and eye movement detection of the present invention comprises the following steps:
Step 1-1. Use a high-speed camera and a Kinect to acquire face images. Collect multiple groups of face images in advance, covering the essential features of the three emotions positive, negative and neutral, including the form of the left eye/right eye, the height of the left eyebrow/right eyebrow and the form of the mouth. Extract emotional feature vectors from the face images of the analyzed subject collected in real time, and assign tendency-degree values to the three emotions from the emotional feature vectors;
Step 1-2. Acquire a sufficient number of images, stimulate the subject with the images, and output through GazeLab the heat maps and trajectory graphs for the three emotions positive, negative and neutral. Analyze the hotspot distribution density and movement trajectories of the heat maps, and classify the heat maps by distribution density;
Step 2. Estimate the approximate positions of the facial features and mark a number of feature points, locating the face by rotating about the crown of the head as the base point. Normalize the image by reading the feature point coordinates and aligning the feature points, which facilitates subsequent data processing. The invention divides the face into several feature candidate regions and qualitatively classifies the image regions, determines the image with the richest facial expression by the difference method, and extracts facial expression feature values from the feature candidate regions of that image. Finally, the facial expression feature vector of each subregion of this image is extracted by the AAM method;
Step 3-1. Collect in advance multiple groups of standard reference facial expression features; each group of standard reference facial expression features comprises five kinds: left eye/left eyebrow, right eye/right eyebrow and mouth;
Step 3-2. Taking the five reference facial expression features collected in advance as the benchmark, perform qualitative analysis on the facial expression feature vector extracted in step 2, and determine the preliminary emotional state by voting. If a conflicting vote of 2:2:1 occurs, the feature that cast the single vote re-votes after qualitative analysis, choosing whichever of the other two emotional states it is closest to, thereby obtaining the preliminary emotional state Z1 (and assigning a value to Z1 according to the preliminary emotional state): if the preliminary emotional state is positive, Z1 = 1; if neutral, Z1 = 0; if negative, Z1 = -1;
Step 4. Match the facial expression feature vector and the eye movement feature vector by time and store them;
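Step 4 can be sketched as a nearest-timestamp pairing of the two feature streams, as below; the timestamped tuples and the tolerance parameter are assumptions, since the patent only states that the vectors are matched by time and stored.

```python
def match_by_time(facial, eye, tol=0.02):
    """Pair each facial expression feature vector with the eye movement
    feature vector closest in time.

    facial, eye: lists of (timestamp_seconds, feature_vector),
                 each sorted by timestamp.
    tol        : maximum allowed time difference for a valid pair (s).
    """
    matched, j = [], 0
    if not eye:
        return matched
    for t, m in facial:
        # advance to the eye sample nearest in time to t
        while j + 1 < len(eye) and abs(eye[j + 1][0] - t) < abs(eye[j][0] - t):
            j += 1
        if abs(eye[j][0] - t) <= tol:
            matched.append((t, m, eye[j][1]))  # store the matched pair
    return matched
```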
Step 5-1. Obtain from the eye tracker the eight categories of information: eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, and store these eight categories of information. Draw the eye movement trajectory as an eye movement trajectory graph, and draw the fixation time and fixation count as an eye movement heat map with GazeLab. Analyzing the pupil diameter, the eye movement trajectory graph and the heat map reveals changes in the subject's attention and emotion;
Step 5-2. Retrieve the matched facial expression feature vector and eye movement feature vector, and fuse and classify them by SVR to obtain the classified emotional feature vector.
Step 6. Obtain the subject's final emotional state by comparison with the information in the database: compare the classified emotional feature vector with the emotional information in the pre-built emotional information database, find the best-matching information by the least squares method, and match that emotional information with the classified emotional feature vector to obtain the emotional state Z2. Obtain the optimal emotional state fusion ratio by training on a large amount of data, and fuse Z2 with the preliminary emotional state Z1 according to the fusion ratio to obtain the subject's final emotional state Z.
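The least squares comparison of step 6 can be sketched as a minimum squared error search over the database entries, as below; the database layout and the encoding of the emotion value are assumptions (the patent stores reference emotional feature vectors for the positive, neutral and negative states).

```python
import numpy as np

def match_database(feature, database):
    """Find the database entry with the least squared error.

    feature : classified emotional feature vector from the SVR step
    database: list of (emotion_value, reference_vector) entries,
              e.g. emotion_value in {1, 0, -1}
    """
    errors = [np.sum((np.asarray(feature) - np.asarray(ref)) ** 2)
              for _, ref in database]
    best = int(np.argmin(errors))   # entry with the smallest squared error
    return database[best][0]        # emotional state Z2
```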
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A dual-mode emotion recognition method based on facial expression and eyeball movement, characterized by comprising the following steps:
S1. within a lighting range suitable for facial expression recognition, acquiring frontal face images and eye movement information of the subject at different times;
S2. extracting a facial expression feature vector from the facial expression image, and extracting an eye movement feature vector from the eye movement information;
S3. performing qualitative analysis on the extracted facial expression feature vector, obtaining a preliminary emotional state Z1, and assigning a value to Z1 according to the preliminary emotional state;
S4. matching the facial expression feature vector and the eye movement feature vector by time and storing them;
S5. retrieving the matched facial expression feature vector and eye movement feature vector, and fusing and classifying them by SVR;
S6. comparing the classified emotional feature vector with the emotional information in a pre-built emotional information database, the emotional information database storing multiple groups of standard reference facial expression feature vectors of the left eye/right eye, left eyebrow/right eyebrow and mouth for the three emotional states positive, neutral and negative; matching the closest emotional information with the classified emotional feature vector, thereby obtaining an emotional state Z2; and fusing Z2 with the preliminary emotional state Z1 according to a fusion ratio to obtain the subject's final emotional state Z.
2. The dual-mode emotion recognition method based on facial expression and eyeball movement according to claim 1, characterized in that the concrete steps of extracting the facial expression feature vector from the facial expression image in step S2 are:
S2a.1. reading the facial expression image, estimating the approximate positions of the facial features with the crown of the head as the reference point, and uniformly placing mark points on the contour of each facial feature;
S2a.2. dividing the face into two symmetrical parts by the axis fitted through three points: the point between the eyebrows, the midpoint of the line connecting the two pupils, and the center of the mouth; under the conditions of no scaling, no translation and no rotation, adjusting the image so that mark points symmetrical about the axis are aligned to the same horizontal line, and building the facial expression shape model;
S2a.3. dividing the facial expression shape model into different regions according to left eye/left eyebrow, right eye/right eyebrow and mouth, and defining these regions as feature candidate regions;
S2a.4. for each feature candidate region, extracting the feature vector by the difference image method: performing difference operations between all image sequences processed in the previous step and the neutral-expression image in the database, and extracting the facial expression feature vector from the image sequence with the largest mean difference value in each feature candidate region.
3. The dual-mode emotion recognition method based on facial expression and eyeball movement according to claim 1, characterized in that the concrete steps of extracting the eye movement feature vector from the eye movement information in step S2 are:
S2b.1. classifying the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, obtaining eight classes of eye movement feature vectors;
S2b.2. drawing the eye movement information as an eye movement trajectory graph according to the eye movement trajectory, and storing the trajectory graph;
S2b.3. obtaining, from the fixation time and fixation count produced by the classification, the eye movement heat map with GazeLab, and storing the heat map.
4. The dual-mode emotion recognition method based on facial expression and eyeball movement according to claim 1, characterized in that the concrete steps of step S3 are:
S3.1. collecting in advance multiple groups of standard reference facial expression features, each group of reference facial expression features comprising five kinds: left eye/right eye, left eyebrow/right eyebrow and mouth;
S3.2. extracting the facial expression feature vector obtained from the facial expression image in step S2 according to the same five kinds as in the previous step;
S3.3. taking the five reference facial expression features of step S3.1 as the benchmark, performing qualitative analysis on the facial expression features of step S3.2, and determining the preliminary emotional state by voting; if a conflicting vote of 2:2:1 occurs, the feature that cast the single vote re-votes after qualitative analysis, choosing whichever of the other two emotional states it is closest to, thereby obtaining the preliminary emotional state Z1, and assigning a value to Z1 according to the preliminary emotional state: if the preliminary emotional state is positive, Z1 = 1; if neutral, Z1 = 0; if negative, Z1 = -1.
5. The dual-mode emotion recognition method based on facial expression and eyeball movement according to claim 1, characterized in that the concrete steps of step S5 are:
let the facial expression feature vector and the eye movement feature vector be $m$ and $n$ respectively; within one sampling period, the set of facial expression feature vectors is $X=[m_1,m_2,\ldots,m_{i-1},m_i]$ and the set of eye movement feature vectors is $Y=[n_1,n_2,\ldots,n_{i-1},n_i]$, where $i$ is the time series index, $i\ge 1$; taking $X$ and $Y$ as input, the output $f(x)$ is obtained by SVR; the input is described as the sample set $\{(x_j,y_j)\}$, $j\ge 1$, and the linear relationship is described as $f(x)=\langle v,x\rangle+b$, $v\in R^k$, $b\in R$, $k\ge 1$, where $v$ and $b$ are the parameters of the hyperplane and $\langle\cdot,\cdot\rangle$ denotes the inner product; the minimal error $\min\frac{1}{2}\|v\|^2$ is sought, subject to the constraint that the deviation between the actual and the target values be at most $\varepsilon$: $y_j-\langle v,x_j\rangle-b\le\varepsilon$ and $\langle v,x_j\rangle+b-y_j\le\varepsilon$, $j\ge 1$, $\varepsilon\ge 0$; to eliminate the influence of abnormal samples, the slack variables $\xi_j,\xi_j^*$ and a parameter $C$ are introduced, so that the minimal error becomes $\min\frac{1}{2}\|v\|^2+C\sum_j(\xi_j+\xi_j^*)$, subject to $y_j-\langle v,x_j\rangle-b\le\varepsilon+\xi_j^*$, $\langle v,x_j\rangle+b-y_j\le\varepsilon+\xi_j$, $j\ge 1$, $\varepsilon,\xi_j,\xi_j^*\ge 0$, and a Lagrangian function is introduced; the regression function of $f(x)$ is finally obtained by calculation: $f(x)=\sum_j(\alpha_j-\alpha_j^*)\langle x_j,x\rangle+b$, where $\alpha_j,\alpha_j^*$ are the Lagrange multipliers, $0\le\alpha_j,\alpha_j^*\le C$.
6. The dual-mode emotion recognition method based on facial expression and eyeball movement according to claim 1, characterized in that the fusion ratio in step S6 is w1:w2, where w1 and w2 are variables satisfying 0 ≤ w1, w2 ≤ 1, Z = w1Z1 + w2Z2 and w1 + w2 = 1.
7. A dual-mode emotion recognition system based on facial expression and eyeball movement, characterized by comprising the following modules:
an acquisition module, configured to acquire frontal face images and eye movement information of the subject at different times within a lighting range suitable for facial expression recognition;
a feature vector extraction module, configured to extract the facial expression feature vector from the facial expression image and the eye movement feature vector from the eye movement information;
an emotion qualitative analysis module, configured to perform qualitative analysis on the extracted facial expression feature vector and obtain the preliminary emotional state Z1;
a match-by-time storage module, configured to match the facial expression feature vector and the eye movement feature vector by time and store them;
a fusion and classification module, configured to retrieve the matched facial expression feature vector and eye movement feature vector, fuse and classify them by SVR, and obtain the classified emotional feature vector;
a final emotional state confirmation module, configured to compare the classified emotional feature vector with the emotional information in the pre-built emotional information database, match the closest emotional information with the classified emotional feature vector to obtain the emotional state Z2, and fuse Z2 with the preliminary emotional state Z1 according to the fusion ratio to obtain the subject's final emotional state Z.
8. The dual-mode emotion recognition system based on facial expression and eyeball movement according to claim 7, characterized in that the feature vector extraction module comprises a facial expression feature vector extraction module and an eye movement feature vector extraction module, the facial expression feature vector extraction module comprising the following units:
a mark point placement unit, configured to read the facial expression image, estimate the approximate positions of the facial features with the crown of the head as the reference point, and uniformly place mark points on the contour of each facial feature;
a mark point alignment unit, configured to divide the face into two symmetrical parts by the axis fitted through the point between the eyebrows, the midpoint of the line connecting the two pupils and the center of the mouth, and, under the conditions of no scaling, no translation and no rotation, adjust the image so that mark points symmetrical about the axis are aligned to the same horizontal line, and build the facial expression shape model;
a feature candidate region division unit, configured to divide the facial expression shape model into different regions according to left eye/left eyebrow, right eye/right eyebrow and mouth, and define these regions as feature candidate regions;
an information extraction unit, configured to extract, for each feature candidate region, the feature vector by the difference image method: performing difference operations between all image sequences processed in the previous step and the neutral-expression image in the database, and extracting the facial expression feature vector from the image sequence with the largest mean difference value in each feature candidate region.
9. The dual-mode emotion recognition system based on facial expression and eyeball movement according to claim 8, characterized in that the eye movement feature vector extraction module specifically comprises the following units:
an eye movement information classification unit, configured to classify the collected eye movement information according to eye movement trajectory, eye movement time, eye movement direction, eye movement distance, fixation time, fixation count, pupil diameter and blink count, obtaining eight classes of eye movement feature vectors;
an eye movement trajectory graph generation unit, configured to draw the eye movement information as an eye movement trajectory graph according to the eye movement trajectory, and store it;
an eye movement heat map generation unit, configured to obtain, from the fixation time and fixation count produced by the classification, the eye movement heat map with GazeLab, and store it.
10. The dual-mode emotion recognition system based on facial expression and eyeball movement according to claim 7, characterized in that the fusion and classification module is configured to perform the following by SVR: let the facial expression feature vector and the eye movement feature vector be $m$ and $n$ respectively; within one sampling period, the set of facial expression feature vectors is $X=[m_1,m_2,\ldots,m_{i-1},m_i]$ and the set of eye movement feature vectors is $Y=[n_1,n_2,\ldots,n_{i-1},n_i]$, where $i$ is the time series index, $i\ge 1$; taking $X$ and $Y$ as input, the output $f(x)$ is obtained by SVR; the input is described as the sample set $\{(x_j,y_j)\}$, $j\ge 1$, and the linear relationship is described as $f(x)=\langle v,x\rangle+b$, $v\in R^k$, $b\in R$, $k\ge 1$, where $v$ and $b$ are the parameters of the hyperplane and $\langle\cdot,\cdot\rangle$ denotes the inner product; the minimal error $\min\frac{1}{2}\|v\|^2$ is sought, subject to the constraint that the deviation between the actual and the target values be at most $\varepsilon$: $y_j-\langle v,x_j\rangle-b\le\varepsilon$ and $\langle v,x_j\rangle+b-y_j\le\varepsilon$, $j\ge 1$, $\varepsilon\ge 0$; to eliminate the influence of abnormal samples, the slack variables $\xi_j,\xi_j^*$ and a parameter $C$ are introduced, so that the minimal error becomes $\min\frac{1}{2}\|v\|^2+C\sum_j(\xi_j+\xi_j^*)$, subject to $y_j-\langle v,x_j\rangle-b\le\varepsilon+\xi_j^*$, $\langle v,x_j\rangle+b-y_j\le\varepsilon+\xi_j$, $j\ge 1$, $\varepsilon,\xi_j,\xi_j^*\ge 0$, and a Lagrangian function is introduced; the regression function of $f(x)$ is finally obtained by calculation: $f(x)=\sum_j(\alpha_j-\alpha_j^*)\langle x_j,x\rangle+b$, where $\alpha_j,\alpha_j^*$ are the Lagrange multipliers, $0\le\alpha_j,\alpha_j^*\le C$.
CN201610173439.3A 2016-03-24 2016-03-24 Dual-mode emotion recognition method and system based on facial expression and eyeball movement Active CN105868694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610173439.3A CN105868694B (en) 2016-03-24 2016-03-24 Dual-mode emotion recognition method and system based on facial expression and eyeball movement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610173439.3A CN105868694B (en) 2016-03-24 2016-03-24 Dual-mode emotion recognition method and system based on facial expression and eyeball movement

Publications (2)

Publication Number Publication Date
CN105868694A true CN105868694A (en) 2016-08-17
CN105868694B CN105868694B (en) 2019-03-08

Family

ID=56625895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610173439.3A Active CN105868694B (en) Dual-mode emotion recognition method and system based on facial expression and eyeball movement

Country Status (1)

Country Link
CN (1) CN105868694B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180236A (en) * 2017-06-02 2017-09-19 北京工业大学 A kind of multi-modal emotion identification method based on class brain model
CN107194151A (en) * 2017-04-20 2017-09-22 华为技术有限公司 Determine the method and artificial intelligence equipment of emotion threshold value
CN107239738A (en) * 2017-05-05 2017-10-10 南京邮电大学 It is a kind of to merge eye movement technique and the sentiment analysis method of heart rate detection technology
CN107358169A (en) * 2017-06-21 2017-11-17 厦门中控智慧信息技术有限公司 A kind of facial expression recognizing method and expression recognition device
CN108416331A (en) * 2018-03-30 2018-08-17 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device that face symmetrically identifies
CN108537159A (en) * 2018-04-03 2018-09-14 重庆房地产职业学院 Data analysis system and method for the people to artistic work degree of recognition in public space
CN109108960A (en) * 2017-06-23 2019-01-01 卡西欧计算机株式会社 Robot, the control method of robot and storage medium
CN109199412A (en) * 2018-09-28 2019-01-15 南京工程学院 Abnormal emotion recognition methods based on eye movement data analysis
CN109215763A (en) * 2018-10-26 2019-01-15 广州华见智能科技有限公司 A kind of emotional health monitoring method and system based on facial image
CN109711291A (en) * 2018-12-13 2019-05-03 合肥工业大学 Personality prediction technique based on eye gaze thermodynamic chart
WO2019184620A1 (en) * 2018-03-27 2019-10-03 北京七鑫易维信息技术有限公司 Method, apparatus, and system for outputting information
CN110728194A (en) * 2019-09-16 2020-01-24 中国平安人寿保险股份有限公司 Intelligent training method and device based on micro-expression and action recognition and storage medium
CN111339878A (en) * 2020-02-19 2020-06-26 华南理工大学 Eye movement data-based correction type real-time emotion recognition method and system
CN113158854A (en) * 2021-04-08 2021-07-23 东北大学秦皇岛分校 Automatic monitoring train safety operation method based on multi-mode information fusion
CN113197579A (en) * 2021-06-07 2021-08-03 山东大学 Intelligent psychological assessment method and system based on multi-mode information fusion
WO2021217973A1 (en) * 2020-04-28 2021-11-04 平安科技(深圳)有限公司 Emotion information recognition method and apparatus, and storage medium and computer device
CN113837153A (en) * 2021-11-25 2021-12-24 之江实验室 Real-time emotion recognition method and system integrating pupil data and facial expressions
CN113869229A (en) * 2021-09-29 2021-12-31 电子科技大学 Deep learning expression recognition method based on prior attention mechanism guidance
WO2023116145A1 (en) * 2021-12-21 2023-06-29 北京字跳网络技术有限公司 Expression model determination method and apparatus, and device and computer-readable storage medium
CN116682168A (en) * 2023-08-04 2023-09-01 阳光学院 Multi-modal expression recognition method, medium and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101561868A (en) * 2009-05-19 2009-10-21 华中科技大学 Human motion emotion recognition method based on Gaussian features
US20110091115A1 (en) * 2009-10-19 2011-04-21 Canon Kabushiki Kaisha Feature point positioning apparatus, image recognition apparatus, processing method thereof and computer-readable storage medium
CN102819744A (en) * 2012-06-29 2012-12-12 北京理工大学 Emotion recognition method fusing information from two channels
CN102968643A (en) * 2012-11-16 2013-03-13 华中科技大学 Multi-modal emotion recognition method based on Lie group theory
CN103400145A (en) * 2013-07-19 2013-11-20 北京理工大学 Audio-visual fusion emotion recognition method based on hint neural networks
KR101451854B1 (en) * 2013-10-10 2014-10-16 재단법인대구경북과학기술원 Apparatus for recognizing facial expression and method thereof
CN104463100A (en) * 2014-11-07 2015-03-25 重庆邮电大学 Intelligent wheelchair human-machine interaction system and method based on facial expression recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹田熠: "Research on Emotion Recognition Based on Multimodal Fusion", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194151A (en) * 2017-04-20 2017-09-22 华为技术有限公司 Method for determining an emotion threshold and artificial intelligence device
CN107239738A (en) * 2017-05-05 2017-10-10 南京邮电大学 Sentiment analysis method fusing eye movement tracking and heart rate detection
CN107180236B (en) * 2017-06-02 2020-02-11 北京工业大学 Multi-modal emotion recognition method based on a brain-like model
CN107180236A (en) * 2017-06-02 2017-09-19 北京工业大学 Multi-modal emotion recognition method based on a brain-like model
CN107358169A (en) * 2017-06-21 2017-11-17 厦门中控智慧信息技术有限公司 Facial expression recognition method and facial expression recognition device
CN109108960A (en) * 2017-06-23 2019-01-01 卡西欧计算机株式会社 Robot, robot control method, and storage medium
WO2019184620A1 (en) * 2018-03-27 2019-10-03 北京七鑫易维信息技术有限公司 Method, apparatus, and system for outputting information
CN108416331B (en) * 2018-03-30 2019-08-09 百度在线网络技术(北京)有限公司 Facial symmetry recognition method, apparatus, storage medium, and terminal device
CN108416331A (en) * 2018-03-30 2018-08-17 百度在线网络技术(北京)有限公司 Facial symmetry recognition method, apparatus, storage medium, and terminal device
CN108537159A (en) * 2018-04-03 2018-09-14 重庆房地产职业学院 Data analysis system and method for people's recognition of artworks in public spaces
CN109199412A (en) * 2018-09-28 2019-01-15 南京工程学院 Abnormal emotion recognition method based on eye movement data analysis
CN109215763A (en) * 2018-10-26 2019-01-15 广州华见智能科技有限公司 Emotional health monitoring method and system based on facial images
CN109711291A (en) * 2018-12-13 2019-05-03 合肥工业大学 Personality prediction method based on eye-gaze heat maps
CN110728194A (en) * 2019-09-16 2020-01-24 中国平安人寿保险股份有限公司 Intelligent training method and device based on micro-expression and action recognition, and storage medium
CN111339878A (en) * 2020-02-19 2020-06-26 华南理工大学 Corrective real-time emotion recognition method and system based on eye movement data
CN111339878B (en) * 2020-02-19 2023-06-20 华南理工大学 Corrective real-time emotion recognition method and system based on eye movement data
WO2021217973A1 (en) * 2020-04-28 2021-11-04 平安科技(深圳)有限公司 Emotion information recognition method and apparatus, storage medium, and computer device
CN113158854A (en) * 2021-04-08 2021-07-23 东北大学秦皇岛分校 Method for automatically monitoring safe train operation based on multi-modal information fusion
CN113158854B (en) * 2021-04-08 2022-03-22 东北大学秦皇岛分校 Method for automatically monitoring safe train operation based on multi-modal information fusion
CN113197579A (en) * 2021-06-07 2021-08-03 山东大学 Intelligent psychological assessment method and system based on multi-modal information fusion
CN113869229A (en) * 2021-09-29 2021-12-31 电子科技大学 Deep learning expression recognition method guided by a prior attention mechanism
CN113869229B (en) * 2021-09-29 2023-05-09 电子科技大学 Deep learning expression recognition method guided by a prior attention mechanism
CN113837153A (en) * 2021-11-25 2021-12-24 之江实验室 Real-time emotion recognition method and system fusing pupil data and facial expressions
WO2023116145A1 (en) * 2021-12-21 2023-06-29 北京字跳网络技术有限公司 Expression model determination method and apparatus, device, and computer-readable storage medium
CN116682168A (en) * 2023-08-04 2023-09-01 阳光学院 Multi-modal expression recognition method, medium, and system
CN116682168B (en) * 2023-08-04 2023-10-17 阳光学院 Multi-modal expression recognition method, medium, and system

Also Published As

Publication number Publication date
CN105868694B (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN105868694A (en) Dual-mode emotion identification method and system based on facial expression and eyeball movement
CN110837784B (en) Examination room peeping and cheating detection system based on head features
CN104517104B (en) Face recognition method and system for surveillance scenes
CN101305913B (en) Face beauty assessment method based on video
CN105955465A (en) Desktop portable gaze tracking method and apparatus
CN106296742B (en) Online target tracking method combining feature point matching
CN106096662B (en) Human motion state recognition based on acceleration sensors
CN105740779B (en) Method and device for face liveness detection
CN107590452A (en) Identity recognition method and device based on fusion of gait and face
CN107341688A (en) Customer experience acquisition method and system
CN101526997A (en) Embedded infrared face image recognition method and device
CN105999670A (en) Tai Chi movement assessment and guidance system based on Kinect and its guidance method
CN109697430A (en) Method for detecting safety helmet wearing in work areas based on image recognition
CN106846734A (en) Fatigue driving detection device and method
CN104794451B (en) Pedestrian comparison method based on partitioned fitting structure
CN1687957A (en) Facial feature point localization method combining local search and active appearance models
CN106203375A (en) Pupil localization method based on face and eye detection in facial images
CN104021382A (en) Eye image collection method and system
KR20170082074A (en) Face recognition apparatus and method using physiognomic information
CN105022999A (en) Person-code accompanying real-time acquisition system
CN109325408A (en) Gesture determination method and storage medium
CN106295532A (en) Human action recognition method in video images
CN102831408A (en) Face recognition method
CN109544523A (en) Face image quality evaluation method and device based on multi-attribute face alignment
CN112801859A (en) Makeup mirror system with makeup guidance function

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant