CN109800734A - Human facial expression recognition method and device - Google Patents

Human facial expression recognition method and device

Info

Publication number
CN109800734A
CN109800734A
Authority
CN
China
Prior art keywords
data
point
facial expression
facial
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910092813.0A
Other languages
Chinese (zh)
Inventor
赵起超
李召
杨苒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jinfa Technology Co Ltd
Original Assignee
Beijing Jinfa Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jinfa Technology Co Ltd
Priority to CN201910092813.0A
Publication of CN109800734A
Legal status: Pending

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention provides a facial expression recognition method and device. The method comprises: collecting, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values; collecting, simultaneously with the facial feature point data, physiological sensor data capable of reflecting an emotional state; and recognizing a facial expression using facial expression data, which includes the facial feature point data, together with the corresponding physiological sensor data. The above scheme can improve the accuracy of facial expression recognition.

Description

Human facial expression recognition method and device
Technical field
The present invention relates to the technical field of expression recognition, and more particularly to a facial expression recognition method and device.
Background technique
With the rapid development of computing and intelligent recognition, technologies that identify individuals by biological characteristics of the face have entered everyday life. Face recognition is regarded as one of the most user-friendly biometric technologies; it combines multiple fields such as image processing, computer graphics, pattern recognition, visualization, and human physiology.
In its early stage, face recognition technology could only describe a face through local features. Because the human face lacks sharp edges and is easily affected by expression, recognition was limited to frontal faces. As the technology has improved, face recognition has to some extent adapted to changes in facial pose and expression, meeting the practical demands placed on it. However, although the prior art can adapt to variations in pose and expression, the human face conveys a rich emotional state, and humans innately express mood and emotion through facial expressions. As a result, facial expression recognition still suffers from limited accuracy.
Summary of the invention
In view of this, the present invention provides a facial expression recognition method and device to improve the accuracy of facial expression recognition.
To achieve the above object, the present invention adopts the following schemes:
In an embodiment of the invention, a facial expression recognition method comprises:
collecting, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values;
collecting, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state;
recognizing a facial expression using facial expression data and the corresponding physiological sensor data, the facial expression data including the facial feature point data.
In an embodiment of the invention, a facial expression recognition device comprises:
a facial feature point data collection unit, configured to collect, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values;
a physiological sensor data unit, configured to collect, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state;
a facial expression data recognition unit, configured to recognize a facial expression using facial expression data and the corresponding physiological sensor data, the facial expression data including the facial feature point data.
In an embodiment of the invention, an electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the steps of the method of the above embodiments.
In an embodiment of the invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method of the above embodiments.
The facial expression recognition method, facial expression recognition device, electronic device, and computer-readable storage medium of the invention collect facial feature point data and, at the same time, physiological sensor data capable of reflecting an emotional state, and recognize the facial expression by jointly analyzing the two kinds of data. A more accurate recognition result can thus be obtained, so that the facial expression, and the mood and emotion it expresses, are identified more accurately.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art may obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is a flow diagram of the facial expression recognition method of an embodiment of the invention;
Fig. 2 is a flow diagram of a method of collecting facial feature point data expressed as coordinate values in an embodiment of the invention;
Fig. 3 is a flow diagram of the facial expression recognition method of another embodiment of the invention;
Fig. 4 is a flow diagram of a method of recognizing a facial expression using facial expression data and physiological sensor data in an embodiment of the invention;
Fig. 5 is a flow diagram of the facial expression recognition method of an embodiment of the invention;
Fig. 6 is a schematic diagram of the coordinate axes used for head position in an embodiment of the invention;
Fig. 7 is a schematic diagram of a scatter plot in an embodiment of the invention;
Fig. 8 is a structural diagram of the facial expression recognition device of an embodiment of the invention.
Specific embodiments
To make the objects, technical solutions, and advantages of the embodiments of the invention clearer, the embodiments of the invention are described in further detail below with reference to the drawings. The illustrative embodiments of the invention and their descriptions are used to explain the invention and are not intended as limitations of the invention.
Fig. 1 is a flow diagram of the facial expression recognition method of an embodiment of the invention. As shown in Fig. 1, the facial expression recognition method of some embodiments comprises:
Step S110: collecting, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values;
Step S120: collecting, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state;
Step S130: recognizing a facial expression using facial expression data and the corresponding physiological sensor data, the facial expression data including the facial feature point data.
In step S110, the recognition time period may cover one or more occurrences of a facial expression; timing may start when a facial expression begins and stop when it ends. The facial feature point data may be collected by shooting with a camera at a certain frame rate, and the camera may operate in visible or invisible light. One or more points on the face of the recognized subject that reflect emotional change may be selected as facial feature points, for example points near the eyebrows or near the corners of the mouth. During collection, the three-dimensional coordinate values of the facial feature points may be obtained to represent their changes.
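The patent does not give an implementation of this acquisition loop; the following is a minimal Python sketch, assuming OpenCV for frame capture and a hypothetical `detect_landmarks_3d` function standing in for whatever 3D facial landmark model is actually used:

```python
# Sketch of step S110: capture frames at a set frame rate and record
# timestamped 3D facial feature points. detect_landmarks_3d is a
# hypothetical stand-in, not an API defined by the patent.
import time
import cv2

def detect_landmarks_3d(frame):
    """Hypothetical: a real system would run a 3D landmark model here
    and return {point_name: (x, y, z)}."""
    raise NotImplementedError("plug in a 3D facial landmark detector")

def collect_feature_points(duration_s=10.0, frame_rate=10):
    """Collect timestamped 3D facial feature points for duration_s seconds."""
    cap = cv2.VideoCapture(0)        # default camera
    period = 1.0 / frame_rate        # >= 10 frames/second per the embodiment
    samples, t0 = [], time.time()
    while time.time() - t0 < duration_s:
        ok, frame = cap.read()
        if not ok:
            break
        samples.append((time.time() - t0, detect_landmarks_3d(frame)))
        time.sleep(period)           # crude pacing at the set frame rate
    cap.release()
    return samples
```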
Before data collection, the user may perform parameter setting in a human-computer interaction interface to record information about the subject, for example entering the project name, subject number, subject name, and subject gender. Different judgment criteria may be used in subsequent recognition for different types of subjects (for example by gender), so that recognition can be refined per subject.
In step S120, the physiological sensor data capable of reflecting an emotional state may include data such as pulse and respiration. Pulse data may be collected by an ECG sensor (for example, a 3-lead ECG sensor) or a pulse sensor (for example, a photoplethysmographic pulse sensor). A 3-lead ECG sensor uses cross-linked electrodes attached around the heart; a photoplethysmographic pulse sensor collects pulse signals at positions such as the fingertip, earlobe, or wrist. The facial feature point data and the physiological sensor data can be aligned on the same time axis to facilitate subsequent analysis.
In step S130, the facial expression data may include, in addition to the facial feature point data, other data such as body data, for example changes of head position or hand movements; the facial feature point data can serve as the main component of the facial expression data. For a given moment, the facial feature point data within the facial expression data corresponds to the physiological sensor data. The recognition result may correspond to a single emotional state or to the probabilities of several emotional states, where the emotional states may include tension and relaxation or, divided more finely, joy, sadness, anger, surprise, fear, disgust, contempt, calm, and so on.
The facial feature point data and the physiological sensor data of the same moment can be jointly analyzed to improve the authenticity of facial expression recognition. For example, if at some moment the mouth-corner feature points in the facial feature point data of the recognized subject show a raised feature, the judgment based on the facial feature point data alone would likely be joy; but if the pulse signal of the subject at that moment does not show the signal characteristics of joy, it can be determined that the subject is putting on a false smile, i.e., no truly joyful emotional state has arisen. Combining the physiological sensor data therefore yields a more realistic facial expression recognition result.
In the present embodiment, by simultaneously collecting facial feature point data and physiological sensor data capable of reflecting an emotional state, and recognizing the facial expression through joint analysis of the two kinds of data, a more accurate recognition result can be obtained.
Fig. 2 is a flow diagram of a method of collecting facial feature point data expressed as coordinate values in an embodiment of the invention. As shown in Fig. 2, the above step S110, i.e., collecting, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values, may include:
Step S111: collecting facial images at the set frame rate within the recognition time period using a depth camera;
Step S112: identifying the three-dimensional coordinate values of the facial feature points in the facial images to obtain the facial feature point data.
In step S111, the depth camera can collect three-dimensional facial images and may be an RGB camera, for example a monocular or binocular RGB camera. In some embodiments, the resolution of the depth camera may be greater than or equal to 320 × 240, which suffices to distinguish multiple facial feature points from one another; the set frame rate may be greater than or equal to 10 frames/second, which suffices to capture subtle changes of facial expression. For the three-dimensional coordinates in step S112, the X axis may be the horizontal axis perpendicular to the direction of direct gaze, the Y axis the vertical axis pointing straight up and perpendicular to the gaze direction, and the Z axis the axis established along the gaze direction. In the present embodiment, by collecting the three-dimensional coordinate values of the facial feature points with a depth camera, not only the changes of the facial expression in the two-dimensional plane but also its changes along the third dimension can be obtained, making the collected facial feature point data more accurate.
In some embodiments, the facial feature point data may include the three-dimensional coordinate values of one or more of the following facial feature points: right front jaw, right mandibular angle, chin tip, left mandibular angle, left front jaw, right eyebrow outer point, right eyebrow center point, right eyebrow inner point, left eyebrow inner point, left eyebrow center point, left eyebrow outer point, nasion, nose tip, nose lower-right boundary point, nose bottom boundary point, nose lower-left boundary point, right eye outer corner, right eye inner corner, left eye inner corner, left eye outer corner, right lip corner, upper lip right peak point, upper lip center point, upper lip left peak point, left lip corner point, lower lip left edge point, lower lip center point, lower lip right edge point, lower lip top point, lower lip bottom point, right eye upper corner point, right eye lower corner point, left eye upper corner point, and left eye lower corner point. The specific positions of these feature points can be determined according to the circumstances. In the present embodiment, the data of 34 facial feature points can be collected simultaneously; the more feature points are found and considered, the finer the recognition of the facial expression.
Fig. 3 is a flow diagram of the facial expression recognition method of another embodiment of the invention. As shown in Fig. 3, the facial expression recognition method shown in Fig. 1 may further include:
Step S140: collecting, within the recognition time period, head position data expressed as angle values, and recording facial expression behavior data. The head position data includes the pitch angle, yaw angle, and roll angle expressed with respect to simulated coordinate axes on the head. The facial expression behavior data includes the number of times a facial expression occurs and the duration of each occurrence. The facial expression data further includes the head position data and the facial expression behavior data.
In step S140, the head position may be collected as the pitch, yaw, and roll angles about simulated coordinate axes on the head. The X axis of the head position may be the horizontal axis perpendicular to the gaze direction, the Y axis the vertical axis pointing straight up and perpendicular to the gaze direction, and the Z axis the axis established along the gaze direction; the pitch angle is then the angle of rotation of the head about the X axis, the yaw angle the angle of rotation about the Y axis, and the roll angle the angle of rotation about the Z axis. A head-mounted gyroscope, for example, may be used to collect the pitch, yaw, and roll angles of the head. Since head posture reflects the emotional state of the recognized subject to some extent (the head may droop when unhappy, tilt back when laughing, or turn to the side when surprised), collecting head position data for use in facial expression recognition makes the recognition result more accurate.
Whether a facial expression occurs can be determined by identifying whether the coordinates of the facial feature points have changed. For example, when the recognized subject is calm, the three-dimensional coordinates of each facial feature point can be taken as initial values. While facial images are continuously collected and recognized, once the change of the coordinate values of one or more feature points exceeds a certain range, a facial expression can be considered to have occurred; when the coordinate values of those feature points return to the initial values, the facial expression can be considered to have ended. Repeating this yields the number of occurrences of the facial expression, and the duration of each occurrence can be recorded at the same time. If a certain facial expression occurs more than a certain number of times, or markedly more often than other types of facial expression, the probability that this type of facial expression actually occurred can be considered high. Likewise, if the duration of a certain facial expression exceeds a certain length, or is longer than that of other types of facial expression, the probability that this type of facial expression actually occurred can be considered high.
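As a sketch of this event logic, assuming that "a facial expression has occurred" simply means that some feature point has moved more than a threshold from its calm-state baseline (the threshold value below is an assumption):

```python
# Sketch: count expression occurrences and durations from landmark motion.
import numpy as np

def expression_events(frames, baseline, threshold=0.5, frame_rate=10):
    """frames:    list of {point_name: (x, y, z)} per captured frame
    baseline:  {point_name: (x, y, z)} measured in the calm state
    threshold: displacement that counts as "expression present" (assumed)
    Returns the occurrence count and (start_time, duration) pairs in seconds."""
    events, active, start = [], False, 0
    for i, pts in enumerate(frames):
        moved = any(
            np.linalg.norm(np.subtract(pts[name], baseline[name])) > threshold
            for name in baseline if name in pts
        )
        if moved and not active:
            active, start = True, i                    # expression begins
        elif not moved and active:
            active = False                             # back to initial values
            events.append((start / frame_rate, (i - start) / frame_rate))
    if active:                                         # still active at the end
        events.append((start / frame_rate, (len(frames) - start) / frame_rate))
    return len(events), events
```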
In the present embodiment, by collecting head position data and recording facial expression behavior data and using them as part of the facial expression data in the joint analysis, the accuracy of recognition can be further improved.
In some embodiments, the above step S120, i.e., collecting, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state, may specifically include: collecting, while the facial feature point data is being collected, the pulse signal at one or more positions around the heart or at the fingertip, earlobe, or wrist, as the physiological sensor data. The pulse signal includes the original pulse analog signal; HRV analysis of the pulse signal yields the HRV data of the corresponding moment.
In the present embodiment, pulse/heart rate data from around the heart or at positions such as the fingertip, earlobe, or wrist reflect the emotional changes of the subject comparatively accurately, so the recognition result can be made more accurate.
Fig. 4 is a flow diagram of a method of recognizing a facial expression using facial expression data and physiological sensor data in an embodiment of the invention. As shown in Fig. 4, the above step S130, i.e., recognizing a facial expression according to preset rules using facial expression data and the corresponding physiological sensor data, may include:
Step S131: performing HRV analysis on the pulse signal in the physiological sensor data to obtain a power density spectrum, and determining an emotional state category according to the range in which the power density spectrum falls;
Step S132: mapping the facial expression data to a mood valence value to determine a fine-grained emotional state;
Step S133: in the case where the fine-grained emotional state belongs to the emotional state category, taking the fine-grained emotional state as the facial expression recognition result.
In step S131, HRV frequency-domain features can be extracted from the raw data and then mapped to determine the emotional state category. Performing HRV analysis on the pulse signal in the physiological sensor data may specifically include: obtaining RR-interval data from the original pulse signal, obtaining heart rate data from the RR intervals, and performing frequency-domain analysis on the heart rate data to obtain the power spectral density. The corresponding emotional state category is obtained according to the range in which the power spectral density falls. The correspondence between power spectral density ranges and emotional state categories can be derived by analyzing the correspondence between the pulse data of a large number of subjects and stimulus materials (from which the subjects' emotional states are known). For example, applying the Lomb-Scargle periodogram algorithm to a large quantity of collected subject data shows that, for unevenly sampled data, the HRV power spectral line lies within roughly the 0-0.4 Hz range when the subject is calm. The collected nonlinear data can then be analyzed. The power spectral ranges corresponding to a large number of samples under different emotional state categories can be determined by means of scatter plots, and the emotional state category of the data points in the physiological sensor data determined from those ranges: a horizontal-axis position of the scatter plot in 0.7-0.8 Hz indicates that the subject is in a relaxed state, and a horizontal-axis position in 0.55-0.65 Hz indicates that the subject is in a tense state. The emotional state categories may include tension and relaxation; in other embodiments, finer categories such as calm may also be included.
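The coarse classification rule itself is simple; a sketch, taking as input the horizontal-axis position (in Hz) at which the analyzed HRV activity concentrates, with the band edges stated in this embodiment:

```python
def coarse_emotion_category(dominant_freq_hz):
    """Map the scatter-plot horizontal-axis position to a coarse category."""
    if 0.70 <= dominant_freq_hz <= 0.80:
        return "relaxed"
    if 0.55 <= dominant_freq_hz <= 0.65:
        return "tense"
    return None  # outside both calibrated bands: no coarse decision
```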
In step S132, the various kinds of facial expression data (the coordinate values or coordinate change values of each facial feature point, the duration of the facial expression, and the number of occurrences of the facial expression) can each be divided according to corresponding preset ranges; the ranges into which the various facial expression data fall are then mapped to a mood valence value, from which the fine-grained emotional state of the subject is judged, for example joy, anger, or rage. The mood valence value may range from -1 to +1: if it lies between -1 and 0, the subject's emotional state can be considered negative; if it lies between 0 and +1, positive. Here, the mood valence value equals the value corresponding to joy minus the maximum of the values corresponding to the other expressions (surprise may be excluded); from this, the positivity or negativity of the mood at the current point in time can be judged. Different types of facial expression data correspond to different weights, so that the fine-grained emotional state integrates all of the facial expression data: different facial expression data, such as the data of different facial feature points (coordinate values or coordinate change amounts), each yield their own mood valence value, and these can be superposed via the set weights into a final mood valence value, from which the recognition result corresponding to the facial expression data is judged.
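A sketch of this valence computation; the per-feature scores and weights are assumptions, since the patent fixes only the -1 to +1 range and the "joy minus the maximum other non-surprise value" rule:

```python
def mood_valence(expression_scores, weights=None):
    """Combine per-feature valence estimates into one value in [-1, +1].

    expression_scores: {feature_name: {emotion: value}}, e.g. obtained
                       from the range mapping of each feature's data
    weights:           {feature_name: weight}; equal weights if omitted
    """
    weights = weights or {k: 1.0 / len(expression_scores)
                          for k in expression_scores}
    valence = 0.0
    for feature, scores in expression_scores.items():
        others = [v for emo, v in scores.items()
                  if emo not in ("joy", "surprise")]   # surprise excluded
        v = scores.get("joy", 0.0) - (max(others) if others else 0.0)
        valence += weights.get(feature, 0.0) * v       # weighted superposition
    return max(-1.0, min(1.0, valence))                # clamp to [-1, +1]
```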
In step S133, if the fine-grained emotional state belongs to the emotional state category, the recognition result of the facial feature points is consistent with that of the physiological sensor data, and the fine-grained emotional state can be taken as the facial expression recognition result; if they are inconsistent, the recognition result may be inaccurate, and further judgment is needed.
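The consistency check of step S133 then reduces to a comparison between the two channels; a sketch, in which the mapping from fine-grained emotion to coarse category is an assumption:

```python
# Assumed mapping from fine-grained emotions to the coarse HRV categories.
FINE_TO_COARSE = {
    "joy": "relaxed", "calm": "relaxed",
    "anger": "tense", "fear": "tense", "sadness": "tense",
}

def fuse(fine_emotion, coarse_category):
    """Return the fine-grained emotion only if both channels agree."""
    if FINE_TO_COARSE.get(fine_emotion) == coarse_category:
        return fine_emotion   # consistent: accept as the recognition result
    return None               # inconsistent: further judgment is required
```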
In the present embodiment, the pulse data is first used for the coarse classification into emotional state categories, and the facial feature point data is then used for fine-grained classification. In this way the facial feature point data and the pulse data are effectively combined in a joint analysis, so that the facial expression can be recognized more accurately.
In some embodiments, when the range in which the power density spectrum falls is 0.7 Hz to 0.8 Hz, the emotional state category is the relaxed state; when the range in which the power density spectrum falls is 0.55 Hz to 0.65 Hz, the emotional state category is the tense state. The power density spectrum ranges of the present embodiment, i.e., the horizontal-axis ranges of the scatter plot, were obtained by analyzing a large number of samples; they can be used to analyze the physiological sensor (pulse/heart rate) data accurately and obtain the facial expression recognition result.
To help those skilled in the art better understand the present invention, the implementation process of the invention is illustrated below with a specific embodiment.
Fig. 5 is a flow diagram of the facial expression recognition method of an embodiment of the invention. As shown in Fig. 5, the facial expression recognition method of some embodiments may include:
Step S101: performing parameter setting in the human-computer interaction interface;
Parameter setting in the human-computer interaction interface may include entering the project name, subject number, subject name, and subject gender, so as to record the specific information of the subject.
Step S102: collecting facial expression data;
The facial expression of the subject is collected by a camera. The camera used for collecting facial expression data may be an RGB camera; the minimum resolution may be set to 320 × 240 and the minimum frame rate to 10 frames/second. The facial expression data may include head coordinate position data, facial feature point data, and facial expression behavior data.
The coordinates of the head position, the coordinates of the 34 facial feature points, and the behavior data of the facial expression can be collected as the facial expression data. During the collection of facial expression data, it should be ensured that the whole facial expression is visible; if feature points of the facial expression are occluded, tracking of the facial expression is hindered, and only the collection results of the unoccluded feature points can be used. If the expression information cannot be identified while the facial expression is occluded, whether the subject's emotional state is tense or relaxed can be judged from the collected HRV (heart rate variability) information.
The coordinates of the head position include the pitch angle, yaw angle, and roll angle. Fig. 6 is a schematic diagram of the coordinate axes used for head position in an embodiment of the invention. As shown in Fig. 6, simulated coordinate axes X, Y, and Z are set up on the head. The X axis is the horizontal axis perpendicular to the direction of direct gaze; the Y axis is the vertical axis pointing straight up and perpendicular to the gaze direction; the Z axis is the axis established along the gaze direction. The pitch angle (pitch) is the angle of rotation of the head about the X axis; the yaw angle (yaw) is the angle of rotation about the Y axis; the roll angle (roll) is the angle of rotation about the Z axis.
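A head-mounted gyroscope would report these angles directly; for illustration, they can also be recovered from a head rotation matrix in this axis convention by a standard decomposition (the rotation order below is an assumption, not something the patent prescribes):

```python
import numpy as np

def head_angles(R):
    """Extract (pitch, yaw, roll) in degrees from a 3x3 rotation matrix,
    assuming X = horizontal, Y = up, Z = gaze direction and the
    composition R = Ry(yaw) @ Rx(pitch) @ Rz(roll)."""
    pitch = np.degrees(np.arcsin(-R[1, 2]))           # rotation about X
    yaw   = np.degrees(np.arctan2(R[0, 2], R[2, 2]))  # rotation about Y
    roll  = np.degrees(np.arctan2(R[1, 0], R[1, 1]))  # rotation about Z
    return pitch, yaw, roll
```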
The above 34 facial feature points may include: right front jaw, right mandibular angle, chin tip, left mandibular angle, left front jaw, right eyebrow outer point, right eyebrow center point, right eyebrow inner point, left eyebrow inner point, left eyebrow center point, left eyebrow outer point, nasion, nose tip, nose lower-right boundary point, nose bottom boundary point, nose lower-left boundary point, right eye outer corner, right eye inner corner, left eye inner corner, left eye outer corner, right lip corner, upper lip right peak point, upper lip center point, upper lip left peak point, left lip corner point, lower lip left edge point, lower lip center point, lower lip right edge point, lower lip top point, lower lip bottom point, right eye upper corner point, right eye lower corner point, left eye upper corner point, and left eye lower corner point.
The behavior data of the facial expression may include the start time, end time, and duration of the facial expression and the number of its occurrences.
Step S103: collecting physiological sensor data;
While the facial expression data is being collected, the physiological sensor data of the corresponding moments can be collected at the same time. The physiological sensor data may include heart-rate-related data and pulse rate data. A photoplethysmographic pulse sensor can be used to collect the subject's raw heart rate/pulse signal, for example from the pulse signal at positions such as the fingertip, earlobe, or wrist; alternatively, a 3-lead ECG sensor with cross-linked electrodes can be used to collect the subject's pulse data.
Step S104: processing the facial expression data and the physiological sensor data;
(1) Processing the physiological sensor data
The collected raw heart rate/pulse data can first be filtered and then subjected to HRV (heart rate variability) analysis; the HRV data can be obtained first and analyzed further, or HRV analysis can be performed directly. HRV analysis yields the power density spectrum; after the power density spectrum is divided, the divisions can be mapped to emotional states, giving the recognition result corresponding to the raw heart rate/pulse data.
Specifically, the SDNN (standard deviation of normal sinus RR intervals) can be computed by HRV time-domain analysis, or the power density spectrum of HRV can be computed by frequency-domain methods. The divisions of the SDNN or of the HRV power density spectrum are then mapped to emotional states. For example, the range of the values of the HRV power spectral line is first used to determine whether the subject is in a tense or a relaxed state, and the facial expression is classified into major categories accordingly. The emotional state can further be judged by computing the low-frequency power LF and the high-frequency power HF of the pulse signal, where the frequencies may be obtained by the Fast Fourier Transform (FFT), LF may be the power in the 0.04-0.14 Hz range, HF the power in the 0.14-0.4 Hz range, and the emotional state may be expressed as LF/(LF+HF). For example, when the line of the power spectrum lies in the 0.7 Hz to 0.8 Hz range, the corresponding emotional state may be that the subject is relaxed; when it lies in the 0.55 Hz to 0.65 Hz range, the corresponding emotional state may be that the subject is tense.
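A sketch of the LF/HF computation on an evenly resampled RR series; the patent mentions the FFT, and SciPy's Welch estimator (an FFT-based method) is used here for illustration, with the resampling rate as an assumption:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """Emotional-state index LF/(LF+HF) from an evenly resampled RR series.

    rr_ms: RR intervals in milliseconds, resampled at fs Hz beforehand
    fs:    resampling frequency in Hz (4 Hz is a common HRV choice)
    """
    x = np.asarray(rr_ms, dtype=float)
    f, pxx = welch(x - x.mean(), fs=fs, nperseg=min(256, len(x)))
    lf_band = (f >= 0.04) & (f < 0.14)    # low-frequency band
    hf_band = (f >= 0.14) & (f <= 0.40)   # high-frequency band
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / (lf + hf)  # larger values suggest sympathetic (tense) dominance
```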
For the raw heart rate signal collected by photoplethysmography, HRV frequency-domain features can be extracted based on the Lomb-Scargle periodogram. The Lomb-Scargle periodogram is a spectral analysis method that improves on the classical periodogram of spectral estimation, which cannot perform direct spectral analysis on unevenly sampled signals: the curve of the raw heart rate signal is fitted by the least squares method, and the root-mean-square error is then used to estimate the degree of periodic variation of the model. To extract the frequency-domain features of HRV by means of the Lomb-Scargle periodogram, consider an HRV time series $x(t_i)$, $i = 1, 2, 3, \ldots, N$; denote the frequencies $f_1, f_2, f_3, \ldots, f_i, \ldots, f_N$ and the corresponding angular frequency $\omega = 2\pi f_i$. The Lomb-Scargle periodogram can then be defined by the following formula:

$$P_X(\omega) = \frac{1}{2\sigma^2}\left\{\frac{\Bigl[\sum_{i=1}^{N}\bigl(x(t_i)-\bar{x}\bigr)\cos\omega(t_i-\tau)\Bigr]^2}{\sum_{i=1}^{N}\cos^2\omega(t_i-\tau)} + \frac{\Bigl[\sum_{i=1}^{N}\bigl(x(t_i)-\bar{x}\bigr)\sin\omega(t_i-\tau)\Bigr]^2}{\sum_{i=1}^{N}\sin^2\omega(t_i-\tau)}\right\}$$

where $\bar{x}$ and $\sigma^2$ are respectively the mean and variance of the HRV time series $x(t_i)$, and $\tau$ is the time offset, defined by $\tan(2\omega\tau) = \sum_{i}\sin 2\omega t_i \big/ \sum_{i}\cos 2\omega t_i$. $P_X(\omega)$ is the periodic-signal power at angular frequency $\omega$; the time offset $\tau$ (a constant) keeps the power spectrum $P_X(\omega)$ unchanged when the times $t_i$, $t_j$ are translated by a constant.
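SciPy ships an implementation of this periodogram; a sketch of its use on an unevenly sampled RR series, which is exactly the case Lomb-Scargle handles (the frequency grid below is an assumption):

```python
import numpy as np
from scipy.signal import lombscargle

def hrv_lomb_psd(beat_times_s, rr_ms, freqs_hz):
    """Lomb-Scargle power of an RR series at the given frequencies (Hz).

    beat_times_s: heartbeat times (need not be evenly spaced), seconds
    rr_ms:        RR interval at each beat time, milliseconds
    freqs_hz:     frequencies at which to evaluate the periodogram
    """
    x = np.asarray(rr_ms, dtype=float)
    x -= x.mean()                              # remove the mean, as in P_X(w)
    w = 2 * np.pi * np.asarray(freqs_hz)       # angular frequencies w = 2*pi*f
    return lombscargle(np.asarray(beat_times_s, dtype=float), x, w)

# e.g. evaluate over the 0-0.4 Hz band discussed in this embodiment:
# psd = hrv_lomb_psd(t, rr, np.linspace(0.01, 0.4, 200))
```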
Applying the Lomb-Scargle periodogram algorithm to a large quantity of collected subject data shows, after analysis, that for unevenly sampled data the HRV power spectral line lies within the 0-0.4 Hz range when the subject is calm. The collected nonlinear data can then be analyzed; the analysis of the nonlinear data can be carried out by means of the Poincaré scatter plot.
The Poincaré scatter plot marks the positions of all pairs of adjacent RR intervals in a rectangular coordinate system; it reflects the overall characteristics of HRV and, at the same time, the instantaneous changes of heart rate. After the RR intervals of a segment of ECG data have been recorded, the first point is fixed with the first RR interval as abscissa and the second RR interval as ordinate; the second point takes the second RR interval as abscissa and the third RR interval as ordinate; and so on, until all RR interval values have been drawn as a series of points forming a scatter plot that reflects the characteristics of HRV.
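A sketch of this construction: each point pairs an RR interval with its successor, exactly as described (plotting assumes matplotlib is available):

```python
import numpy as np
import matplotlib.pyplot as plt

def poincare_plot(rr_ms):
    """Scatter RR(n) against RR(n+1) for a sequence of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]                 # (RR_n, RR_{n+1}) pairs
    plt.scatter(x, y, s=8)
    plt.xlabel("RR(n) [ms]")
    plt.ylabel("RR(n+1) [ms]")
    plt.title("Poincare plot")
    plt.show()
    return x, y
```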
Next, scatter plots can be drawn from the large quantity of sample data collected in tests to determine the interval range values of the relaxed and tense states. Fig. 7 is a schematic diagram of a scatter plot in an embodiment of the invention. As shown in Fig. 7, large-scale data collection and scatter plot analysis yield the following: a horizontal-axis position of the scatter plot in 0.7-0.8 Hz corresponds to the subject being in a relaxed state, and a horizontal-axis position in 0.55-0.65 Hz corresponds to the subject being in a tense state.
(2) Processing the facial expression data
The facial expression data can be uploaded through the human-computer interaction interface for intelligent recognition, or transmitted directly for intelligent recognition. During the processing of the facial expression data, the coordinate values or coordinate change values of each facial feature point, the duration of the facial expression, and the number of its occurrences can each be mapped to the corresponding emotional state according to predetermined coordinate value ranges or coordinate change value divisions, duration ranges, and occurrence-count divisions. The fine-grained emotional state refines the tense or relaxed category and may specifically include joy (Joy), sadness (Sadness), anger (Anger), surprise (Surprise), fear (Fear), disgust (Disgust), contempt (Contempt), engagement (Engage), and so on. For example, the mapping may specify that a coordinate change value of the right lip corner among the facial feature points (for example, the norm of the change vector) in the range -0.1 mm to 0 mm corresponds to anger, and in the range 0 mm to 0.5 mm to joy; a duration in the range 0.5 s to 1 s corresponds to joy, and in the range 0 s to 0.5 s to contempt. Similar emotional states may correspond to the same range.
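A sketch of this range-based mapping, using the example ranges from the paragraph above; in practice the tables would hold one entry per feature point and per measure:

```python
# Example mapping tables from this embodiment: (low, high, emotion).
MOUTH_CORNER_MM = [(-0.1, 0.0, "anger"), (0.0, 0.5, "joy")]
DURATION_S      = [(0.0, 0.5, "contempt"), (0.5, 1.0, "joy")]

def map_range(value, table):
    """Return the emotion whose [low, high] range contains value."""
    for low, high, emotion in table:
        if low <= value <= high:
            return emotion
    return None   # value falls outside every calibrated range

# e.g. a 0.3 mm mouth-corner displacement lasting 0.7 s maps to "joy":
# map_range(0.3, MOUTH_CORNER_MM) -> "joy"; map_range(0.7, DURATION_S) -> "joy"
```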
The various facial expression data after division can be evaluated by the method of the ROC (Receiver Operating Characteristic) curve, where the ROC value ranges from 0 to 1 and is the more accurate the closer it is to 1; the accuracy of the measured data can be ensured in this way.
After the collected data have been processed via the human-computer interaction interface, the mood valence value can be obtained, the emotional state of the subject judged from the mood valence value, and the emotional state obtained by mapping, giving the intelligent recognition result. Each kind of facial expression data can be scored and evaluated by mood valence to judge the emotional state of the subject (obtaining the recognition result), for example negative or positive. The mood valence value may range from -1 to +1: if it lies between -1 and 0, the subject's emotional state can be considered negative; if it lies between 0 and +1, positive. The mood valence value equals the value corresponding to joy minus the maximum of the values corresponding to the other expressions (surprise may be excluded). The specific value intervals of the mood valence can be determined by experiments with stimulus materials: data are collected from a number of subjects in tests, the collected model data are recorded and classified according to the different stimuli, and the data of the corresponding moods are found and used as the benchmark for later tests. From this, the positivity or negativity of the mood at the current point in time can be judged.
Finally, the results of the facial expression data processing and of the physiological sensor data processing are combined: if the fine-grained emotional state belongs to the emotional state category, the recognition result of the facial feature points is consistent with that of the physiological sensor data, and the fine-grained emotional state can be taken as the facial expression recognition result; if they are inconsistent, the recognition result may be inaccurate, and further judgment is needed.
In the present embodiment, through parameter setting in the human-computer interaction interface, the collection and processing of facial expression data, and joint analysis combined with a physiological-index sensor (a pulse sensor), the facial expression and its emotional state can be recognized more accurately with the assistance of the pulse sensor. Through the accurate recognition of the facial expression, assisted by the pulse sensor, it can also be judged whether a person's emotional state is positive or negative. The facial expression and the mood and emotion it expresses can therefore be identified more accurately.
Based on the same inventive concept as the facial expression recognition method shown in Fig. 1, an embodiment of the invention also provides a facial expression recognition device, as described in the following examples. Since the principle by which the facial expression recognition device solves the problem is similar to that of the facial expression recognition method, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Fig. 8 is a structural diagram of the facial expression recognition device of an embodiment of the invention. As shown in Fig. 8, the facial expression recognition device of some embodiments may include:
a facial feature point data collection unit 210, configured to collect, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values;
a physiological sensor data unit 220, configured to collect, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state;
a facial expression data recognition unit 230, configured to recognize a facial expression using facial expression data and the corresponding physiological sensor data, the facial expression data including the facial feature point data.
In some embodiments, the facial feature point data collection unit 210 may include:
an image collection module, configured to collect facial images at the set frame rate within the recognition time period using a depth camera;
a feature point recognition module, configured to identify the three-dimensional coordinate values of the facial feature points in the facial images to obtain the facial feature point data.
In some embodiments, the facial feature point data comprises the three-dimensional coordinate values of the facial feature points: right front jaw, right mandibular angle, chin tip, left mandibular angle, left front jaw, right eyebrow outer point, right eyebrow center point, right eyebrow inner point, left eyebrow inner point, left eyebrow center point, left eyebrow outer point, nasion, nose tip, nose lower-right boundary point, nose bottom boundary point, nose lower-left boundary point, right eye outer corner, right eye inner corner, left eye inner corner, left eye outer corner, right lip corner, upper lip right peak point, upper lip center point, upper lip left peak point, left lip corner point, lower lip left edge point, lower lip center point, lower lip right edge point, lower lip top point, lower lip bottom point, right eye upper corner point, right eye lower corner point, left eye upper corner point, and left eye lower corner point.
In some embodiments, the facial expression recognition device shown in Fig. 8 may further include:
a head position and behavior data collection unit, configured to collect, within the recognition time period, head position data expressed as angle values and to record facial expression behavior data, the head position data including the pitch angle, yaw angle, and roll angle expressed with respect to simulated coordinate axes on the head, the facial expression behavior data including the number of times a facial expression occurs and the duration of each occurrence, and the facial expression data further including the head position data and the facial expression behavior data.
In some embodiments, the physiological sensor data unit 220 may include:
a pulse data collection module, configured to collect, while the facial feature point data is being collected, the pulse signal at one or more positions around the heart or at the fingertip, earlobe, or wrist, as the physiological sensor data.
In some embodiments, the facial expression data recognition unit 230 may include:
a first recognition module, configured to perform HRV analysis on the pulse signal in the physiological sensor data to obtain a power density spectrum and to determine an emotional state category according to the range in which the power density spectrum falls;
a second recognition module, configured to map the facial expression data to a mood valence value so as to determine a fine-grained emotional state;
a joint recognition module, configured to take, in the case where the fine-grained emotional state belongs to the emotional state category, the fine-grained emotional state as the facial expression recognition result.
In some embodiments, when the range in which the power density spectrum falls is 0.7 Hz to 0.8 Hz, the emotional state category is the relaxed state; when the range in which the power density spectrum falls is 0.55 Hz to 0.65 Hz, the emotional state category is the tense state.
An embodiment of the present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the steps of the method of the above embodiments. The electronic device may include a computer, a mobile phone, a tablet computer, special-purpose equipment, and the like.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method of the above embodiments are implemented.
In conclusion, the facial expression recognition method, facial expression recognition device, electronic device, and computer-readable storage medium of the embodiments of the present invention collect facial feature point data and, at the same time, physiological sensor data capable of reflecting an emotional state, and recognize the facial expression by jointly analyzing the two kinds of data. A more accurate recognition result can thus be obtained, so that the facial expression and the mood and emotion it expresses are identified more accurately.
In the description of this specification, reference to the terms "one embodiment", "a specific embodiment", "some embodiments", "for example", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The order of the steps involved in each embodiment is used to illustrate the implementation of the invention schematically; the order of steps is not limited and may be adjusted as needed.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device which realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A facial expression recognition method, characterized by comprising:
collecting, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values;
collecting, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state;
recognizing a facial expression using facial expression data and the corresponding physiological sensor data, the facial expression data including the facial feature point data.
2. The facial expression recognition method of claim 1, characterized in that collecting, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values comprises:
collecting facial images at the set frame rate within the recognition time period using a depth camera;
identifying the three-dimensional coordinate values of the facial feature points in the facial images to obtain the facial feature point data.
3. The facial expression recognition method of claim 1, characterized in that the facial feature point data comprises the three-dimensional coordinate values of the facial feature points: right front jaw, right mandibular angle, chin tip, left mandibular angle, left front jaw, right eyebrow outer point, right eyebrow center point, right eyebrow inner point, left eyebrow inner point, left eyebrow center point, left eyebrow outer point, nasion, nose tip, nose lower-right boundary point, nose bottom boundary point, nose lower-left boundary point, right eye outer corner, right eye inner corner, left eye inner corner, left eye outer corner, right lip corner, upper lip right peak point, upper lip center point, upper lip left peak point, left lip corner point, lower lip left edge point, lower lip center point, lower lip right edge point, lower lip top point, lower lip bottom point, right eye upper corner point, right eye lower corner point, left eye upper corner point, and left eye lower corner point.
4. The facial expression recognition method of claim 1, characterized by further comprising:
collecting, within the recognition time period, head position data expressed as angle values, and recording facial expression behavior data; the head position data comprising the pitch angle, yaw angle, and roll angle expressed with respect to simulated coordinate axes on the head; the facial expression behavior data comprising the number of times a facial expression occurs and the duration of each occurrence; the facial expression data further including the head position data and the facial expression behavior data.
5. The facial expression recognition method of claim 1, characterized in that collecting, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state comprises:
collecting, while the facial feature point data is being collected, the pulse signal at one or more positions around the heart or at the fingertip, earlobe, or wrist, as the physiological sensor data.
6. The facial expression recognition method of claim 5, characterized in that recognizing a facial expression using facial expression data and the corresponding physiological sensor data comprises:
performing HRV analysis on the pulse signal in the physiological sensor data to obtain a power density spectrum, and determining an emotional state category according to the range in which the power density spectrum falls;
mapping the facial expression data to a mood valence value to determine a fine-grained emotional state;
in the case where the fine-grained emotional state belongs to the emotional state category, taking the fine-grained emotional state as the facial expression recognition result.
7. The facial expression recognition method of claim 6, characterized in that, when the range in which the power density spectrum falls is 0.7 Hz to 0.8 Hz, the emotional state category is the relaxed state; and when the range in which the power density spectrum falls is 0.55 Hz to 0.65 Hz, the emotional state category is the tense state.
8. A facial expression recognition device, characterized by comprising:
a facial feature point data collection unit, configured to collect, within a recognition time period and at a set frame rate, facial feature point data expressed as coordinate values;
a physiological sensor data unit, configured to collect, while the facial feature point data is being collected, physiological sensor data capable of reflecting an emotional state;
a facial expression data recognition unit, configured to recognize a facial expression using facial expression data and the corresponding physiological sensor data, the facial expression data including the facial feature point data.
9. An electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when executing the program, the processor implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method of any one of claims 1 to 7 are implemented.
CN201910092813.0A 2019-01-30 2019-01-30 Human facial expression recognition method and device Pending CN109800734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910092813.0A CN109800734A (en) 2019-01-30 2019-01-30 Human facial expression recognition method and device


Publications (1)

Publication Number Publication Date
CN109800734A true CN109800734A (en) 2019-05-24

Family

ID=66560587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910092813.0A Pending CN109800734A (en) 2019-01-30 2019-01-30 Human facial expression recognition method and device

Country Status (1)

Country Link
CN (1) CN109800734A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831846A (en) * 2006-04-20 2006-09-13 上海交通大学 Face posture identification method based on statistical model
CN106778506A (en) * 2016-11-24 2017-05-31 重庆邮电大学 A kind of expression recognition method for merging depth image and multi-channel feature
DE102016225222A1 (en) * 2016-12-16 2018-06-21 Bayerische Motoren Werke Aktiengesellschaft Method and device for influencing an interaction process
CN109145700A (en) * 2017-06-19 2019-01-04 卡西欧计算机株式会社 Expression decision maker, expression determination method and recording medium
CN107392124A (en) * 2017-07-10 2017-11-24 珠海市魅族科技有限公司 Emotion identification method, apparatus, terminal and storage medium
CN107992199A (en) * 2017-12-19 2018-05-04 广东小天才科技有限公司 A kind of Emotion identification method, system and electronic equipment for electronic equipment
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information
CN108564007A (en) * 2018-03-27 2018-09-21 深圳市智能机器人研究院 A kind of Emotion identification method and apparatus based on Expression Recognition
CN109240786A (en) * 2018-09-04 2019-01-18 广东小天才科技有限公司 A kind of subject replacement method and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
陈玉霏 (Chen Yufei): "基于中医情志理论的悲喜情绪识别模式研究" ["Research on Sadness-Joy Emotion Recognition Patterns Based on the TCM Theory of Emotions"], 《中国博士学位论文全文数据库 医药卫生科技辑》 [China Doctoral Dissertations Full-text Database, Medicine & Health Sciences] *
魏育林 著 (Wei Yulin): 《亚健康音乐条例基础》 [Foundations of Sub-health Music Regulation], 31 May 2011 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination