CN111611860B - Micro-expression occurrence detection method and detection system - Google Patents

Micro-expression occurrence detection method and detection system

Info

Publication number
CN111611860B
CN111611860B
Authority
CN
China
Prior art keywords
data
electroencephalogram
time
expression
micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010321480.7A
Other languages
Chinese (zh)
Other versions
CN111611860A (en
Inventor
刘光远
赵兴骢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University
Original Assignee
Southwest University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University filed Critical Southwest University
Priority to CN202010321480.7A priority Critical patent/CN111611860B/en
Publication of CN111611860A publication Critical patent/CN111611860A/en
Application granted granted Critical
Publication of CN111611860B publication Critical patent/CN111611860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 - Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Image Analysis (AREA)

Abstract

A micro-expression occurrence detection method: 1) recording the normal electroencephalogram data and normal facial video data of the subject before stimulating and inducing the micro-expression; 2) stimulating and inducing the micro-expression, and recording electroencephalogram data and facial video data; 3) marking timestamp data, and matching the electroencephalogram time data with the facial video time data; 4) obtaining the reaction starting time Tsm from the electroencephalogram time data, and the start-stop frames and top frames of the facial expression change; 5) judging whether the micro-expression has occurred. The method associates the electroencephalogram signal with the facial image information through timestamps, processes the electroencephalogram signal and the facial video data separately, and judges whether a micro-expression has occurred by combining the two processing results, giving a highly accurate judgment.

Description

Micro-expression occurrence detection method and detection system
Technical Field
The invention relates to the technical field of electroencephalogram signal and face video signal processing, in particular to a method for detecting whether micro-expression occurs.
Background
A micro-expression is a transient facial expression lasting 1/25 s to 1/2 s. It is considered a natural emotional expression that is difficult to control when people suppress or try to hide their real emotions, which makes it a very important clue in lie detection; its importance and its wide range of application scenarios are receiving more and more attention.
At present, micro-expressions are mainly detected by expression recognition methods, which can to some extent separate a specific expression state from a given static image or dynamic video sequence, determine the psychological mood of the recognized subject, and thereby let a computer understand and recognize facial expressions. With the development of electroencephalography, its high temporal resolution and high sensitivity to brain activity provide a more direct means of detecting the occurrence of micro-expressions; however, detecting expressions and emotions through electroencephalogram signals alone is a single detection modality with low accuracy.
Publication CN109344816A discloses a method for real-time detection of facial actions based on electroencephalogram signals, in which the electroencephalogram signals and the corresponding facial action pictures are correlated in time so that the electroencephalogram signal corresponding to each frame can be extracted; a BP neural network is built from extracted electroencephalogram features to establish a facial action detection model, achieving recognition of three types of facial actions from the electroencephalogram signals. Its disadvantages are: 1. the patent does not disclose a specific way of time-correlating the electroencephalogram signals with the facial action pictures, and since electroencephalogram processing and facial picture recognition each introduce some delay, the data synchronization problem is not solved; 2. facial action detection with a BP neural network requires a large amount of computation and data processing and cannot handle large volumes of data; 3. in essence the patent recognizes the facial image from the electroencephalogram signals; the electroencephalogram signals and the facial image are not processed separately and then judged jointly, so the accuracy is insufficient.
Disclosure of Invention
The invention aims to provide a micro-expression occurrence detection method that associates electroencephalogram signals with facial image information through timestamps, processes the electroencephalogram signals and the facial video data separately, and judges whether a micro-expression has occurred by combining the processing results, giving a highly accurate judgment.
This aim is achieved by the following technical scheme, which comprises steps before and steps after stimulating and inducing the micro-expression. Before stimulating and inducing the micro-expression, the normal electroencephalogram data and normal facial video data of the subject are recorded. The steps after stimulating and inducing the micro-expression comprise:
1) stimulating and inducing the micro-expression, and recording electroencephalogram data and facial video data;
2) marking time stamp data for each segment of electroencephalogram data and each frame of face video data, and matching to generate electroencephalogram time data and face video time data;
3) processing the electroencephalogram data, the electroencephalogram time data, the facial video data and the facial video time data to obtain the electroencephalogram micro-expression occurrence time Tsm judged from the electroencephalogram data and the electroencephalogram time data, and to obtain the start-stop frames and top frames of the facial expression change judged from the facial video data and the facial video time data;
4) judging whether the micro-expression has occurred according to the electroencephalogram micro-expression occurrence time Tsm obtained in step 3) and the start-stop frames and top frames of the facial expression change.
Further, the specific method for recording the normal electroencephalogram data and normal facial video data of the subject, and for recording the electroencephalogram data and facial video data in step 1), is as follows:
acquiring and recording the normal electroencephalogram data and the electroencephalogram data from 128 electrodes at a sampling rate of 1024 Hz using a Biosemi Active system; acquiring and recording the normal facial video data and the facial video data at a rate of 80 frames per second by the high-speed camera of the Biosemi Active system.
Further, the specific method for marking timestamp data for each section of electroencephalogram data and each frame of face video data in the step 2) comprises the following steps:
using the time synchronization module of the Biosemi Active system, the timestamp data in the time synchronization module are transmitted synchronously to the electroencephalogram acquisition module and the high-speed camera acquisition module of the Biosemi Active system, so that each segment of electroencephalogram data acquired by the electroencephalogram acquisition module and each frame of facial video data acquired by the high-speed camera carry the synchronized timestamp data, i.e. the electroencephalogram time data and the facial video time data are generated.
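As an illustration of how such timestamp matching could be done, the following minimal Python sketch pairs each video frame with the nearest-timestamped EEG sample on a shared clock; the 1024 Hz sampling rate and 80 fps frame rate come from this description, while the function names and the simulated timestamps are assumptions made only for illustration.

import numpy as np

EEG_RATE = 1024   # Hz, EEG sampling rate stated in the description
FPS = 80          # frames per second of the high-speed camera

def make_timestamps(n_items: int, rate: float, t0: float = 0.0) -> np.ndarray:
    """Timestamps (seconds) stamped by a shared clock at acquisition time."""
    return t0 + np.arange(n_items) / rate

def match_frames_to_eeg(frame_ts: np.ndarray, eeg_ts: np.ndarray) -> np.ndarray:
    """For every video frame, return the index of the EEG sample whose timestamp
    is closest, so both streams can be analysed on a common time base."""
    idx = np.searchsorted(eeg_ts, frame_ts)
    idx = np.clip(idx, 1, len(eeg_ts) - 1)
    left, right = eeg_ts[idx - 1], eeg_ts[idx]
    return np.where(frame_ts - left <= right - frame_ts, idx - 1, idx)

if __name__ == "__main__":
    eeg_ts = make_timestamps(10 * EEG_RATE, EEG_RATE)   # 10 s of EEG samples
    frame_ts = make_timestamps(10 * FPS, FPS)           # 10 s of video frames
    pairing = match_frames_to_eeg(frame_ts, eeg_ts)
    print(pairing[:5])   # EEG sample indices aligned to the first five frames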
Further, the specific steps of processing the electroencephalogram data and the electroencephalogram time data in the step 3) are as follows:
3-1) using the normal electroencephalogram data as baseline data, calculating the normal Gamma-band PSD values of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 in the normal state, wherein the PSD value represents the power carried per unit frequency, and the PSD calculation formula is
PSD(k) = |X(k)|^2 / N
where X(k) represents the Fourier transform of a sequence of length N, and k represents the frequency;
3-2) for the electroencephalogram time data, setting the sliding-window duration W = 2 s, with a sliding step of 2 × (1/fs), where fs is the electroencephalogram sampling frequency; calculating the Gamma-band PSD values of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 within each 2 s sliding window, and comparing them with the normal PSD values of the corresponding channels; if the PSD value of any of the channels D23, A09 or B26 is higher than its normal PSD value, the electroencephalogram data are assumed to have changed and the procedure goes to step 3-3); if no channel's PSD value exceeds its normal PSD value, no processing is performed and the electroencephalogram data are determined to be unchanged;
3-3) taking the 2 s of data in the sliding window W in which the electroencephalogram data are assumed to have changed, and first calculating the energy value E within these 2 s; the energy formula is
E = Σ |x(n)|^2, n = 1, 2, …, N
where x(n) is the signal amplitude and N is the data length, i.e. the 2 s of data; the mean of the energy values is taken as the threshold G: G = 1/2 × E; the energy values are compared with the threshold G: if E > G and a given sampling point En continuously reaches the threshold within 5 ms (E1, …, En > G), then Tn is preliminarily taken as the reaction starting point Ts; at the same time, a contrast threshold PR is set, whose formula is
Figure BDA0002461597070000032
calculating the contrast values PRn of the n sampling points before the starting point Ts; if |PRn - PR(n-1)| = 0, the moment corresponding to the first such sampling point PRn is taken as the reaction starting point Ts; if the reaction starting point Ts is found successfully, go to step 3-4); if the reaction starting point Ts is not found, return to step 3-2);
3-4) respectively taking the brain-region channel starting times Ts of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 at the reaction starting point Ts; if Ts(D23) - Ts(B26) > 0 or Ts(D23) - Ts(A09) > 0, and the following condition is also satisfied,
Figure BDA0002461597070000033
then this time point is recorded as the electroencephalogram micro-expression occurrence time Tsm; if the condition is not met, return to step 3-2);
3-5) obtaining the electroencephalogram micro-expression occurrence time Tsm under the electroencephalogram data.
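The following Python sketch illustrates the kind of processing described in steps 3-1) to 3-3): a per-channel Gamma-band PSD baseline, a 2 s sliding window whose PSD is compared against the baseline, and an energy threshold G = 1/2 × E with a 5 ms run requirement for locating a candidate reaction start Ts. The Gamma band limits (30-80 Hz), the choice of channel used for the energy test, and all function names are assumptions; the contrast-threshold PR refinement and the cross-channel timing condition of steps 3-3) to 3-4) are not reproduced here.

import numpy as np
from typing import Optional
from scipy.signal import periodogram

FS = 1024                      # EEG sampling rate (Hz)
GAMMA = (30.0, 80.0)           # assumed Gamma band limits; not given in the patent
CHANNELS = ("D23", "A09", "B26")

def gamma_psd(x: np.ndarray, fs: float = FS) -> float:
    """Mean power spectral density of x inside the Gamma band
    (periodogram estimate, PSD(k) = |X(k)|^2 / N up to scaling)."""
    f, pxx = periodogram(x, fs=fs)
    band = (f >= GAMMA[0]) & (f <= GAMMA[1])
    return float(pxx[band].mean())

def detect_onset(eeg: dict, baseline: dict, fs: float = FS,
                 win_s: float = 2.0, hop: int = 2) -> Optional[float]:
    """Slide a 2 s window (hop = 2 samples, i.e. 2*(1/fs) s) over the three
    channels; when any channel's Gamma PSD exceeds its baseline value, look
    for the reaction start Ts inside that window via an energy threshold."""
    n_win = int(win_s * fs)
    n = len(next(iter(eeg.values())))
    for start in range(0, n - n_win, hop):
        seg = {ch: eeg[ch][start:start + n_win] for ch in CHANNELS}
        if not any(gamma_psd(seg[ch]) > baseline[ch] for ch in CHANNELS):
            continue                       # no change detected in this window
        x = seg["D23"]                     # channel assumed to carry the change
        energy = x ** 2                    # per-sample energy
        G = 0.5 * energy.mean()            # threshold G = 1/2 * mean energy
        run = int(0.005 * fs)              # 5 ms of consecutive samples
        above = energy > G
        for i in range(len(above) - run):
            if above[i:i + run].all():
                return (start + i) / fs    # candidate reaction start Ts (seconds)
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = {ch: rng.standard_normal(4 * FS) for ch in CHANNELS}
    trial = {ch: rng.standard_normal(6 * FS) for ch in CHANNELS}
    trial["D23"][3 * FS:] += 3 * np.sin(2 * np.pi * 40 * np.arange(3 * FS) / FS)
    baseline = {ch: gamma_psd(base[ch]) for ch in CHANNELS}
    print("candidate Ts (s):", detect_onset(trial, baseline))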
Further, the specific steps of processing the face video data and the face video time data in step 3) are as follows:
3-6) detecting the human face, and detecting the specific position of the human face from the original image of each frame;
3-7) carrying out face alignment and facial reference point positioning on the face in the acquired facial video; automatically locating the key facial features by applying a CLM (constrained local model) to the input face image;
3-8) extracting deformation-based expression features: labeling the facial feature points with the CLM, obtaining the coordinates of the facial reference points, and calculating the slope information between the facial reference points to extract deformation-based expression features; at the same time, tracking the key points in the three facial regions (eyebrows, eyes and lips), extracting the corresponding displacement information, extracting the distance information between specific feature points of each expression picture, and subtracting the corresponding distances of the calm picture to obtain the distance-change information, thereby extracting movement-based expression features;
3-9) obtaining the start-stop frames and top frame from the extracted facial feature data: setting a threshold R for the difference k between the feature-point distances of the current picture and those of the calm picture; the first frame with k > R is judged to be the start frame; among the frames after the start frame, the frame with the maximum k value is judged to be the top frame, and the first subsequent frame with k < R is taken as the end frame.
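A minimal sketch of the start/top/end-frame logic of step 3-9), assuming the per-frame feature-point distances and the calm-picture distances are already available as arrays; the deviation measure k (here the mean absolute distance difference) and the threshold R are illustrative choices.

import numpy as np

def find_expression_frames(dist_per_frame: np.ndarray,
                           calm_dist: np.ndarray,
                           R: float):
    """Given per-frame feature-point distances (frames x n_points) and the
    distances measured on a calm (neutral) frame, compute the deviation k for
    every frame and return (start, top, end) frame indices, or None if k never
    exceeds the threshold R."""
    k = np.abs(dist_per_frame - calm_dist).mean(axis=1)   # per-frame deviation
    above = np.where(k > R)[0]
    if above.size == 0:
        return None
    start = int(above[0])                                  # first frame with k > R
    later = k[start:]
    top = start + int(np.argmax(later))                    # frame with maximal k
    below = np.where(later < R)[0]
    end = start + int(below[0]) if below.size else len(k) - 1   # first return below R
    return start, top, end

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    calm = rng.uniform(20, 60, size=38)                # 38 reference distances (pixels)
    frames = np.tile(calm, (200, 1))
    frames[80:120] += np.linspace(0, 4, 40)[:, None]   # a brief bulge in the distances
    print(find_expression_frames(frames, calm, R=1.0))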
Further, step 3-6) is to detect the human face, and the specific steps of detecting the specific position of the human face from the original image of each frame are as follows:
3-6-1) extracting a response image by adopting a Local Binary Pattern (LBP);
3-6-2) processing the response image with the AdaBoost algorithm to separate the face region; the LBP algorithm first scans every pixel of the original image line by line, binarizes the eight neighboring points in the 3 × 3 window around each pixel using that pixel's gray value as the threshold, concatenates them in order into an 8-bit binary number, and takes the value of this binary number (0-255) as the response of that point.
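A small Python sketch of the LBP response computation described above; the clockwise bit order and the use of ">=" for binarisation are assumptions, since the description only states that the eight 3 × 3 neighbors are binarized against the center pixel's gray value.

import numpy as np

def lbp_response(gray: np.ndarray) -> np.ndarray:
    """Local Binary Pattern response: for every interior pixel, binarize its
    eight 3x3 neighbors against the center gray value and pack the bits
    (clockwise from the top-left) into one byte in the range 0..255."""
    g = gray.astype(np.int32)
    h, w = g.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # neighbor offsets in a fixed clockwise order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:h-1, 1:w-1]
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= ((neigh >= centre).astype(np.int32)) << (7 - bit)
    out[1:h-1, 1:w-1] = code.astype(np.uint8)
    return out

if __name__ == "__main__":
    img = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(np.uint8)
    print(lbp_response(img))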
Further, the specific steps of automatically locating the key facial features with the CLM local constraint model from the input face image in step 3-7) are as follows:
3-7-1) modeling the shape of the face: for M pictures, each with N feature points, the coordinates of each feature point being (xi, yi), the vector formed by the coordinates of the N feature points of one image is written x = [x1 y1 x2 y2 … xN yN]^T; the mean face coordinates over all images are then:
x̄ = (1/M) Σ xi, i = 1, 2, …, M
calculating the difference between the shape of each sample image and the mean face coordinates to obtain the zero-mean shape-variation matrix X; applying PCA (principal component analysis) to X to obtain the principal components of face variation, with eigenvalues λi and corresponding eigenvectors pi; selecting the eigenvectors corresponding to the k largest eigenvalues to form the orthogonal matrix P = (p1, p2, …, pk); the shape-variation weight vector is b = (b1, b2, …, bk)^T, where each component of b represents the magnitude of the variation along the corresponding eigenvector:
b = P^T (x - x̄)
for any face detection image, the sample shape vector can be expressed as:
x = x̄ + P b
3-7-2) establishing a patch model for each feature point: taking a fixed-size patch region around each feature point and labeling the patch containing the feature point as a positive sample; then cropping a patch of the same size from a non-feature-point region and labeling it as a negative sample; each feature point has r patches in total, which are collected into a vector (x^(1), x^(2), …, x^(r))^T, so that for each image in the sample set there is
{(x^(i), y^(i)), i = 1, 2, …, r}
where y^(i) ∈ {-1, 1}, i = 1, 2, …, r; y^(i) = 1 marks a positive sample and y^(i) = -1 marks a negative sample; the trained linear support vector machine is:
f(x) = Σ αi xi^T x + b, i = 1, 2, …, Ms
where xi is a subspace vector of the sample set, αi is a weight coefficient, Ms is the number of support vectors of each feature point, and b is an offset; from this one obtains y^(i) = W^T · x^(i) + θ, where W^T = [W1 W2 … Wn] contains the weight coefficients of the support vectors and θ is the offset;
3-7-3) fitting the face points: a local search is performed over a bounded region around the currently estimated feature-point position, generating a response map R(x, y) for each feature point. A quadratic function is fitted to the response map: assuming R(x, y) attains its maximum at (x0, y0) within the region, the quadratic function r(x, y) = a(x - x0)^2 + b(y - y0)^2 + c can be used to fit this position, where a, b and c are the coefficients of the quadratic function; using least squares, δ = min Σ over (x, y) of [R(x, y) - r(x, y)]^2, the minimum error between R(x, y) and r(x, y) is found; a deformation-constraint cost term is added to form the objective function for the feature-point search, which can be expressed as:
f(p) = Σ Ri(xi, yi) - β Σ bj^2 / λj, i = 1, …, N; j = 1, …, k
optimizing this objective function at each step to obtain new feature-point positions, and iterating the update until the maximum converges.
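As a sketch of the shape-model part of step 3-7-1), the following Python code builds the mean shape and the top-k PCA modes from a set of landmark vectors and shows the reconstruction x ≈ x̄ + Pb with b = P^T(x - x̄); the patch-model SVM and the response-map fitting of steps 3-7-2) and 3-7-3) are not included, and all names and the random training data are illustrative assumptions.

import numpy as np

def build_shape_model(shapes: np.ndarray, k: int):
    """shapes: (M, 2N) array, each row the flattened [x1 y1 ... xN yN] landmark
    coordinates of one training image.  Returns the mean shape, the top-k
    eigenvectors P and their eigenvalues, so any shape can be approximated as
    x ~= mean + P @ b with b = P.T @ (x - mean)."""
    mean = shapes.mean(axis=0)
    X = shapes - mean                       # zero-mean shape-variation matrix
    cov = X.T @ X / (len(shapes) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending order
    order = np.argsort(eigvals)[::-1][:k]   # keep the k largest modes
    return mean, eigvecs[:, order], eigvals[order]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    M, N, k = 50, 49, 5                     # 49 landmarks as in the embodiment
    shapes = rng.normal(size=(M, 2 * N))
    mean, P, lam = build_shape_model(shapes, k)
    x = shapes[0]
    b = P.T @ (x - mean)                    # shape weights
    x_hat = mean + P @ b                    # reconstruction with k modes
    print("reconstruction error:", np.linalg.norm(x - x_hat))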
Further, the rule for judging whether the micro-expression has occurred in step 4) is:
first judging from the moment of the electroencephalogram activity reaction whether a response has occurred; if so, taking the electroencephalogram micro-expression occurrence time Tsm as the starting point and searching within the time threshold TL for a change of expression; if, according to the times of the start and end frames, an expression occurring within 500 ms is found, it is finally judged that the micro-expression has occurred; if no expression occurring within 500 ms appears, it is finally judged that no micro-expression has occurred.
Further, the time threshold TL is 500ms to 1000 ms.
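A minimal sketch of the combined decision rule, assuming the EEG-derived time Tsm (in seconds), the facial start/end frames and the 80 fps frame rate are available; the 0.5 s duration limit and the TL search window follow the rule above, while the function name and default values are assumptions.

def micro_expression_decision(t_sm: float,
                              start_frame: int, end_frame: int,
                              fps: float = 80.0,
                              tl: float = 1.0) -> bool:
    """Combine the EEG-derived occurrence time t_sm (seconds) with the facial
    start/end frames: an expression counts as a micro-expression only if it
    begins inside the search window [t_sm, t_sm + tl] and its start-to-end
    duration is at most 0.5 s."""
    t_start = start_frame / fps            # frame indices converted to seconds
    t_end = end_frame / fps
    within_window = t_sm <= t_start <= t_sm + tl
    short_enough = (t_end - t_start) <= 0.5
    return within_window and short_enough

if __name__ == "__main__":
    print(micro_expression_decision(t_sm=2.10, start_frame=180, end_frame=205))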
It is another object of the present invention to provide a microexpression detection system.
This object is achieved by the following technical scheme; the detection system comprises:
The data acquisition module is used for recording normal electroencephalogram data and normal facial video data before the micro expression is induced by stimulation; recording the electroencephalogram data and the facial video data after the micro expression is induced by stimulation;
the time matching module is used for marking the time stamp of each section of electroencephalogram data and each frame of face video data to generate electroencephalogram time data and face video time data;
the data processing module is used for processing the electroencephalogram data, the electroencephalogram time data, the facial video data and the facial video time data, calculating the electroencephalogram micro-expression occurrence time Tsm, and judging the start-stop frames and top frames of the facial expression change from the facial video data and the facial video time data;
the micro-expression judgment module is used for judging whether the micro-expression has occurred according to the electroencephalogram micro-expression occurrence time Tsm and the start-stop frames and top frames of the facial expression change.
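The module structure could be sketched, for illustration only, as the following Python skeleton; the class and method names are assumptions and the bodies are left unimplemented, since the patent defines the modules functionally rather than as code.

from dataclasses import dataclass
from typing import List, Tuple, Optional

@dataclass
class TimedEEGSegment:
    timestamp: float          # shared-clock timestamp of the segment
    samples: List[float]      # EEG samples in the segment

@dataclass
class TimedFrame:
    timestamp: float          # shared-clock timestamp of the frame
    image: object             # frame pixels (e.g. a numpy array)

class DataAcquisitionModule:
    def record_baseline(self) -> Tuple[List[TimedEEGSegment], List[TimedFrame]]: ...
    def record_trial(self) -> Tuple[List[TimedEEGSegment], List[TimedFrame]]: ...

class TimeMatchingModule:
    def stamp_and_match(self, eeg, frames):
        """Attach shared timestamps and pair each frame with its EEG segment."""
        ...

class DataProcessingModule:
    def eeg_occurrence_time(self, eeg) -> Optional[float]:
        """Return Tsm, the EEG-determined micro-expression occurrence time."""
        ...
    def expression_frames(self, frames) -> Optional[Tuple[int, int, int]]:
        """Return (start, top, end) frames of the facial expression change."""
        ...

class MicroExpressionJudgementModule:
    def decide(self, t_sm: Optional[float],
               frames: Optional[Tuple[int, int, int]]) -> bool:
        """Combine both results and decide whether a micro-expression occurred."""
        ...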
Due to the adoption of the technical scheme, the invention has the following advantages:
1. the invention associates the electroencephalogram data with the facial video data through timestamps, which solves the delay problem between electroencephalogram-signal processing and facial-picture recognition and achieves data synchronization; 2. the invention judges whether a micro-expression occurs by combining the periods in which the electroencephalogram data and the facial video data change; the method is simple and saves a large amount of computation time and resources; 3. the invention processes the electroencephalogram signals and the facial video data separately and judges whether a micro-expression has occurred by combining the processing results, giving a highly accurate judgment.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
The drawings of the invention are illustrated below.
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
Example:
A micro-expression occurrence detection method. Before the micro-expression is stimulated and induced, the normal electroencephalogram data and normal facial video data of the subject are recorded. Among the 128 electroencephalogram channels, channel D23 is used as the most representative channel of the left temporal lobe, channel A09 as the most representative channel of the right temporal lobe, and channel B26 as the most representative channel of the prefrontal lobe. The face model is trained with 66 facial coordinates and provides 49 facial coordinate points in its output; the facial points are extracted after correcting the head pose in the calm state. An alignment is made between the neutral face of each subject and the average face of all subjects, and all tracking points of the sequence are recorded using this alignment. The reference point is generated by averaging the coordinates of the inner eye corners and the nose landmarks. The distances of 38 points, covering the eyebrows, eyes and lips, to the reference point are calculated and averaged.
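As an illustration of the reference-point and distance computation described in this embodiment, the following Python sketch averages a few assumed inner-eye-corner/nose landmark indices into a reference point and computes the mean distance of 38 eyebrow/eye/lip points to it; the specific landmark indices are hypothetical, since they are not listed here.

import numpy as np

def reference_point(landmarks: np.ndarray, ref_idx) -> np.ndarray:
    """Average the inner-eye-corner and nose landmarks into a stable reference
    point (the indices in ref_idx are assumptions, not given in the patent)."""
    return landmarks[list(ref_idx)].mean(axis=0)

def mean_distance_to_reference(landmarks: np.ndarray,
                               feature_idx, ref_idx) -> float:
    """Mean Euclidean distance from the selected eyebrow/eye/lip landmarks to
    the reference point, as used for the per-frame deviation measure."""
    ref = reference_point(landmarks, ref_idx)
    pts = landmarks[list(feature_idx)]
    return float(np.linalg.norm(pts - ref, axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    lm = rng.uniform(0, 200, size=(49, 2))        # 49 tracked points per frame
    ref_idx = (19, 22, 25, 28)                    # hypothetical eye-corner/nose indices
    feat_idx = range(0, 38)                       # 38 eyebrow/eye/lip points
    print(mean_distance_to_reference(lm, feat_idx, ref_idx))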
The specific method after the stimulation induces the micro-expression comprises the following steps:
1) stimulating and inducing the micro-expression, and recording electroencephalogram data and facial video data;
2) marking time stamp data for each section of electroencephalogram data and each frame of face video data, and matching to generate electroencephalogram time data and face video time data;
3) processing the electroencephalogram data, the electroencephalogram time data, the facial video data and the facial video time data to obtain Tsm, the moment of micro-expression occurrence judged from the EEG, and to obtain the start-stop frames and top frames of the facial expression change;
4) judging whether the micro-expression has occurred according to the Tsm obtained in step 3) and the start-stop frames and top frames of the facial expression change.
The specific method for recording the electroencephalogram data and the facial video data in the step 1) comprises the following steps: acquiring and recording normal electroencephalogram data and electroencephalogram data at a 1024Hz sampling rate from 128 electrode records by using a Biosemi Active system; normal face video data and face video data were acquired and recorded at a rate of 80 frames per second by a high-speed camera of the Biosemi Active system.
The specific method for marking the timestamp data for each section of electroencephalogram data and each frame of face video data in the step 2) is as follows: the time synchronization module of the Biosemi Active system is utilized to synchronously transmit the timestamp data in the time synchronization module to the electroencephalogram acquisition module and the high-speed camera acquisition module of the Biosemi Active system, so that each segment of electroencephalogram data acquired by the electroencephalogram acquisition module and each frame of facial video data acquired by the high-speed camera comprise the synchronous timestamp data, namely, the electroencephalogram time data and the facial video time data are generated.
The specific steps for processing the electroencephalogram data and the electroencephalogram time data are as follows:
3-1) calculating the PSD normal values of normal Gamma wave bands of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 in normal state by taking normal electroencephalogram data as baseline data, wherein the PSD normal values represent the power carried by unit frequency waves, and the PSD normal value calculation formula is
PSD(k) = |X(k)|^2 / N
where X(k) represents the Fourier transform of a sequence of length N, and k represents the frequency;
3-2) for the electroencephalogram time data, setting the sliding-window duration W = 2 s, with a sliding step of 2 × (1/fs), where fs is the electroencephalogram sampling frequency; calculating the Gamma-band PSD values of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 within each 2 s sliding window, and comparing them with the normal PSD values of the corresponding channels; if the PSD value of any of the channels D23, A09 or B26 is higher than its normal PSD value, the electroencephalogram data are assumed to have changed and the procedure goes to step 3-3); if no channel's PSD value exceeds its normal PSD value, no processing is performed and the electroencephalogram data are determined to be unchanged;
3-3) taking the 2 s of data in the sliding window W in which the electroencephalogram data are assumed to have changed, and first calculating the energy value Ei within these 2 s; the energy formula is
Ei = Σ |Xi(k)|^2, summed over k
where Xi(k) is the FFT (fast Fourier transform) of the electroencephalogram signal and k runs over the data length, i.e. the 2 s of data; the mean of the energy values is taken as the threshold G: G = 1/2 × E; the energy values are compared with the threshold G: if E > G and a given sampling point En continuously reaches the threshold within 5 ms (E1, …, En > G), then Tn is preliminarily taken as the reaction starting time Ts of the corresponding brain-region channel; at the same time, a contrast threshold PR is set according to the formula
Figure BDA0002461597070000073
calculating the contrast values PRn of the n sampling points before the starting point Ts; if |PRn - PR(n-1)| = 0, the time corresponding to the first such sampling point PRn is taken as the reaction starting time Ts of the corresponding brain-region channel; if the reaction starting point Ts is found successfully, go to step 3-4); if the reaction starting point Ts is not found, return to step 3-2);
3-4) taking the reaction starting times Ts of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 at the reaction starting point Ts; if Ts(D23) - Ts(B26) > 0 or Ts(D23) - Ts(A09) > 0, i.e. the starting time of the D23 channel precedes the starting time of the B26 channel, or the starting time of the D23 channel precedes the starting time of the A09 channel, and the following condition is also satisfied,
Figure BDA0002461597070000081
then the time Tsm at which the micro-expression is determined to have occurred by EEG is recorded; if the condition is not met, return to step 3-2);
3-5) obtaining the moment Tsm of micro-expression occurrence discriminated by EEG.
The specific steps of processing the face video data and the face video time data in the step 3) are as follows:
3-6) detecting the human face, and detecting the specific position of the human face from the original image of each frame;
3-7) carrying out face alignment and facial reference point positioning on the face in the acquired facial video; automatically locating the key facial features by applying a CLM (constrained local model) to the input face image;
3-8) extracting deformation-based expression features: labeling the facial feature points with the CLM, obtaining the coordinates of the facial reference points, and calculating the slope information between the facial reference points to extract deformation-based expression features; at the same time, tracking the key points in the three facial regions (eyebrows, eyes and lips), extracting the corresponding displacement information, extracting the distance information between specific feature points of each expression picture, and subtracting the corresponding distances of the calm picture to obtain the distance-change information, thereby extracting movement-based expression features;
3-9) obtaining the start-stop frames and top frame from the extracted facial feature data: setting a threshold R for the difference k between the feature-point distances of the current picture and those of the calm picture; the first frame with k > R is judged to be the start frame; among the frames after the start frame, the frame with the maximum k value is judged to be the top frame, and the first subsequent frame with k < R is taken as the end frame.
Step 3-6) detecting the human face, wherein the specific steps of detecting the specific position of the human face from the original image of each frame are as follows:
3-6-1) extracting a response image by adopting a Local Binary Pattern (LBP);
3-6-2) processing the response image with the AdaBoost algorithm to separate the face region; the LBP algorithm first scans every pixel of the original image line by line, binarizes the eight neighboring points in the 3 × 3 window around each pixel using that pixel's gray value as the threshold, concatenates them in order into an 8-bit binary number, and takes the value of this binary number (0-255) as the response of that point.
Step 3-7): automatically locating the key facial features with the CLM local constraint model from the input face image; the specific steps are as follows:
3-7-1) modeling the shape of the face: for M pictures, each with N feature points, the coordinates of each feature point being (xi, yi), the vector formed by the coordinates of the N feature points of one image is written x = [x1 y1 x2 y2 … xN yN]^T; the mean face coordinates over all images are then:
x̄ = (1/M) Σ xi, i = 1, 2, …, M
calculating the difference between the shape of each sample image and the mean face coordinates to obtain the zero-mean shape-variation matrix X; applying PCA (principal component analysis) to X to obtain the principal components of face variation, with eigenvalues λi and corresponding eigenvectors pi; selecting the eigenvectors corresponding to the k largest eigenvalues to form the orthogonal matrix P = (p1, p2, …, pk); the shape-variation weight vector is b = (b1, b2, …, bk)^T, where each component of b represents the magnitude of the variation along the corresponding eigenvector:
b = P^T (x - x̄)
for any face detection image, the sample shape vector can be expressed as:
x = x̄ + P b
3-7-2) establishing a patch model for each feature point: taking a fixed-size patch region around each feature point and labeling the patch containing the feature point as a positive sample; then cropping a patch of the same size from a non-feature-point region and labeling it as a negative sample; each feature point has r patches in total, which are collected into a vector (x^(1), x^(2), …, x^(r))^T, so that for each image in the sample set there is
{(x^(i), y^(i)), i = 1, 2, …, r}
where y^(i) ∈ {-1, 1}, i = 1, 2, …, r; y^(i) = 1 marks a positive sample and y^(i) = -1 marks a negative sample; the trained linear support vector machine is:
f(x) = Σ αi xi^T x + b, i = 1, 2, …, Ms
where xi is a subspace vector of the sample set, αi is a weight coefficient, Ms is the number of support vectors of each feature point, and b is an offset; from this one obtains y^(i) = W^T · x^(i) + θ, where W^T = [W1 W2 … Wn] contains the weight coefficients of the support vectors and θ is the offset;
3-7-3) fitting the face points: a local search is performed over a bounded region around the currently estimated feature-point position, generating a response map R(x, y) for each feature point. A quadratic function is fitted to the response map: assuming R(x, y) attains its maximum at (x0, y0) within the region, the quadratic function r(x, y) = a(x - x0)^2 + b(y - y0)^2 + c can be used to fit this position, where a, b and c are the coefficients of the quadratic function; using least squares, δ = min Σ over (x, y) of [R(x, y) - r(x, y)]^2, the minimum error between R(x, y) and r(x, y) is found; a deformation-constraint cost term is added to form the objective function for the feature-point search, which can be expressed as:
f(p) = Σ Ri(xi, yi) - β Σ bj^2 / λj, i = 1, …, N; j = 1, …, k
optimizing this objective function at each step to obtain new feature-point positions, and iterating the update until the maximum converges.
The judgment rule in step 4) is: first judging from the electroencephalogram activity reaction whether a response has occurred; if so, taking the micro-expression occurrence time Tsm as the starting point and searching within the time threshold TL, generally 500 ms to 1000 ms, for a change of expression; if, according to the times of the start and end frames, an expression occurring within 500 ms is found, it is finally judged that the micro-expression has occurred; if no expression occurring within 500 ms appears, it is finally judged that no micro-expression has occurred.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (9)

1. A method for detecting the occurrence of a micro-expression, comprising steps before and steps after stimulating and inducing the micro-expression, the step before stimulating and inducing the micro-expression being recording the normal electroencephalogram data and normal facial video data of the subject, characterized in that the method comprises the following steps after stimulating and inducing the micro-expression:
1) Stimulating and inducing the micro-expression, and recording electroencephalogram data and facial video data;
2) marking time stamp data for each section of electroencephalogram data and each frame of face video data, and matching to generate electroencephalogram time data and face video time data;
3) processing the electroencephalogram data, the electroencephalogram time data, the facial video data and the facial video time data to obtain the electroencephalogram micro-expression occurrence time Tsm judged from the electroencephalogram data and the electroencephalogram time data, and to obtain the start-stop frames and top frames of the facial expression change judged from the facial video data and the facial video time data;
4) judging whether the micro-expression has occurred according to the electroencephalogram micro-expression occurrence time Tsm obtained in step 3) and the start-stop frames and top frames of the facial expression change;
the specific steps for processing the electroencephalogram data and the electroencephalogram time data in the step 3) are as follows:
3-1) calculating the PSD normal values of normal Gamma wave bands of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 in normal state by taking normal electroencephalogram data as baseline data, wherein the PSD normal values represent the power carried by unit frequency waves, and the PSD normal value calculation formula is
PSD(k) = |X(k)|^2 / N
where X(k) represents the Fourier transform of a sequence of length N, and k represents the frequency;
3-2) for the electroencephalogram time data, setting the sliding-window duration W = 2 s, with a sliding step of 2 × (1/fs), where fs is the electroencephalogram sampling frequency; calculating the Gamma-band PSD values of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 within each 2 s sliding window, and comparing them with the normal PSD values of the corresponding channels; if the PSD value of any of the channels D23, A09 or B26 is higher than its normal PSD value, the electroencephalogram data are assumed to have changed and the procedure goes to step 3-3); if no channel's PSD value exceeds its normal PSD value, no processing is performed and the electroencephalogram data are determined to be unchanged;
3-3) taking the 2 s of data in the sliding window W in which the electroencephalogram data have changed, and first calculating the energy value E within these 2 s; the energy formula is
E = Σ |x(n)|^2, n = 1, 2, …, N
where x(n) is the signal amplitude and N is the data length, i.e. the 2 s of data; the mean of the energy values is taken as the threshold G: G = 1/2 × E; the energy values are compared with the threshold G: if E > G and a given sampling point En continuously reaches the threshold within 5 ms (E1, …, En > G), Tn is preliminarily taken as the reaction starting point Ts; at the same time, a contrast threshold PR is set, whose formula is
Figure FDA0003638797540000013
calculating the contrast value PRn of the n sampling points before the starting point Ts and comparing it with PR(n-1) of the preceding n-1 sampling points; if PRn - PR(n-1) = 0, the moment corresponding to the first such sampling point PRn is the reaction starting point Ts; if the reaction starting point Ts is found successfully, go to step 3-4); if the reaction starting point Ts is not found, return to step 3-2);
3-4) respectively taking the brain-region channel starting times Ts of the left temporal lobe channel D23, the right temporal lobe channel A09 and the prefrontal lobe channel B26 at the reaction starting point Ts; if Ts(D23) - Ts(B26) > 0 or Ts(D23) - Ts(A09) > 0, and the following condition is also satisfied,
Figure FDA0003638797540000021
then this time point is recorded as the electroencephalogram micro-expression occurrence time Tsm; if the condition is not met, return to step 3-2);
3-5) obtaining the electroencephalogram micro-expression occurrence time Tsm under the electroencephalogram data.
2. The method for detecting the occurrence of the micro-expression according to claim 1, wherein the normal electroencephalogram data and the normal facial video data of the tested person are recorded, and the specific method for recording the electroencephalogram data and the facial video data in the step 1) comprises the following steps:
acquiring and recording the normal electroencephalogram data and the electroencephalogram data from 128 electrodes at a sampling rate of 1024 Hz using a Biosemi Active system; acquiring and recording the normal facial video data and the facial video data at a rate of 80 frames per second by the high-speed camera of the Biosemi Active system.
3. The micro-expression occurrence detection method of claim 1, wherein the specific method for marking the timestamp data for each segment of electroencephalogram data and each frame of facial video data in step 2) is as follows:
the time synchronization module of the Biosemi Active system is utilized to synchronously transmit the timestamp data in the time synchronization module to the electroencephalogram acquisition module and the high-speed camera acquisition module of the Biosemi Active system, so that each segment of electroencephalogram data acquired by the electroencephalogram acquisition module and each frame of facial video data acquired by the high-speed camera comprise the synchronous timestamp data, namely, the electroencephalogram time data and the facial video time data are generated.
4. The method for detecting occurrence of micro-expression according to claim 1, wherein the step 3) of processing the face video data and the face video time data comprises the following specific steps:
3-6) detecting the human face, and detecting the specific position of the human face from the original image of each frame;
3-7) carrying out face alignment and face reference point positioning on the face in the face video acquisition; automatically positioning key characteristics of the face by adopting a CLM local constraint model according to an input face image;
3-8) extracting deformation-based expression features: labeling the facial feature points with the CLM, obtaining the coordinates of the facial reference points, and calculating the slope information between the facial reference points to extract deformation-based expression features; at the same time, tracking the key points in the three regions, extracting the corresponding displacement information, extracting the distance information between specific feature points of each expression picture, and subtracting the corresponding distances of the calm picture to obtain the distance-change information, thereby extracting movement-based expression features;
3-9) obtaining the start-stop frames and top frame from the extracted facial feature data: setting a threshold R for the difference k between the feature-point distances of the current picture and those of the calm picture; the first frame with k > R is judged to be the start frame; among the frames after the start frame, the frame with the maximum k value is judged to be the top frame, and the first subsequent frame with k < R is taken as the end frame.
5. The micro-expression occurrence detection method of claim 4, wherein the step 3-6) of detecting the human face comprises the specific steps of:
3-6-1) extracting a response image by adopting a Local Binary Pattern (LBP);
3-6-2) processing the response image with the AdaBoost algorithm to separate the face region; the LBP algorithm first scans every pixel of the original image line by line, binarizes the eight neighboring points in the 3 × 3 window around each pixel using that pixel's gray value as the threshold, concatenates them in order into an 8-bit binary number, and takes the value of this binary number, 0-255, as the response of that point.
6. The micro-expression occurrence detection method according to claim 5, wherein the specific steps of automatically positioning key features of the face by using the CLM local constraint model according to the input face image in the step 3-7) are as follows:
3-7-1) modeling the shape of the face: for M pictures, each with N feature points, the coordinates of each feature point being (xi, yi), the vector formed by the coordinates of the N feature points of one image is written x = [x1 y1 x2 y2 … xN yN]^T; the mean face coordinates over all images are then:
x̄ = (1/M) Σ xi, i = 1, 2, …, M
calculating the difference between the shape of each sample image and the mean face coordinates to obtain the zero-mean shape-variation matrix X; applying PCA (principal component analysis) to X to obtain the principal components of face variation, with eigenvalues λi and corresponding eigenvectors pi; selecting the eigenvectors corresponding to the k largest eigenvalues to form the orthogonal matrix P = (p1, p2, …, pk); the shape-variation weight vector is b = (b1, b2, …, bk)^T, where each component of b represents the magnitude of the variation along the corresponding eigenvector:
b = P^T (x - x̄)
for any face detection image, the sample shape vector can be expressed as:
x = x̄ + P b
3-7-2) establishing a patch model for each feature point: taking a fixed-size patch region around each feature point and labeling the patch containing the feature point as a positive sample; then cropping a patch of the same size from a non-feature-point region and labeling it as a negative sample; each feature point has r patches in total, which are collected into a vector (x^(1), x^(2), …, x^(r))^T, so that for each image in the sample set there is
{(x^(i), y^(i)), i = 1, 2, …, r}
where y^(i) ∈ {-1, 1}, i = 1, 2, …, r; y^(i) = 1 marks a positive sample and y^(i) = -1 marks a negative sample; the trained linear support vector machine is:
f(x) = Σ αi xi^T x + b, i = 1, 2, …, Ms
where xi is a subspace vector of the sample set, αi is a weight coefficient, Ms is the number of support vectors of each feature point, and b is an offset; from this one obtains y^(i) = W^T · x^(i) + θ, where W^T = [W1 W2 … Wn] contains the weight coefficients of the support vectors and θ is the offset;
3-7-3) fitting the face points: a local search is performed over a bounded region around the currently estimated feature-point position, generating a response map R(x, y) for each feature point; a quadratic function is fitted to the response map: assuming R(x, y) attains its maximum at (x0, y0) within the region, the quadratic function r(x, y) = a(x - x0)^2 + b(y - y0)^2 + c can be used to fit this position, where a, b and c are the coefficients of the quadratic function; using least squares, δ = min Σ over (x, y) of [R(x, y) - r(x, y)]^2, the minimum error between R(x, y) and r(x, y) is found; a deformation-constraint cost term is added to form the objective function for the feature-point search, which can be expressed as:
f(p) = Σ Ri(xi, yi) - β Σ bj^2 / λj, i = 1, …, N; j = 1, …, k
optimizing this objective function at each step to obtain new feature-point positions, and iterating the update until the maximum converges.
7. The method for detecting the occurrence of micro expressions according to claim 6, wherein the determination rule for determining whether micro expressions occur in step 4) is as follows:
first judging from the electroencephalogram activity reaction time whether a response has occurred; if so, taking the micro-expression occurrence time Tsm as the starting point and searching within the time threshold TL for a change of expression; if, according to the times of the start and end frames, an expression occurring within 500 ms is found, it is finally judged that the micro-expression has occurred; if no expression occurring within 500 ms appears, it is finally judged that no micro-expression has occurred.
8. The method of claim 7, wherein the time threshold TL is 500ms to 1000 ms.
9. A system for microexpression detection using the detection method of any one of claims 1 to 8, wherein said detection system comprises:
the data acquisition module is used for recording normal electroencephalogram data and normal facial video data before the micro expression is induced by stimulation; recording the electroencephalogram data and the facial video data after the micro expression is induced by stimulation;
the time matching module is used for marking the time stamp of each section of electroencephalogram data and each frame of face video data to generate electroencephalogram time data and face video time data;
the data processing module is used for processing the electroencephalogram data, the electroencephalogram time data, the facial video data and the facial video time data, calculating the electroencephalogram micro-expression occurrence time Tsm, and judging the start-stop frames and top frames of the facial expression change from the facial video data and the facial video time data;
the micro-expression judgment module is used for judging whether the micro-expression has occurred according to the electroencephalogram micro-expression occurrence time Tsm and the start-stop frames and top frames of the facial expression change.
CN202010321480.7A 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system Active CN111611860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010321480.7A CN111611860B (en) 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010321480.7A CN111611860B (en) 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system

Publications (2)

Publication Number Publication Date
CN111611860A CN111611860A (en) 2020-09-01
CN111611860B true CN111611860B (en) 2022-06-28

Family

ID=72204767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010321480.7A Active CN111611860B (en) 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system

Country Status (1)

Country Link
CN (1) CN111611860B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258778B (en) * 2020-10-12 2022-09-06 南京云思创智信息科技有限公司 Micro-expression real-time alarm video recording method
CN112329663B (en) * 2020-11-10 2023-04-07 西南大学 Micro-expression time detection method and device based on face image sequence
CN112949495A (en) * 2021-03-04 2021-06-11 安徽师范大学 Intelligent identification system based on big data

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874672A (en) * 2017-02-17 2017-06-20 北京太阳电子科技有限公司 A kind of method and mobile terminal for showing EEG data
CN106974621A (en) * 2017-03-16 2017-07-25 小菜儿成都信息科技有限公司 A kind of vision induction motion sickness detection method based on EEG signals gravity frequency
CN107798318A (en) * 2017-12-05 2018-03-13 四川文理学院 The method and its device of a kind of happy micro- expression of robot identification face
CN107874756A (en) * 2017-11-21 2018-04-06 博睿康科技(常州)股份有限公司 The precise synchronization method of eeg collection system and video acquisition system
CN109344816A (en) * 2018-12-14 2019-02-15 中航华东光电(上海)有限公司 A method of based on brain electricity real-time detection face action
CN109730701A (en) * 2019-01-03 2019-05-10 中国电子科技集团公司电子科学研究院 A kind of acquisition methods and device of mood data
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data handling procedure, device, computer equipment and storage medium
CN109901711A (en) * 2019-01-29 2019-06-18 西安交通大学 By the asynchronous real-time brain prosecutor method of the micro- expression EEG signals driving of weak Muscle artifacts
CN109934145A (en) * 2019-03-05 2019-06-25 浙江强脑科技有限公司 Mood degree assists method of adjustment, smart machine and computer readable storage medium
CN109984759A (en) * 2019-03-15 2019-07-09 北京数字新思科技有限公司 The acquisition methods and device of individual emotional information
CN110680313A (en) * 2019-09-30 2020-01-14 北京工业大学 Epileptic period classification method based on pulse group intelligent algorithm and combined with STFT-PSD and PCA

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070191691A1 (en) * 2005-05-19 2007-08-16 Martin Polanco Identification of guilty knowledge and malicious intent
NL2001805C2 (en) * 2008-07-15 2010-01-18 Stichting Katholieke Univ Method for processing a brain wave signal and brain computer interface.
CN102906752B (en) * 2010-01-18 2017-09-01 艾欧敏达有限公司 Method and system for the weighted analysis of neurophysiological data
US10285634B2 (en) * 2015-07-08 2019-05-14 Samsung Electronics Company, Ltd. Emotion evaluation
WO2019216504A1 (en) * 2018-05-09 2019-11-14 한국과학기술원 Method and system for human emotion estimation using deep physiological affect network for human emotion recognition

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Emotion Recognition and Dynamic Functional Connectivity Analysis Based on EEG; Xucheng Liu et al.; IEEE Access; 2019-10-02; 143293-143302 *
EEG-Based Vigilance State Detection Method and Experimental Study; Zhang Meiyan; China Master's Theses Full-text Database, Medicine & Health Sciences; 2020-02-15 (No. 02); E080-26 *
Research on Recognition of Emotional States Represented by EEG Signals Based on Deep Belief Networks; Yang Hao et al.; Journal of Biomedical Engineering; 2018-04-25 (No. 02); 182-190 *
Video-Induced Emotion Recognition Based on EEG Signals; Duan Ruonan; China Master's Theses Full-text Database, Information Science & Technology; 2015-07-15 (No. 07); I136-107 *
Effect of Stimulus Style on Accuracy in a "Simulated Reading" Brain-Computer Interface; Jia Bei et al.; Computer & Digital Engineering; 2015-02-20; Vol. 43 (No. 02); 286-290 *

Also Published As

Publication number Publication date
CN111611860A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111611860B (en) Micro-expression occurrence detection method and detection system
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
Xu et al. Microexpression identification and categorization using a facial dynamics map
EP1766555B1 (en) Single image based multi-biometric system and method
Savvides et al. Efficient design of advanced correlation filters for robust distortion-tolerant face recognition
CN109359548A (en) Plurality of human faces identifies monitoring method and device, electronic equipment and storage medium
Khan et al. Saliency-based framework for facial expression recognition
KR102284096B1 (en) System and method for estimating subject image quality using visual saliency and a recording medium having computer readable program for executing the method
MX2013002904A (en) Person image processing apparatus and person image processing method.
CN113139439B (en) Online learning concentration evaluation method and device based on face recognition
Chen et al. WiFace: Facial expression recognition using Wi-Fi signals
US9135562B2 (en) Method for gender verification of individuals based on multimodal data analysis utilizing an individual's expression prompted by a greeting
CN116825365A (en) Mental health analysis method based on multi-angle micro-expression
Phuong et al. An eye blink detection technique in video surveillance based on eye aspect ratio
Bœkgaard et al. In the twinkling of an eye: Synchronization of EEG and eye tracking based on blink signatures
Blumrosen et al. Towards automated recognition of facial expressions in animal models
CN112287863A (en) Computer portrait recognition system
JP3980464B2 (en) Method for extracting nose position, program for causing computer to execute method for extracting nose position, and nose position extracting apparatus
CN116883900A (en) Video authenticity identification method and system based on multidimensional biological characteristics
CN106156775B (en) Video-based human body feature extraction method, human body identification method and device
Liu et al. A3GAN: An attribute-aware attentive generative adversarial network for face aging
Subasic et al. Expert system segmentation of face images
Xie et al. Detection of weak small image target based on brain-computer interface
Park Face Recognition: face in video, age invariance, and facial marks
JP3401511B2 (en) Computer-readable recording medium and image characteristic point extracting apparatus, in which a program for causing a computer to execute the method for extracting characteristic points of an image and the method for extracting the characteristic points of the image are recorded.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant