CN113133765A - Multi-channel fusion slight negative expression detection method and device for flexible electronics - Google Patents

Info

Publication number
CN113133765A
Authority
CN
China
Prior art keywords
slight negative
expression
data
negative expression
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110362355.5A
Other languages
Chinese (zh)
Inventor
谭小慧
庄美琪
邹星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital Normal University
Original Assignee
Capital Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital Normal University filed Critical Capital Normal University
Priority to CN202110362355.5A priority Critical patent/CN113133765A/en
Publication of CN113133765A publication Critical patent/CN113133765A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Psychology (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-channel fusion slight negative expression detection method and device based on flexible electronics, wherein the method comprises the following steps: step S1: collecting expression data and carrying out classified coding on the expression data; step S2: extracting features of the expression data to obtain training data; step S3: inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model; step S4: continuously collecting expression data, inputting it into the trained slight negative expression classification model to obtain an initial slight negative expression classification result, and smoothing the initial result to obtain the final slight negative expression classification result. The method recognizes slight negative expressions and their intensities by detecting facial electromyographic signals, which improves the accuracy and speed of slight negative expression recognition under conditions constrained by lighting, posture and the like, and allows an emotion-adaptive design for the user based on the recognition result.

Description

Multi-channel fusion slight negative expression detection method and device for flexible electronics
Technical Field
The invention relates to the technical field of expression recognition, in particular to a multichannel fusion slight negative expression detection method and device based on flexible electronics.
Background
Expression valence describes how attracted to or repelled by things a person is, and is divided into positive and negative emotions; in psychology, tension, anxiety, anger, depression, sadness, pain and the like are called negative emotions. When negative emotions arise they produce negative effects and can even interfere with daily life and work, so studying and recognizing these emotions with scientific and technological means is of great significance.
In 1971, Ekman et al. studied the six basic human expressions (happiness, sadness, surprise, disgust, fear, anger) and described in detail how the facial muscles change for each expression, defining the Facial Action Coding System (FACS) and its facial Action Units (AUs); most subsequent research on human facial expression has been built on these action units. The action units are grounded in anatomy: the facial muscles are divided into muscle groups that do not interfere with one another, each action unit controls one muscle group, and an action unit appears when the corresponding muscle deforms. The intensity of an action unit is defined on five levels, from weak to strong, labelled A to E, so that action units of different intensities can be combined to represent many different expressions.
In the field of virtual reality, wearable devices such as helmets are usually the medium for human-computer interaction, and monitoring the user's emotion is one of the important ways to improve immersion; however, because of the helmet, only expression information from the upper part of the face can be acquired. The patent "wearable augmented reality remote video system and video call method" proposes an augmented reality smart-glasses system equipped with a fiber scanning projector, a binocular infrared gesture recognition camera, an eye tracker, a binocular forward-looking wide-angle camera and other devices to scan and acquire facial information. Another invention on smart VR glasses based on facial expression recognition places sensing devices inside the glasses to collect electromyographic signals; it points out that small electrodes are chosen, which improves placement flexibility but brings a large skin contact resistance.
Disclosure of Invention
In order to solve the technical problems, the invention provides a multi-channel fusion slight negative expression detection method and device based on flexible electronics.
The technical solution of the invention is as follows: a multi-channel fusion slight negative expression detection method based on flexible electronics comprises the following steps:
step S1: collecting expression data and carrying out classified coding on the expression data;
step S2: extracting features of the expression data to obtain training data;
step S3: inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model;
step S4: continuously collecting expression data, inputting it into the trained slight negative expression classification model to obtain an initial slight negative expression classification result, and smoothing the initial result to obtain the final slight negative expression classification result.
Compared with the prior art, the invention has the following advantages:
1. the invention provides an emotion recognition method based mainly on electromyographic signals from the brow and eye region. It recognizes slight negative expressions and their intensities by detecting facial electromyographic signals, which improves the accuracy and speed of slight negative expression recognition under conditions constrained by lighting, posture and the like, and allows an emotion-adaptive design for the user based on the recognition result, improving immersion and interactivity.
2. the invention collects data with a minimally intrusive flexible electronic device, so the sensing equipment does not alter the user's facial expressions; this improves the user's immersion without increasing the cost of the equipment.
Drawings
FIG. 1 is a flowchart of the multi-channel fusion slight negative expression detection method based on flexible electronics according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a flexible electronic sensor in an embodiment of the invention;
FIG. 3 is a schematic diagram of the distribution of facial action units according to the present invention;
FIG. 4 is a table of 4 types of negative expressions and their corresponding facial action units according to an embodiment of the present invention;
FIG. 5 is a waveform diagram of facial electromyographic signals according to an embodiment of the present invention;
FIG. 6 is a flowchart of step S2 (extracting features from the expression data to obtain training data) in the multi-channel fusion slight negative expression detection method based on flexible electronics according to an embodiment of the present invention;
FIG. 7 is a diagram of facial electromyographic signal characteristic values according to an embodiment of the present invention;
FIG. 8 is a flowchart of step S3 (inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model) in the multi-channel fusion slight negative expression detection method based on flexible electronics according to an embodiment of the present invention;
FIG. 9 is a parameter diagram of a random forest classifier model according to an embodiment of the present invention;
FIG. 10 is a waveform diagram of facial electromyography signals in accordance with an embodiment of the present invention;
FIG. 11 is a waveform diagram of a smoothed facial electromyographic signal according to an embodiment of the present invention;
FIG. 12 is a block diagram of the multi-channel fusion slight negative expression detection device based on flexible electronics according to an embodiment of the present invention.
Detailed Description
The invention provides a multi-channel fusion slight negative expression detection method and device based on flexible electronics, which recognize slight negative expressions and their intensities by detecting facial electromyographic signals and improve the accuracy and speed of slight negative expression recognition under conditions constrained by lighting, posture and the like.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings.
Example one
As shown in fig. 1, the multi-channel fusion slight negative expression detection method based on flexible electronics provided by the embodiment of the invention includes the following steps:
step S1: collecting expression data and carrying out classified coding on the expression data;
step S2: extracting features of the expression data to obtain training data;
step S3: inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model;
step S4: continuously collecting expression data, inputting it into the trained slight negative expression classification model to obtain an initial slight negative expression classification result, and smoothing the initial result to obtain the final slight negative expression classification result.
The invention recognizes slight negative facial expressions based on flexible electronics. The muscle deformation of a slight expression is hard to distinguish with the naked eye: it is weaker than a basic expression, usually occurs when the emotion is deliberately suppressed, and its duration is uncertain and variable. Recognizing slight expressions makes important contributions in fields such as investigation, so to ensure recognition accuracy the invention combines the recognition with electromyographic signals; attaching the sensors to a head-mounted device can also improve human-computer interaction and immersion in virtual reality.
The embodiment of the invention uses a flexible electronic sensor, an ultrathin strain sensor that has great advantages over traditional sensing devices: it withstands a certain degree of bending and stretching and, as shown in FIG. 2, fits closely to the surface of human muscle, adapting to its complex curved surface. It is light and thin and has great potential in fields such as medical health and brain-computer integration. The surface electromyographic signals of the face are the combined effect, on the skin surface, of the electrical activity of superficial muscles and nerve trunks, and to a certain degree they reflect neuromuscular activity. Therefore, when a person's facial expression changes, the facial electromyographic signals change correspondingly.
Because a facial expression is formed by the combination of different muscles, the facial action unit combinations of negative expressions were examined, giving six action units in total. As shown in FIG. 3, they are: AU1, raising the inner part of the brow; AU2, raising the outer part of the brow; AU4, lowering the brow (frowning); AU5, raising the upper eyelid and widening the palpebral fissure; AU6, raising the cheek and tightening the outer ring of the orbicularis oculi; AU7, tightening the eyelids and narrowing the eye opening. Four types of negative expressions (anger, sadness, surprise, fear) can be obtained from different combinations of these six facial action units. Since each expression can be shown to different degrees (for example, extreme anger and slight anger are two extremes of the angry expression), negative expressions are divided according to the five AU intensities (A to E), giving five degrees per negative expression; degrees A to C are taken as slight expressions, 12 slight expressions in total, and together with the neutral expression there are 13 classes. The method detects and classifies these 13 kinds of slight expressions based on electromyographic signals.
In one embodiment, in the step S1, expression data are collected and classification coding is performed on them. The embodiment of the invention uses flexible electronic sensors to collect the expression data; they are highly flexible and conform well to human skin. The data acquisition circuit uses a 3rd-order Butterworth notch filter with a center frequency of 50 Hz and a stop bandwidth of 6 Hz; a 10 Hz high-pass filter removes baseline drift, and a 1 kHz low-pass filter removes high-frequency noise.
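For illustration, a minimal SciPy sketch of this filter chain is given below. The sampling rate FS is an assumption (the text does not state one) and must exceed 2 kHz for the 1 kHz low-pass cut-off to be realizable; the notch is approximated here as a narrow 3rd-order Butterworth band-stop around 50 Hz.

```python
import numpy as np
from scipy import signal

FS = 4000.0  # assumed sampling rate in Hz (not specified in the text)

def build_filters(fs=FS):
    """Filter chain described above: a 3rd-order Butterworth band-stop around
    50 Hz with a ~6 Hz stop bandwidth, a 10 Hz high-pass for baseline drift,
    and a 1 kHz low-pass for high-frequency noise."""
    nyq = fs / 2.0
    notch = signal.butter(3, [47.0 / nyq, 53.0 / nyq], btype="bandstop")
    highpass = signal.butter(3, 10.0 / nyq, btype="highpass")
    lowpass = signal.butter(3, 1000.0 / nyq, btype="lowpass")
    return [notch, highpass, lowpass]

def preprocess(raw_emg, fs=FS):
    """Apply the filter chain channel-wise to raw EMG of shape (I, 6)."""
    out = np.asarray(raw_emg, dtype=float)
    for b, a in build_filters(fs):
        out = signal.filtfilt(b, a, out, axis=0)
    return out
```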
In the embodiment of the present invention, no fewer than 20 collectors take part in producing the training data, and during collection they are taught the facial action units and given a muscle-level explanation of the negative expressions. As shown in fig. 4, the facial action units corresponding to the 4 types of negative expressions are:
angry expression: brow lowered (AU4 active), upper eyelid raised and palpebral fissure widened (AU5 active), lower eyelid raised (AU7 active);
sad expression: inner brow raised (AU1 active), brow lowered (AU4 active), cheek raised and eyelids contracted (AU6 active), lower eyelid raised (AU7 active);
surprised expression: brow raised (AU1 and AU2 active), palpebral fissure widened (AU5 active);
fearful expression: brow raised (AU1 and AU2 active), brow lowered (AU4 active), upper eyelid raised and palpebral fissure widened (AU5 active), lower eyelid raised (AU7 active);
After the collectors have been trained, they wear a helmet fitted with the flexible electronic sensors, and the electromyographic signals are captured by the sensors mounted on the helmet.
According to the five weak-to-strong intensity levels (A to E) of the facial action units, expressions composed of intensities A to C are defined as slight expressions. Divided by the four negative emotions (anger, sadness, surprise and fear), each type of negative expression is subdivided into 3 expressions, giving 13 classes in total (12 slight expressions plus the neutral expression), coded 1 to 13: anger (its 3 slight expressions are coded 1 to 3 from weak to strong); sadness (coded 4 to 6 from weak to strong); surprise (coded 7 to 9 from weak to strong); fear (coded 10 to 12 from weak to strong); the neutral expression is coded 13.
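Purely as an illustration of this 13-class coding, the mapping can be written as a small lookup table; the emotion and intensity identifiers below are hypothetical names, not part of the patent.

```python
# 4 negative emotions x 3 slight intensity levels (A-C) = 12 classes,
# plus the neutral expression = 13 classes in total.
EMOTIONS = ["anger", "sadness", "surprise", "fear"]
INTENSITIES = ["A", "B", "C"]  # slight levels, weak to strong

CODES = {(emo, lvl): 3 * e + i + 1
         for e, emo in enumerate(EMOTIONS)
         for i, lvl in enumerate(INTENSITIES)}
CODES[("neutral", "-")] = 13

# e.g. CODES[("anger", "A")] == 1 and CODES[("fear", "C")] == 12
```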
As shown in FIG. 5, the collected data form an array E[I][6], where I is the number of sampling points in the time domain and 6 is the number of signal channels.
As shown in fig. 6, in one embodiment, the step S2 of performing feature extraction on the expression data to obtain training data includes:
step S21: calculating the integrated EMG value of the samples at intervals of a preset number of sampling points;
the collected expression data are fed into a script that traverses E[I][6] and, with a stride of 50 sampling points, computes the integrated EMG value (IEMG) over each window of 200 sampling points [A, B] (where B = A + 200). The calculation formula (1) is:
N[n][j] = \sum_{i=A}^{B} \left| E[i][j] \right| \qquad (1)
where A is the lower end of the sampling window, A ∈ {0, 50, 100, ...}, B is the upper end of the window (B = A + 200), j ∈ [0, 5] indexes the signal channels of the 6 action units AU1, AU2 and AU4 to AU7, and N[n][j] is the array of integrated EMG values, with n the index of the sampling window.
Step S22: calculating the root-mean-square (RMS) value of the samples. The calculation formula (2) is:
M[n][j] = \sqrt{ \frac{1}{200} \sum_{i=A}^{B} E[i][j]^{2} } \qquad (2)
where M[n][j] is the array of RMS values and n is the index of the sampling window.
Step S23: combining the integral myoelectricity value and the root-mean-square effective value to obtain a training array;
after the processing, the integral myoelectric value N [ N ] [ j ] and the root-mean-square effective value M [ N ] [ j ] are horizontally combined to obtain a training array D [ N ] [12], wherein 1-6 columns of the D [ N ] [12] are integral myoelectric value arrays of 6 channels, 7-12 columns of the D [ N ] [12] are root-mean-square effective value arrays of 6 channels, and each row represents the statistical parameter characteristic of one sampling interval. As shown in fig. 7, the electromyographic signal visualization line graph is a visualization line graph of an expression, and is a visualization line graph obtained by extracting an integral electromyographic value from the data of fig. 5, so that the characteristics of the data can be clearly seen.
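A minimal NumPy sketch of steps S21 to S23, using the 200-point window and 50-point stride given above, could look like the following; the function and variable names are illustrative.

```python
import numpy as np

WINDOW = 200  # sampling points per window [A, B]
STRIDE = 50   # step between successive windows

def extract_features(E):
    """Compute per-channel IEMG (formula (1)) and RMS (formula (2)) over
    sliding windows of E[I][6], then stack them into D[n][12]."""
    E = np.asarray(E, dtype=float)
    starts = range(0, len(E) - WINDOW + 1, STRIDE)
    iemg = np.array([np.abs(E[a:a + WINDOW]).sum(axis=0) for a in starts])          # N[n][6]
    rms = np.array([np.sqrt((E[a:a + WINDOW] ** 2).mean(axis=0)) for a in starts])  # M[n][6]
    return np.hstack([iemg, rms])  # D[n][12]: columns 1-6 IEMG, 7-12 RMS
```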
Step S24: and denoising the training array, and marking the corresponding category of the slight negative expression of each datum in the training array to obtain final training data.
To ensure the quality of the training data, the data must be denoised. Because interference before each acquisition can make the earliest samples fluctuate, clearly noisy rows with large fluctuations at the head and tail of a recording are deleted by setting a threshold.
After the noisy data are deleted, the training array D is labelled. Each row of D holds the statistics of one sampling window, so every row is labelled with the number of the corresponding slight negative expression, giving the label array T[n][1], where n is the number of sampling windows; for example, a row belonging to the neutral expression is labelled 13. All data are thus assigned to the 13 classes. T[n][1] has the same row meaning as D but contains only a single label column.
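One possible realisation of this trimming-and-labelling step is sketched below; the deviation measure and the threshold are assumptions, since the text only states that a threshold is used to drop noisy head and tail data.

```python
import numpy as np

def trim_noisy_edges(D, threshold):
    """Drop leading and trailing rows of D[n][12] whose deviation from the
    recording's median exceeds `threshold` (deviation measure assumed)."""
    deviation = np.abs(D - np.median(D, axis=0)).max(axis=1)
    keep = deviation <= threshold
    if not keep.any():
        return D[:0]
    first = np.argmax(keep)
    last = len(keep) - np.argmax(keep[::-1]) - 1
    return D[first:last + 1]

def label_rows(D, expression_code):
    """Build T[n][1]: every row of one recording gets the same code (1-13)."""
    return np.full((len(D), 1), expression_code, dtype=int)
```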
As shown in fig. 8, in one embodiment, the step S3 of inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model includes:
step S31: inputting training data into a slight negative expression classification model for training, wherein the slight negative expression classification model adopts a random forest classifier model;
A random forest handles high-dimensional data well; because both the variables and the samples are drawn at random, it is unlikely to overfit, and it is also robust to noise and fast to train. The embodiment of the invention therefore adopts a random forest classifier model. A random forest is an ensemble of tree-structured classifiers {h(x, Θ_k), k = 1, ...}, where the Θ_k are independent, identically distributed random vectors; for an input x each tree outputs a class, and the class with the most votes is selected.
D[n][12] and T[n][1] are input into the random forest classifier model. When generating each decision tree, the model draws n samples from D[n][12] at random with replacement, so for each tree about one third of the samples are never drawn; the probability that a sample is left out is given by formula (3):
\lim_{n \to \infty} \left( 1 - \frac{1}{n} \right)^{n} = e^{-1} \approx 0.368 \qquad (3)
These left-out samples form the out-of-bag (oob) set of each tree and are used to compute the oob error rate, which replaces the commonly used k-fold cross validation.
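A minimal scikit-learn sketch of step S31, using the out-of-bag estimate instead of k-fold cross validation, might read as follows; D and T are the arrays built above, and the parameter values are placeholders.

```python
from sklearn.ensemble import RandomForestClassifier

# Bootstrap sampling leaves roughly 1/3 of the rows out of each tree
# (formula (3)); oob_score=True uses exactly those rows for validation.
clf = RandomForestClassifier(n_estimators=200, bootstrap=True,
                             oob_score=True, random_state=0)
clf.fit(D, T.ravel())
print("out-of-bag error rate:", 1.0 - clf.oob_score_)
```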
Step S32: optimizing the parameters of the random forest classifier model by grid search until the recognition accuracy reaches a threshold, to obtain the trained slight negative expression classification model.
The training array D and its label array T are input into the random forest classifier, and the parameters of the slight negative expression classification model are optimized by grid search to find the classifier with the highest classification accuracy; the tuned parameters are shown in FIG. 9.
First, the optimal number of trees in the forest, i.e. the number of base estimators (n_estimators), is found. The larger this parameter, the better the model tends to perform, but beyond a problem-dependent point the accuracy of the random forest no longer increases while the computation and memory costs grow and training slows down, so a suitable n_estimators is found by balancing training cost against classification accuracy.
After n_estimators is fixed, the parameters of the base estimator are tuned. First the splitting criterion (criterion): its possible values are gini and entropy, and the better one is found by trying both.
Next, the maximum number of features considered at each split (max_features) is tuned: a value range is set (general range: 28-47) and traversed in steps of 1 to find a suitable max_features.
Then the maximum depth (max_depth) is tuned: a value range is set (general range: 10-100) and traversed in steps of 10 to find a suitable max_depth.
Then the minimum number of samples required to split a node (min_samples_split) is tuned: a value range is set (general range: 2-11) and traversed in steps of 1 to find a suitable min_samples_split.
The minimum number of samples in a leaf node (min_samples_leaf) is tuned: a value range is set (general range: 1-10) and traversed in steps of 1 to find a suitable min_samples_leaf.
Finally, the maximum number of leaf nodes (max_leaf_nodes) is tuned: a value range is set (general range: 2500-) and traversed to find a suitable max_leaf_nodes.
By tuning these parameters, the most suitable number of base estimators and the optimal estimator parameters are found, so that the recognition accuracy reaches the preset threshold and a trained slight negative expression classification model with high recognition accuracy is obtained; a grid-search sketch of this tuning is given below.
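The step-wise tuning above can be collapsed into a single scikit-learn grid search, as sketched here; the concrete grid values are illustrative and coarser than the full "general ranges" to keep the search tractable, and max_features and max_leaf_nodes are omitted for brevity. Unlike the text, which tunes one parameter at a time, GridSearchCV explores the whole grid jointly.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "criterion": ["gini", "entropy"],
    "max_depth": [10, 30, 50, 70, 100],
    "min_samples_split": [2, 5, 8, 11],
    "min_samples_leaf": [1, 3, 5, 10],
}
search = GridSearchCV(RandomForestClassifier(n_estimators=200, random_state=0),
                      param_grid, cv=3, n_jobs=-1)
search.fit(D, T.ravel())
print(search.best_params_, search.best_score_)
```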
In one embodiment, the step S4 of smoothing the classification result of the slight negative expression includes:
and deleting the data with wrong classification in the classification result of the slight negative expression within a preset time threshold.
In step S4, expression data are collected continuously so that the electromyographic signals of real-time slight negative expressions are obtained and processed in real time, and the detected data are input into the slight negative expression classification model to obtain the initial slight negative expression classification result. Because the random forest classifier used by the slight negative expression classification model cannot classify every window correctly, this result is unsmoothed data, and within the time span of a single expression there may be "skipped frames", i.e. windows classified into a wrong class. A frame threshold is therefore set and the classification results within a certain time interval are smoothed so that the processed result is meaningful as continuous data; the smoothed value is computed for every frame. For continuous expression classification, data smoothing is required: as shown in fig. 10, the data are smoothed within a certain time threshold and the windows identified as erroneous are removed. Let R[n][1] be the initial slight negative expression classification result. Considering that a micro expression lasts 1/25 s to 1/5 s, that slight expressions have no strict definition, and that the duration of one expression cannot be guaranteed, one second is divided into 5 frames, with 40 data points per frame. The sliding window of the smoothing process is set to 20, and the smoothed value P_t of each frame is computed by formula (4):
P_t = \frac{1}{20} \sum_{i=t}^{t+19} R[i][1] \qquad (4)
The result after smoothing is shown in fig. 11. P_t is the final slight negative expression classification result.
The invention provides an emotion recognition method based mainly on electromyographic signals from the brow and eye region. It recognizes slight negative expressions and their intensities by detecting facial electromyographic signals, which improves the accuracy and speed of slight negative expression recognition under conditions constrained by lighting, posture and the like, and allows an emotion-adaptive design for the user based on the recognition result, improving immersion and interactivity.
Example two
As shown in fig. 12, an embodiment of the present invention provides a multi-channel fusion slight negative expression detection device based on flexible electronics, comprising the following modules:
the expression data acquisition module 51 is used for acquiring expression data and performing classified coding on the expression data;
the extracted data feature module 52 is configured to perform feature extraction on the expression data to obtain training data;
the training slight negative expression classification model module 53 is used for inputting training data into the slight negative expression classification model for training to obtain a trained slight negative expression classification model;
and the expression classification module 54 is configured to continuously collect expression data, input them into the trained slight negative expression classification model to obtain an initial slight negative expression classification result, and smooth the initial result to obtain the final slight negative expression classification result.
In an embodiment, the expression data acquisition module 51 comprises:
the flexible electronic sensor is used for acquiring electromyographic signals of the face;
and the head-mounted equipment is used for placing the flexible electronic sensors on the corresponding facial muscles.
In the embodiment of the invention, the flexible electronic sensors are attached to the head-mounted equipment. The sensor is an ultrathin strain sensor: compared with traditional sensing devices it has great advantages, withstands a certain degree of bending and stretching, fits closely to the surface of human muscle and adapts to its complex curved surface. When a person's facial expression changes, the facial electromyographic signals change correspondingly, so the flexible electronic sensor detects the change and transmits the facial electromyographic signals to the head-mounted equipment.
The head-mounted equipment is worn on the head, which places the built-in flexible electronic sensors on the corresponding facial muscles. Both the position of the equipment and the positions of the flexible electronic sensors can be adjusted to fit the facial muscles of different people. The head-mounted equipment sends the facial electromyographic signals returned by the flexible electronic sensors to the data feature extraction module for subsequent processing.
The embodiment of the invention collects data with a minimally intrusive flexible electronic device, so the sensing equipment does not alter the user's facial expressions; this improves the user's immersion without increasing the cost of the equipment.
The above examples are provided only for the purpose of describing the present invention, and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and are intended to be within the scope of the invention.

Claims (6)

1. A multi-channel fusion slight negative expression detection method based on flexible electronics is characterized by comprising the following steps:
step S1: collecting expression data and carrying out classified coding on the expression data;
step S2: extracting features of the expression data to obtain training data;
step S3: inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model;
step S4: continuously collecting expression data, inputting it into the trained slight negative expression classification model to obtain an initial slight negative expression classification result, and smoothing the initial result to obtain the final slight negative expression classification result.
2. The method for detecting the multi-channel fusion slight negative expression based on the flexible electronics as claimed in claim 1, wherein the step S2 of performing feature extraction on the expression data to obtain training data comprises the following steps:
step S21: calculating the integral myoelectricity value of the sample by taking a preset number of sampling points as an interval;
step S22: calculating a root mean square effective value of the samples;
step S23: combining the integral myoelectricity value and the root-mean-square effective value to obtain a training array;
step S24: and denoising the training array, and marking the corresponding category of the slight negative expression of each datum in the training array to obtain final training data.
3. The method for detecting the multi-channel fusion slight negative expression based on the flexible electronics as claimed in claim 1, wherein the step S3 of inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model comprises the following steps:
step S31: inputting the training data into a slight negative expression classification model for training, wherein the slight negative expression classification model adopts a random forest classifier model;
step S32: optimizing parameters of the random forest classifier model by a grid searching method until the recognition accuracy reaches a threshold value, to obtain the trained slight negative expression classification model.
4. The method for detecting the multi-channel fused slight negative expression based on the flexible electronics as claimed in claim 1, wherein the step S4 of smoothing the classification result of the slight negative expression comprises:
and deleting the data with wrong classification in the classification result of the slight negative expression within a preset time threshold.
5. The multi-channel fusion slight negative expression detection device based on the flexible electronics is characterized by comprising the following modules:
the expression data acquisition module is used for acquiring expression data and performing classified coding on the expression data;
the data feature extraction module is used for extracting features of the expression data to obtain training data;
the training slight negative expression classification model module is used for inputting the training data into a slight negative expression classification model for training to obtain a trained slight negative expression classification model;
and the expression classification module is used for continuously collecting expression data, inputting it into the trained slight negative expression classification model to obtain an initial slight negative expression classification result, and smoothing the initial result to obtain the final slight negative expression classification result.
6. The device of claim 5, wherein the expression data collecting module comprises:
the flexible electronic sensor is used for acquiring electromyographic signals of the face;
and the head-mounted equipment is used for placing the flexible electronic sensor on the corresponding human face muscle.
CN202110362355.5A 2021-04-02 2021-04-02 Multi-channel fusion slight negative expression detection method and device for flexible electronics Pending CN113133765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110362355.5A CN113133765A (en) 2021-04-02 2021-04-02 Multi-channel fusion slight negative expression detection method and device for flexible electronics

Publications (1)

Publication Number Publication Date
CN113133765A (en) 2021-07-20

Family

ID=76811423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110362355.5A Pending CN113133765A (en) 2021-04-02 2021-04-02 Multi-channel fusion slight negative expression detection method and device for flexible electronics

Country Status (1)

Country Link
CN (1) CN113133765A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180107275A1 (en) * 2015-04-13 2018-04-19 Empire Technology Development Llc Detecting facial expressions
JP2017029323A (en) * 2015-07-30 2017-02-09 Kddi株式会社 Device, terminal, and program for identifying face expression by using myoelectric signal
JP2017140198A (en) * 2016-02-09 2017-08-17 Kddi株式会社 Apparatus for identifying facial expression with high accuracy by using myoelectric signal, and device, program and method thereof
WO2017184274A1 (en) * 2016-04-18 2017-10-26 Alpha Computing, Inc. System and method for determining and modeling user expression within a head mounted display
CN106774906A (en) * 2016-12-22 2017-05-31 南京邮电大学 A kind of rehabilitation robot interactive control method based on Emotion identification
US20180239956A1 (en) * 2017-01-19 2018-08-23 Mindmaze Holding Sa Systems, methods, devices and apparatuses for detecting facial expression
US20190138096A1 (en) * 2017-08-22 2019-05-09 Silicon Algebra Inc. Method for detecting facial expressions and emotions of users
US20200327312A1 (en) * 2019-04-10 2020-10-15 Industry University Cooperation Foundation Hanyang University Electronic device, avatar facial expression system and controlling method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20210720)