CN112472088A - Emotional state evaluation method and device, intelligent terminal and storage medium - Google Patents

Emotional state evaluation method and device, intelligent terminal and storage medium

Info

Publication number
CN112472088A
CN112472088A
Authority
CN
China
Prior art keywords
video
emotional state
facial
score
emotional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011139913.3A
Other languages
Chinese (zh)
Other versions
CN112472088B (en)
Inventor
周永进
武帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202011139913.3A priority Critical patent/CN112472088B/en
Publication of CN112472088A publication Critical patent/CN112472088A/en
Application granted granted Critical
Publication of CN112472088B publication Critical patent/CN112472088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7246Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Child & Adolescent Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an emotional state assessment method and device, an intelligent terminal and a storage medium, wherein the method comprises the following steps: acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video; inputting the target face video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target face video, wherein the emotional state scoring model is trained on the basis of the correspondence between facial features and emotional state scores; and recording the emotional state score together with the corresponding facial features. Because the emotional state score corresponding to the facial features in the face video is determined through the preset emotional state scoring model, the score is obtained objectively and conveniently, interference from human factors is avoided, and an objective basis is provided for subsequent analysis and judgment.

Description

Emotional state evaluation method and device, intelligent terminal and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an emotional state assessment method and device, an intelligent terminal and a storage medium.
Background
Emotion plays a great role in people's lives and, to a large extent, influences their thinking, decisions and behavior. With the increase of social competitive pressure, people who face heavy mental pressure and stay in a bad mood for a long time are prone to insomnia and to a higher incidence of psychological diseases such as anxiety neurosis and depression, which threatens their health and even their lives. Therefore, for people who easily lose control of their emotions, emotion recognition can be used to find emotional abnormalities early, so as to help relieve mental stress and improve physical and mental health.
In the prior art, emotional state assessment mainly relies on interview observation, monitoring equipment that measures physiological parameters, scale scoring and the like. Interview observation is essentially a subjective judgment made through questioning; with monitoring equipment, the physiological parameters are easily affected by the mental or emotional state, which distorts them, for example, measuring blood pressure with a sphygmomanometer can itself cause anxiety and raise the blood pressure; scale scoring takes a long time, and its accuracy is affected by human experience and subjective judgment.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The invention provides an emotional state assessment method, an emotional state assessment device, an intelligent terminal and a storage medium, aiming to solve the problem that emotional state assessment methods in the prior art are influenced by human experience and subjective judgment.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides an emotional state assessment method, where the method includes:
acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video;
inputting the target face video into a preset emotion state score model to obtain an emotion state score corresponding to the face feature of the target face video, wherein the emotion state score model is trained on the basis of the corresponding relation between the face feature and the emotion state score;
and recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
In one implementation, the pre-processing the face video includes:
after the face video of the person to be tested is obtained, filtering the face video part, screening out a video segment without a face image, and cutting off the video segment without the face image to obtain the target face video.
In one implementation, the inputting the target face video into a preset emotional state score model to obtain an emotional state score corresponding to a facial feature of the target face video includes:
after the target face video is input into the emotion state scoring model, the emotion state scoring model carries out face feature extraction on the target image;
acquiring the facial features, and acquiring emotion state scores corresponding to the facial features by using the emotion state scoring model;
and outputting the emotional state score and the corresponding facial features.
In one implementation, the emotional state scoring model is generated by:
collecting a video sample, wherein the video sample comprises facial images with various emotional expressions;
acquiring the facial image from the video sample, and acquiring facial features according to the facial image, wherein the facial features are used for reflecting emotional expressions corresponding to the facial image;
determining an emotional state score corresponding to the facial feature according to a preset scoring table, and generating a corresponding relation between the facial feature and the emotional state score;
and inputting the corresponding relation into a preset network model for training to generate the emotional state scoring model.
In one implementation, the facial features include emotional features, facial keypoint motion features, and facial motion parts of the facial image.
In one implementation, the score table stores a correspondence between various facial features and emotional state scores corresponding to each facial feature, and the correspondence is a one-to-one correspondence.
In one implementation, the preset network model includes any one of a free support vector machine model, a machine learning model, and a neural network learning model.
In a second aspect, an embodiment of the present invention further provides an emotional state assessment apparatus, where the apparatus includes:
the video acquisition unit is used for acquiring a face video of a person to be tested and preprocessing the face video to obtain a target face video;
the score determining unit is used for inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the face feature of the target face video, and the emotion state scoring model is trained on the basis of the corresponding relation between the face feature and the emotion state score;
and the data recording unit is used for recording the emotional state scores and correspondingly recording the facial features corresponding to the emotional state scores.
In a third aspect, the present invention also provides an intelligent terminal, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs configured to be executed by the one or more processors comprise instructions for performing the emotional state assessment method according to any one of the above aspects.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the emotional state assessment method according to any one of the above.
The invention has the following beneficial effects: according to the method, the emotional state score corresponding to the facial features in the face video is determined through the preset emotional state scoring model, and after the face video of the person to be tested is input into the emotional state scoring model, the corresponding emotional state score can be obtained directly.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an emotional state assessment method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a generation flow of an emotional state scoring model in the emotional state assessment method according to the embodiment of the present invention.
Fig. 3 is a schematic block diagram of an emotional state assessment apparatus according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
It should be noted that, if directional indications (such as up, down, left, right, front, back, and the like) are involved in the embodiments of the present invention, the directional indications are only used to explain the relative positional relationship between the components, the movement situation, and the like in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indications change accordingly.
In the prior art, assessment of the mental and emotional state mainly relies on interview observation, monitoring equipment that measures physiological parameters, scale scoring and the like. Interview observation is essentially a subjective judgment made through questioning; with monitoring equipment, the physiological parameters are easily affected by the mental or emotional state, which distorts them, for example, measuring blood pressure with a sphygmomanometer can itself cause anxiety and raise the blood pressure; scale scoring takes a long time, and its accuracy is affected by human experience and subjective judgment.
Therefore, in order to solve the above problems in the prior art, embodiments of the present invention provide an emotional state assessment method. In the embodiments of the present invention, the emotional state score corresponding to the facial features in a face video is determined mainly by a preset emotional state scoring model, and once the face video of the person to be tested is input into the emotional state scoring model, the corresponding emotional state score can be obtained directly.
With the progress of medical concepts, clinicians pay more and more attention to psychological problems caused by physiological diseases. Actively providing psychological treatment according to the symptoms helps to alleviate the disease, improves patients' compliance during treatment and improves their quality of life; otherwise, recovery is hindered. Emotion, as a high-level brain function, reflects a person's cognitive and mental state. A computer-aided system is therefore used to assess the person's mental and emotional state, so as to provide the physician with objective and accurate information about that state. This embodiment provides an emotional state assessment method based on such a computer-aided system; the method can be applied to an intelligent terminal and, as shown in fig. 1, includes the following steps:
s100, obtaining a face video of a person to be tested, and preprocessing the face video to obtain a target face video.
In this embodiment, the emotional state score corresponding to the facial features in the face video needs to be determined from the face video, so the face video of the person to be tested is acquired first and then preprocessed. In a specific implementation, a video capturing device, for example a photographing device, may be used to capture the face video of the person to be tested. The captured face video may contain video segments without a face image, while only the segments containing a face image actually need to be analyzed; invalid segments without a face image reduce the processing efficiency of the subsequent steps. Therefore, in order to make the face video cleaner (i.e., free of segments without a face image), this embodiment screens out the video segments without a face image from the face video and cuts them off to obtain the target face video. The target face video contains only segments with a face image, so facial features can be extracted more quickly when feature extraction is performed on it in the subsequent steps. In one embodiment, after the face video is acquired, it may also be subjected to noise reduction processing to improve its quality.
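As a concrete illustration of this preprocessing (not part of the claimed method), a minimal sketch assuming OpenCV (cv2) and its bundled frontal-face Haar cascade might look as follows; the file paths, codec and detector thresholds are illustrative choices only.

```python
# Minimal sketch of step S100: keep only the frames that contain a detectable face.
import cv2

def preprocess_face_video(src_path: str, dst_path: str) -> None:
    """Write only the frames that contain a detectable face (the target face video)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (3, 3), 0)   # optional noise reduction
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:                          # cut off segments without a face image
            writer.write(frame)
    cap.release()
    writer.release()
```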
Step S200, inputting the target face video into a preset emotion state score model to obtain an emotion state score corresponding to the face feature of the target face video, wherein the emotion state score model is formed by training based on the corresponding relation between the face feature and the emotion state score.
The emotional state scoring model in this embodiment is preset and can be directly invoked. And after the target face video is input into the emotion state scoring model, the emotion state scoring model carries out face feature extraction on the target image. And then, acquiring the facial features, and acquiring emotion state scores corresponding to the facial features by using the emotion state scoring model. Since the emotional state scoring model in this embodiment is trained based on the correspondence between the facial features and the emotional state scores, the correspondence between the facial features and the emotional state scores inevitably exists, and therefore, the emotional state scores can be automatically determined by the emotional state scoring model. After the emotional state score is determined, the embodiment outputs the emotional state score and the corresponding facial features.
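A minimal sketch of this inference step is given below. It assumes the trained scoring model has been serialized with joblib and that a feature-extraction helper such as the one sketched further below is available; the file name and function names are hypothetical.

```python
# Illustrative sketch of step S200: load the preset scoring model and score the target face video.
import joblib

def score_emotional_state(target_video_path: str,
                          model_path: str = "emotion_score_model.joblib"):
    model = joblib.load(model_path)                        # preset emotional state scoring model
    features = extract_facial_features(target_video_path)  # facial feature extraction
    score = float(model.predict([features])[0])            # score corresponding to the features
    return score, features
```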
In one embodiment, the emotional state scoring model is generated through the following steps, as shown in fig. 2:
step S201, collecting a video sample, wherein the video sample comprises facial images with various different emotional expressions;
step S202, acquiring the facial images from the video sample, and acquiring facial features according to the facial images, wherein the facial features are used for reflecting the emotional expressions corresponding to the facial images;
step S203, determining an emotional state score corresponding to the facial features according to a preset scoring table, and generating a corresponding relation between the facial features and the emotional state score;
and step S204, inputting the corresponding relation into a preset network model for training to generate the emotional state scoring model.
Specifically, a video sample is collected first, the video sample comprising facial images with various different emotional expressions; the facial images are obtained from the video sample, and facial features are obtained from the facial images, the facial features being used to reflect the emotional expressions corresponding to the facial images. For example, the emotional expressions may be: normal (neutral), happy, fear, surprise, disgust, sadness and anger, and each emotional expression has its own facial features. Then, the emotional state score corresponding to the facial features is determined according to a preset scoring table, and a correspondence between the facial features and the emotional state score is generated. In a specific implementation, the scoring table is preset and stores various facial features together with the emotional state score corresponding to each of them. After the facial features are obtained from the facial images, the emotional state scores corresponding to them can be determined from the scoring table; a correspondence is then established between the determined emotional state scores and the facial features, and the correspondence is input into a preset network model for training to generate the emotional state scoring model.
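The sketch below shows one way the (facial feature, emotional state score) correspondence could be assembled from labeled sample videos; extract_facial_features() and label_of() are hypothetical helpers, and score_table is a preset mapping such as the illustrative one shown after the next paragraph.

```python
# Assemble the training correspondence between facial features and emotional state scores.
def build_training_pairs(sample_videos, score_table):
    X, y = [], []
    for video_path in sample_videos:
        features = extract_facial_features(video_path)   # reflect the emotional expression
        score = score_table[label_of(video_path)]        # preset scoring table lookup
        X.append(features)
        y.append(score)
    return X, y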
In one embodiment, the correspondence in the scoring table between the various facial features and the emotional state score corresponding to each facial feature may be a one-to-one correspondence. The scoring table may be a Hamilton anxiety scale, in which facial features corresponding to Hamilton anxiety symptoms and the emotional state score corresponding to each facial feature are stored, and the trained emotional state scoring model can then automatically obtain, from a face video, the facial features corresponding to Hamilton anxiety symptoms together with their emotional state scores. In order to determine the emotional state score in different scenarios or for different pathologies, the scoring table can be replaced; for example, the Hamilton anxiety scale can be replaced by another scale for evaluating emotional states, such as the Hamilton Depression Rating Scale (HAMD-17), or by a scale for evaluating cognitive states, such as the Unified Parkinson's Disease Rating Scale (UPDRS). In this way, the application scenarios of this embodiment can be extended, and emotional state scoring of facial features for different diseases can be realized.
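Purely as an illustration of such a preset scoring table, a toy mapping from expression labels to scores might look like the following; a real deployment would take its items and values from the chosen clinical scale (e.g. the Hamilton anxiety scale, HAMD-17 or UPDRS), not from this sketch.

```python
# Illustrative stand-in for a preset scoring table; the values are not from any clinical scale.
SCORE_TABLE = {
    "neutral": 0,
    "happy": 0,
    "surprise": 1,
    "fear": 2,
    "sadness": 2,
    "disgust": 2,
    "anger": 3,
}
```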
In one embodiment, the facial features include emotional features of the facial image, facial key point motion features, and facial motion parts. Facial features of multiple dimensions are extracted from the face video so as to train a more accurate emotional state scoring model. Specifically, the emotional features in this embodiment include seven basic emotions: normal (neutral), happy, fear, surprise, disgust, sadness and anger. The face video may also contain a distribution of mixed emotions, i.e., a combination of two or more of the basic emotions, and the emotional features may further include the change of each emotion over time. The facial key point motion features in this embodiment include single or multiple frequency features of the facial key points and single or multiple motion trajectory features of the facial key points. The facial motion part may likewise be a single feature or a combination of two or more features. In this way, the facial features cover multiple dimensions, so that a more accurate emotional state scoring model can be trained.
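To make the multi-dimensional features more concrete, the sketch below extracts facial key points per frame with MediaPipe FaceMesh and summarizes their motion over time. The concrete feature vector is an assumption made for illustration, not the feature definition of this patent, and it assumes at least two frames of the video contain a detectable face.

```python
# Sketch of key point extraction and simple motion features over a face video.
import cv2
import numpy as np
import mediapipe as mp

def extract_facial_features(video_path: str) -> np.ndarray:
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    tracks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            pts = result.multi_face_landmarks[0].landmark
            tracks.append([(p.x, p.y) for p in pts])
    cap.release()
    mesh.close()
    track = np.asarray(tracks)                                  # (frames, key points, 2)
    motion = np.linalg.norm(np.diff(track, axis=0), axis=-1)    # frame-to-frame key point motion
    # Example feature vector: mean key point position plus motion statistics
    return np.concatenate([track.mean(axis=(0, 1)), [motion.mean(), motion.std()]])
```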
Of course, when training the emotional state scoring model, this embodiment may also use another data source (such as audio), or a combination of multiple data sources, instead of only facial video data as training samples. The network model used for training the emotional state scoring model in this embodiment may be any one of a free support vector machine model, a machine learning model and a neural network learning model.
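Under the assumption that a support vector regressor stands in for the preset network model (any of the model families listed above could be substituted), training could look roughly like the following, reusing the build_training_pairs() sketch given earlier.

```python
# Training sketch: fit a regressor on the feature-to-score correspondence and persist it.
import joblib
from sklearn.svm import SVR

def train_emotion_score_model(sample_videos, score_table,
                              model_path: str = "emotion_score_model.joblib"):
    X, y = build_training_pairs(sample_videos, score_table)
    model = SVR(kernel="rbf")
    model.fit(X, y)          # learn the correspondence between features and scores
    joblib.dump(model, model_path)
    return model
```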
And step S300, recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
After the emotional state score is obtained through the emotional state scoring model, the emotional state score is recorded, and the facial features corresponding to it are recorded correspondingly, so that the data are recorded and can be conveniently checked and reproduced later.
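One possible, non-authoritative way to record the score together with its corresponding facial features (step S300) is to append them to a CSV file; the file name and column layout are illustrative choices.

```python
# Append each emotional state score and its facial feature vector to a CSV record.
import csv

def record_result(score: float, features, path: str = "emotion_scores.csv") -> None:
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([score, *map(float, features)])
```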
Therefore, the emotional state score corresponding to the facial features in the face video is determined through the preset emotional state scoring model, and after the face video of the person to be tested is input into the emotional state scoring model, the corresponding emotional state score can be obtained directly. The emotional state score obtained in this embodiment has the same value as the scale evaluation score of the corresponding emotional state scale, can serve as an objective basis for evaluating the mental and emotional state in diagnosis and treatment, and its result is reproducible, so the influence of subjective human factors is eliminated. For example, the anxiety-related evaluation score obtained by this technical scheme is consistent with the evaluation score obtained using a Hamilton anxiety scale.
Exemplary device
As shown in fig. 3, an embodiment of the present invention provides an emotional state assessment apparatus, including: a video acquisition unit 310, a score determination unit 320, and a data recording unit 330. Specifically, the video acquisition unit 310 is configured to acquire a face video of a person to be tested and preprocess the face video to obtain a target face video. The score determination unit 320 is configured to input the target face video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target face video, the emotional state scoring model being trained based on the correspondence between facial features and emotional state scores. The data recording unit 330 is configured to record the emotional state score and correspondingly record the facial features corresponding to the emotional state score.
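For illustration only, the three units could be wired together as a plain Python class reusing the helper sketches above; the names and structure are assumptions, not the apparatus itself.

```python
# Illustrative wiring of the three units of the emotional state assessment apparatus.
class EmotionalStateAssessor:
    def __init__(self, model_path: str = "emotion_score_model.joblib"):
        self.model_path = model_path

    def acquire(self, src_path: str, dst_path: str) -> str:   # video acquisition unit 310
        preprocess_face_video(src_path, dst_path)
        return dst_path

    def score(self, target_video_path: str):                  # score determination unit 320
        return score_emotional_state(target_video_path, self.model_path)

    def record(self, score: float, features) -> None:         # data recording unit 330
        record_result(score, features)
```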
Based on the above embodiments, the present invention further provides an intelligent terminal, a schematic block diagram of which may be as shown in fig. 4. The intelligent terminal comprises a processor, a memory, a network interface, a display screen and an image sensor connected through a system bus. The processor of the intelligent terminal provides computing and control capabilities. The memory of the intelligent terminal comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the intelligent terminal is used to connect to and communicate with external terminals through a network. The computer program, when executed by the processor, implements the emotional state assessment method. The display screen of the intelligent terminal may be any of various display screens such as an OLED display screen, a liquid crystal display screen or an electronic ink display screen, and the image sensor of the intelligent terminal is arranged inside the intelligent terminal in advance and used to acquire the face video of the person to be tested. The image sensor may also be a webcam located on a network.
It will be understood by those skilled in the art that the block diagram shown in fig. 4 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have a different arrangement of components.
In one embodiment, an intelligent terminal is provided that includes a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video;
inputting the target face video into a preset emotion state score model to obtain an emotion state score corresponding to the face feature of the target face video, wherein the emotion state score model is trained on the basis of the corresponding relation between the face feature and the emotion state score;
and recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
In summary, the invention discloses an emotional state assessment method and device, an intelligent terminal and a storage medium, wherein the method comprises the following steps: acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video; inputting the target face video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target face video, the emotional state scoring model being trained on the basis of the correspondence between facial features and emotional state scores; and recording the emotional state score together with the corresponding facial features. Because the emotional state score corresponding to the facial features in the face video is determined through the preset emotional state scoring model, the score is obtained objectively and conveniently, interference from human factors is avoided, and an objective basis is provided for subsequent analysis and judgment.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method of emotional state assessment, the method comprising:
acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video;
inputting the target face video into a preset emotion state score model to obtain an emotion state score corresponding to the face feature of the target face video, wherein the emotion state score model is trained on the basis of the corresponding relation between the face feature and the emotion state score;
and recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
2. The emotional state assessment method of claim 1, wherein the preprocessing the facial video comprises:
after the face video of the person to be tested is obtained, filtering the face video part, screening out a video segment without a face image, and cutting off the video segment without the face image to obtain the target face video.
3. The method for assessing an emotional state according to claim 1, wherein the inputting the target facial video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target facial video comprises:
after the target face video is input into the emotion state scoring model, the emotion state scoring model carries out face feature extraction on the target image;
acquiring the facial features, and acquiring emotion state scores corresponding to the facial features by using the emotion state scoring model;
and outputting the emotional state score and the corresponding facial features.
4. The emotional state assessment method of claim 3, wherein the emotional state score model is generated in a manner comprising:
collecting a video sample, wherein the video sample comprises facial images with various emotional expressions;
acquiring the facial image from the video sample, and acquiring facial features according to the facial image, wherein the facial features are used for reflecting emotional expressions corresponding to the facial image;
determining an emotional state score corresponding to the facial feature according to a preset scoring table, and generating a corresponding relation between the facial feature and the emotional state score;
and inputting the corresponding relation into a preset network model for training to generate the emotional state scoring model.
5. The emotional state assessment method of claim 4, wherein the facial features comprise emotional features, facial key point motion features, and facial motion parts of the facial image.
6. The emotional state assessment method according to claim 4, wherein the score table stores therein a correspondence of various facial features and the emotional state score corresponding to each of the facial features, the correspondence being a one-to-one correspondence.
7. The emotional state assessment method according to claim 4, wherein the preset network model comprises any one of a free support vector machine model, a machine learning model, and a neural network learning model.
8. An emotional state assessment apparatus, the apparatus comprising:
the video acquisition unit is used for acquiring a face video of a person to be tested and preprocessing the face video to obtain a target face video;
the score determining unit is used for inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the face feature of the target face video, and the emotion state scoring model is trained on the basis of the corresponding relation between the face feature and the emotion state score;
and the data recording unit is used for recording the emotional state scores and correspondingly recording the facial features corresponding to the emotional state scores.
9. An intelligent terminal comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein the one or more programs being configured to be executed by the one or more processors comprises instructions for performing the method of any of claims 1-7.
10. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-7.
CN202011139913.3A 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium Active CN112472088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011139913.3A CN112472088B (en) 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011139913.3A CN112472088B (en) 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112472088A true CN112472088A (en) 2021-03-12
CN112472088B CN112472088B (en) 2022-11-29

Family

ID=74926844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011139913.3A Active CN112472088B (en) 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112472088B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194323A (en) * 2021-04-27 2021-07-30 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN114757499A (en) * 2022-03-24 2022-07-15 慧之安信息技术股份有限公司 Working quality analysis method based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140315168A1 (en) * 2013-02-12 2014-10-23 Emotient Facial expression measurement for assessment, monitoring, and treatment evaluation of affective and neurological disorders
CN107205731A (en) * 2015-02-13 2017-09-26 欧姆龙株式会社 Health control servicing unit and health control householder method
CN109171769A (en) * 2018-07-12 2019-01-11 西北师范大学 It is a kind of applied to depression detection voice, facial feature extraction method and system
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN110717542A (en) * 2019-10-12 2020-01-21 广东电网有限责任公司 Emotion recognition method, device and equipment
US20200134296A1 (en) * 2018-10-25 2020-04-30 Adobe Inc. Automated image capture based on emotion detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140315168A1 (en) * 2013-02-12 2014-10-23 Emotient Facial expression measurement for assessment, monitoring, and treatment evaluation of affective and neurological disorders
CN107205731A (en) * 2015-02-13 2017-09-26 欧姆龙株式会社 Health control servicing unit and health control householder method
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN109171769A (en) * 2018-07-12 2019-01-11 西北师范大学 It is a kind of applied to depression detection voice, facial feature extraction method and system
US20200134296A1 (en) * 2018-10-25 2020-04-30 Adobe Inc. Automated image capture based on emotion detection
CN110717542A (en) * 2019-10-12 2020-01-21 广东电网有限责任公司 Emotion recognition method, device and equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴春芳: "Analysis of emotional disorder factors in patients with dizziness" (头晕患者的情绪障碍因素分析), Contemporary Medicine (当代医学) *
岳涛 et al.: "The effect of rational emotive therapy on anxiety and depression in patients with vertigo" (理性情绪疗法对眩晕患者焦虑抑郁的影响), Nursing Practice and Research (护理实践与研究) *
李金虹 et al.: "Study on the relationship of anxiety and depression with other symptoms in patients with Parkinson's disease" (帕金森病患者焦虑及抑郁与其他症状关系的研究), Chinese General Practice (中国全科医学) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194323A (en) * 2021-04-27 2021-07-30 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN113194323B (en) * 2021-04-27 2023-11-10 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN114757499A (en) * 2022-03-24 2022-07-15 慧之安信息技术股份有限公司 Working quality analysis method based on deep learning
CN114757499B (en) * 2022-03-24 2022-10-21 慧之安信息技术股份有限公司 Working quality analysis method based on deep learning

Also Published As

Publication number Publication date
CN112472088B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN111225612A (en) Neural obstacle identification and monitoring system based on machine learning
CN112472088B (en) Emotional state evaluation method and device, intelligent terminal and storage medium
US20160029965A1 (en) Artifact as a feature in neuro diagnostics
CN110974258A (en) Systems and methods for diagnosing depression and other medical conditions
CN108305680B (en) Intelligent Parkinson's disease auxiliary diagnosis method and device based on multivariate biological characteristics
CN115299947A (en) Psychological scale confidence evaluation method and system based on multi-modal physiological data
Revett et al. Feature selection in Parkinson's disease: A rough sets approach
CN111191639B (en) Vertigo type identification method and device based on eye shake, medium and electronic equipment
CN116601720A (en) Medical diagnostic system and method for artificial intelligence based health conditions
WO2023012818A1 (en) A non-invasive multimodal screening and assessment system for human health monitoring and a method thereof
CN114241565A (en) Facial expression and target object state analysis method, device and equipment
CN113855021B (en) Depression tendency evaluation method and device
Mantri et al. Real time multimodal depression analysis
Mantri et al. Cumulative video analysis based smart framework for detection of depression disorders
CN113705435A (en) Behavior acquisition method, behavior acquisition device, behavior acquisition terminal and storage medium for symptom evaluation
Saraguro et al. Analysis of hand movements in patients with Parkinson’s Disease using Kinect
KR20160022578A (en) Apparatus for testing brainwave
JP7517431B2 (en) Information processing device, control method, and program
JP7507025B2 (en) DIAGNOSIS SUPPORT SYSTEM, DIAGNOSIS SUPPORT METHOD, AND DIAGNOSIS SUPPORT PROGRAM
CN117633606B (en) Consciousness detection method, equipment and medium based on olfactory stimulus and facial expression
Villa Monte et al. A support system for the diagnosis of balance pathologies
US20220151482A1 (en) Biometric ocular measurements using deep learning
Yu et al. An Accelerometer Based Gait Analysis System to Detect Gait Abnormalities in Cerebralspinal Meningitis Patients
MONTE et al. A Support System for the Diagnosis of Balance Pathologies
CN118136232A (en) Multi-mode deep learning-based parkinsonism early detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant