CN112472088B - Emotional state evaluation method and device, intelligent terminal and storage medium - Google Patents

Emotional state evaluation method and device, intelligent terminal and storage medium

Info

Publication number
CN112472088B
CN112472088B (application CN202011139913.3A)
Authority
CN
China
Prior art keywords
facial
video
features
emotional state
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011139913.3A
Other languages
Chinese (zh)
Other versions
CN112472088A (en)
Inventor
周永进
武帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202011139913.3A priority Critical patent/CN112472088B/en
Publication of CN112472088A publication Critical patent/CN112472088A/en
Application granted granted Critical
Publication of CN112472088B publication Critical patent/CN112472088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7246 Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Child & Adolescent Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an emotional state assessment method and device, an intelligent terminal and a storage medium. The method comprises: acquiring a facial video of a person to be tested and preprocessing the facial video to obtain a target facial video; inputting the target facial video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target facial video, the emotional state scoring model being trained on the correspondence between facial features and emotional state scores; and recording the emotional state score together with the corresponding facial features. Because the emotional state score corresponding to the facial features in the facial video is determined by the preset emotional state scoring model, the score is obtained objectively and conveniently, interference from human factors is avoided, and an objective basis is provided for subsequent analysis and judgment.

Description

Emotional state evaluation method and device, intelligent terminal and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an emotional state assessment method and device, an intelligent terminal and a storage medium.
Background
Emotion plays a pivotal role in people's lives and, to a large extent, influences their thinking, decision-making and behavior. With growing social and competitive pressure, prolonged heavy mental stress and a persistently bad mood readily cause insomnia and increase the incidence of psychological disorders such as anxiety and depression, threatening people's health and even their lives. Therefore, for people prone to losing control of their emotions, emotion recognition makes it possible to discover emotional abnormality as early as possible, helping to relieve mental stress and improve physical and mental health.
In the prior art, emotional state assessment mainly relies on interview observation, monitoring devices that measure physiological parameters, and scale scoring. Interview observation is essentially a subjective judgment reached through questioning. Physiological parameters measured by monitoring devices are easily distorted by the mental or emotional state itself; for example, measuring blood pressure with a sphygmomanometer can provoke anxiety and raise the reading. Scale scoring is time-consuming, and its accuracy is affected by the rater's experience and subjective judgment.
Thus, there is a need for improvement and development of the prior art.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an emotional state assessment method and device, an intelligent terminal and a storage medium that overcome the dependence of prior-art emotional state assessment on human experience and subjective judgment.
The technical solution adopted by the invention to solve this problem is as follows:
in a first aspect, an embodiment of the present invention provides an emotional state assessment method, where the method includes:
obtaining a face video of a person to be tested, and preprocessing the face video to obtain a target face video;
inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the facial features of the target face video, wherein the emotion state scoring model is formed by training based on the corresponding relation between the facial features and the emotion state score;
and recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
In one implementation, the pre-processing the face video includes:
and after the face video of the person to be tested is obtained, filtering the face video part, screening out a video segment without a face image, and cutting off the video segment without the face image to obtain the target face video.
In one implementation manner, the inputting the target face video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target face video includes:
after the target face video is input into the emotion state scoring model, the emotion state scoring model carries out facial feature extraction on the target face video;
acquiring the facial features, and acquiring emotion state scores corresponding to the facial features by using the emotion state scoring model;
and outputting the emotional state score and the corresponding facial features.
In one implementation, the emotional state scoring model is generated by:
collecting a video sample, wherein the video sample comprises facial images with various emotional expressions;
acquiring the facial image from the video sample, and acquiring facial features according to the facial image, wherein the facial features are used for reflecting emotional expressions corresponding to the facial image;
determining an emotional state score corresponding to the facial feature according to a preset scoring table, and generating a corresponding relation between the facial feature and the emotional state score;
and inputting the corresponding relation into a preset network model for training to generate the emotional state scoring model.
In one implementation, the facial features include emotional features, facial keypoint motion features, and facial motion parts of the facial image.
In one implementation, the score table stores a correspondence between various facial features and emotional state scores corresponding to each facial feature, and the correspondence is a one-to-one correspondence.
In one implementation, the preset network model includes any one of a support vector machine model, a machine learning model, and a neural network learning model.
In a second aspect, an embodiment of the present invention further provides an emotional state assessment apparatus, where the apparatus includes:
the video acquisition unit is used for acquiring a face video of a person to be tested and preprocessing the face video to obtain a target face video;
the score determining unit is used for inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the facial features of the target face video, and the emotion state scoring model is formed by training based on the corresponding relation between the facial features and the emotion state score;
and the data recording unit is used for recording the emotional state scores and correspondingly recording the facial features corresponding to the emotional state scores.
In a third aspect, the present invention also provides an intelligent terminal, which includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and include instructions for executing the emotional state assessment method described in any of the above.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the emotional state assessment method according to any one of the above.
The invention has the following beneficial effects: the emotional state score corresponding to the facial features in a facial video is determined by a preset emotional state scoring model, so that once the facial video of the person to be tested is input into the model, the corresponding emotional state score can be obtained directly.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an emotional state assessment method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a generation flow of an emotional state scoring model in the emotional state assessment method according to the embodiment of the present invention.
Fig. 3 is a schematic block diagram of an emotional state assessment apparatus according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it.
It should be noted that if directional indications (such as up, down, left, right, front, rear, etc.) are involved in the embodiments of the present invention, they are only used to explain the relative positional relationship, motion and the like of the components in a specific posture (as shown in the figures); if that posture changes, the directional indications change accordingly.
In the prior art, assessment of the mental and emotional state mainly relies on interview observation, monitoring devices that measure physiological parameters, and scale scoring. Interview observation is essentially a subjective judgment reached through questioning. Physiological parameters measured by monitoring devices are easily distorted by the mental or emotional state itself; for example, measuring blood pressure with a sphygmomanometer can provoke anxiety and raise the reading. Scale scoring is time-consuming, and its accuracy is affected by the rater's experience and subjective judgment.
Therefore, to solve these problems in the prior art, an embodiment of the present invention provides an emotional state assessment method that uses a preset emotional state scoring model to determine the emotional state score corresponding to the facial features in a facial video: once the facial video of the person to be tested is input into the model, the corresponding emotional state score can be obtained directly.
As medical concepts progress, psychological problems caused by physiological diseases are receiving more and more attention from clinicians. Actively providing psychological treatment according to the symptoms helps relieve them, improves patients' compliance during treatment and increases their sense of well-being; otherwise, recovery is hindered. Emotion, as a high-level function of the brain, reflects a person's cognitive and mental state. A computer-aided system is therefore used to judge a person's mental and emotional state and provide the physician with objective, accurate information about it. This embodiment provides an emotional state assessment method based on such a computer-aided system, which can be applied to an intelligent terminal and, as shown in fig. 1, includes the following steps:
s100, obtaining a face video of a person to be tested, and preprocessing the face video to obtain a target face video.
In this embodiment, the emotional state score corresponding to the facial features must be determined from the facial video, so the facial video of the person to be tested is acquired first and then preprocessed. In a specific implementation, a video acquisition device, for example a camera, captures the facial video of the person to be tested. The recorded facial video may contain segments without any face image, while only the segments that do contain a face actually need to be analyzed; leaving invalid segments in the video reduces the processing efficiency of the subsequent steps. Therefore, to obtain a clean facial video (one without face-free segments), this embodiment screens the facial video for segments without a face image and cuts them out, yielding the target facial video. Because the target facial video contains only segments with a face image, the facial features can be extracted more quickly during feature extraction in the subsequent steps. In one embodiment, after the facial video is acquired, it may also be denoised to improve its quality.
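The following is a minimal sketch of this preprocessing step, assuming OpenCV and its bundled Haar face detector are available; the frame-by-frame filtering strategy and the detector parameters are illustrative assumptions rather than the implementation prescribed by the patent.

```python
import cv2

def extract_face_frames(video_path):
    """Keep only the frames of the input video in which a face is detected."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    kept = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:      # discard segments without a face image
            kept.append(frame)
    cap.release()
    return kept                 # frames forming the target face video
```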
Step S200, inputting the target face video into a preset emotion state score model to obtain an emotion state score corresponding to the face feature of the target face video, wherein the emotion state score model is formed by training based on the corresponding relation between the face feature and the emotion state score.
The emotional state scoring model in this embodiment is preset and can be invoked directly. After the target face video is input into the model, the model extracts the facial features from the target face video and then obtains the emotional state score corresponding to those facial features. Because the model is trained on the correspondence between facial features and emotional state scores, this correspondence is embedded in the model, so the emotional state score can be determined automatically. Once the score is determined, this embodiment outputs the emotional state score together with the corresponding facial features.
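Gathered into one function, step S200 could look like the hedged sketch below; extract_face_frames is the helper sketched above, while extract_facial_features and the trained scoring_model are assumed placeholders, not interfaces defined by the patent.

```python
def assess_emotional_state(video_path, scoring_model,
                           extract_face_frames, extract_facial_features):
    """Sketch of step S200: preprocess, extract features, score, and return both."""
    frames = extract_face_frames(video_path)              # target face video (step S100)
    features = extract_facial_features(frames)            # facial feature extraction
    score = float(scoring_model.predict([features])[0])   # emotional state score
    return score, features                                # score and corresponding features
```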
In one implementation, the generation of the emotional state scoring model in this embodiment, as shown in fig. 2, involves the following steps:
step S201, after the target face video is input into the emotion state scoring model, the emotion state scoring model carries out face feature extraction on the target image;
step S202, acquiring the facial features, and acquiring emotion state scores corresponding to the facial features by using the emotion state scoring model;
and step S203, outputting the emotional state score and the corresponding facial features.
Specifically, a video sample is first collected; the video sample contains facial images with various different emotional expressions. The facial images are obtained from the video sample, and facial features are derived from them; these facial features reflect the emotional expression corresponding to each facial image. For example, the emotional expressions may be: neutral (normal), happiness, fear, surprise, disgust, sadness and anger, and each emotional expression has its own facial features. Next, the emotional state score corresponding to each facial feature is determined according to a preset scoring table, and the correspondence between the facial features and the emotional state scores is generated. In a specific implementation, the scoring table is preset and stores various facial features together with the emotional state score corresponding to each of them. After the facial features are obtained from a facial image, the corresponding emotional state score can be looked up in the scoring table; the determined emotional state score is then paired with the facial features, and these correspondences are input into a preset network model for training, producing the emotional state scoring model.
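A small sketch, under stated assumptions, of how such feature-to-score correspondences might be assembled as training data; the table entries, labels and scores are hypothetical placeholders, not values taken from the patent or from any clinical scale.

```python
# Hypothetical scoring table: emotional expression label -> emotional state score.
SCORING_TABLE = {
    "neutral": 0, "happiness": 0, "surprise": 1,
    "disgust": 2, "anger": 2, "fear": 3, "sadness": 3,
}

def build_training_pairs(samples):
    """samples: iterable of (feature_vector, expression_label) pairs from the video sample."""
    pairs = []
    for features, expression in samples:
        score = SCORING_TABLE[expression]   # look up the preset emotional state score
        pairs.append((features, score))     # correspondence fed to the network model
    return pairs
```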
In one embodiment, the correspondence in the scoring table between each facial feature and its emotional state score may be one-to-one. The scoring table can be the Hamilton anxiety scale: it stores the facial features corresponding to Hamilton anxiety symptoms and the emotional state score corresponding to each facial feature, and the trained emotional state scoring model can then obtain these facial features and the corresponding scores automatically from a facial video. To determine emotional state scores in different scenarios or for different pathologies, the scoring table can be replaced; for example, the Hamilton anxiety scale can be replaced with another scale for evaluating emotional state, such as the Hamilton Depression Rating Scale (HAMD-17), or with a scale for evaluating cognitive state, such as the Unified Parkinson's Disease Rating Scale (UPDRS). The application scenarios of this embodiment can thus be extended to emotional state scoring of facial features in different diseases.
In one embodiment, the facial features include the emotional features of the facial image, facial key-point motion features, and facial action parts. Extracting facial features of multiple dimensions from the facial video allows a more accurate emotional state scoring model to be trained. Specifically, the emotional features in this embodiment include the seven basic emotions: neutral (normal), happiness, fear, surprise, disgust, sadness and anger. The facial video may also contain a distribution of mixed emotions, i.e., combinations of two or more of the basic emotions, and the emotional features may further include the change of each emotion over time. The facial key-point motion features in this embodiment include the frequency features of one or more facial key points and the motion-trajectory features of one or more facial key points. The facial action parts may likewise be a single type of feature or a combination of two or more types. In this way the facial features span multiple dimensions, and a more accurate emotional state scoring model can be trained.
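As one illustration of the key-point motion features mentioned above, the sketch below derives a frequency feature (via an FFT of a key point's vertical trajectory) and a simple trajectory feature for a single facial key point; landmark extraction itself and the exact feature definitions are assumptions outside the patent text.

```python
import numpy as np

def keypoint_motion_features(trajectory, fps):
    """trajectory: (n_frames, 2) array of one facial key point's (x, y) positions."""
    y = trajectory[:, 1] - trajectory[:, 1].mean()
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    dominant_freq = freqs[spectrum.argmax()]            # frequency feature
    steps = np.diff(trajectory, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()   # motion-trajectory feature
    return {"dominant_freq_hz": float(dominant_freq),
            "path_length_px": float(path_length)}
```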
Of course, when training the emotional state scoring model, this embodiment may also use another data source (such as audio), or a combination of several data sources, instead of facial video data as the training sample. The network model used to train the emotional state scoring model in this embodiment may be any one of a support vector machine model, a machine learning model and a neural network learning model.
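A minimal sketch of training such a scoring model with a support vector machine, one of the model types named above, assuming scikit-learn; the synthetic data, feature dimensionality and hyperparameters are illustrative assumptions only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # facial feature vectors (placeholder data)
y = rng.integers(0, 5, size=200)          # emotional state scores from the scoring table

scoring_model = SVR(kernel="rbf", C=1.0)  # regression from facial features to score
scoring_model.fit(X, y)
predicted_score = scoring_model.predict(X[:1])  # score for one new feature vector
```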
And step S300, recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
After the emotional state score is obtained from the emotional state scoring model, the score is recorded together with the corresponding facial features, so that the data can later be reviewed and reproduced.
In summary, this embodiment determines the emotional state score corresponding to the facial features in a facial video through a preset emotional state scoring model, so the corresponding score can be obtained directly once the facial video of the person to be tested is input into the model. The emotional state score obtained in this way is equivalent in value to the assessment score of the corresponding emotional state scale, can serve as an objective basis for evaluating the mental and emotional state during diagnosis and treatment, and is reproducible, thereby eliminating the influence of subjective human factors. For example, the anxiety-related assessment score obtained with this technical solution is consistent with the score obtained using the Hamilton anxiety scale.
Exemplary device
As shown in fig. 3, an embodiment of the present invention provides an emotional state assessment apparatus, comprising a video acquisition unit 310, a score determination unit 320 and a data recording unit 330. Specifically, the video acquisition unit 310 is configured to acquire a facial video of the person to be tested and preprocess it to obtain a target facial video. The score determination unit 320 inputs the target facial video into a preset emotional state scoring model and obtains the emotional state score corresponding to the facial features of the target facial video, the model being trained on the correspondence between facial features and emotional state scores. The data recording unit 330 records the emotional state score together with the corresponding facial features.
Based on the above embodiments, the present invention further provides an intelligent terminal, whose block diagram may be as shown in fig. 4. The intelligent terminal comprises a processor, a memory, a network interface, a display screen and an image sensor connected through a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides the environment in which they run. The network interface communicates with external terminals over a network. When executed by the processor, the computer program implements the emotional state assessment method. The display screen may be an OLED, liquid-crystal, electronic-ink or other type of display, and the image sensor is built into the intelligent terminal in advance to acquire the facial video of the person to be tested; the image sensor may also be a network camera.
It will be understood by those skilled in the art that the block diagram shown in fig. 4 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have a different arrangement of components.
In one embodiment, an intelligent terminal is provided that includes a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video;
inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the facial features of the target face video, wherein the emotion state scoring model is formed by training based on the corresponding relation between the facial features and the emotion state score;
and recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores.
It will be understood by those skilled in the art that all or part of the processes of the above-described methods can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, databases or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the invention discloses an emotional state assessment method and device, an intelligent terminal and a storage medium. The method comprises: acquiring a facial video of a person to be tested and preprocessing it to obtain a target facial video; inputting the target facial video into a preset emotional state scoring model to obtain the emotional state score corresponding to the facial features of the target facial video, the model being trained on the correspondence between facial features and emotional state scores; and recording the emotional state score together with the corresponding facial features. Because the emotional state score corresponding to the facial features in the facial video is determined by the preset emotional state scoring model, the score is obtained objectively and conveniently, interference from human factors is avoided, and an objective basis is provided for subsequent analysis and judgment.
It will be understood that the invention is not limited to the examples described above, but that modifications and variations will occur to those skilled in the art in light of the above teachings, and that all such modifications and variations are considered to be within the scope of the invention as defined by the appended claims.

Claims (9)

1. A method of emotional state assessment, the method comprising:
acquiring a face video of a person to be tested, and preprocessing the face video to obtain a target face video;
inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the facial features of the target face video, wherein the emotion state scoring model is formed by training based on the corresponding relation between the facial features and the emotion state score;
recording the emotional state scores, and correspondingly recording the facial features corresponding to the emotional state scores;
the step of inputting the target face video into a preset emotional state scoring model to obtain the facial features of the target face video comprises the following steps:
the emotion state scoring model extracts facial features of multiple dimensions from the target facial video, wherein the facial features comprise emotion features of facial images in the target facial video, facial key point motion features and facial action parts, the emotion features comprise various emotions changing along with time, the facial key point motion features comprise frequency features of single or multiple facial key points and motion track features of single or multiple facial key points, and the facial action parts are features of single or multiple types of combination.
2. The emotional state assessment method of claim 1, wherein the preprocessing the facial video comprises:
and after the face video of the person to be tested is obtained, filtering the face video part, screening out a video segment without a face image, and cutting off the video segment without the face image to obtain the target face video.
3. The emotional state assessment method according to claim 1, wherein the inputting the target face video into a preset emotional state scoring model to obtain an emotional state score corresponding to the facial features of the target face video comprises:
after the target face video is input into the emotion state scoring model, the emotion state scoring model carries out face feature extraction on the target face video;
acquiring the facial features, and acquiring emotion state scores corresponding to the facial features by using the emotion state scoring model;
and outputting the emotional state score and the corresponding facial features.
4. The emotional state assessment method of claim 3, wherein the emotional state score model is generated in a manner comprising:
collecting a video sample, wherein the video sample comprises facial images with various emotional expressions;
acquiring the facial images with different emotional expressions from the video sample, and acquiring facial features according to the facial images with different emotional expressions, wherein the facial features are used for reflecting the emotional expressions corresponding to the facial images with different emotional expressions;
determining an emotional state score corresponding to the facial feature according to a preset scoring table, and generating a corresponding relation between the facial feature and the emotional state score;
and inputting the corresponding relation into a preset network model for training to generate the emotional state scoring model.
5. The emotional state assessment method according to claim 4, wherein the score table stores a correspondence between each of the facial features and the emotional state score corresponding to each of the facial features, the correspondence being a one-to-one correspondence.
6. The emotional state assessment method according to claim 4, wherein the predetermined network model comprises any one of a support vector machine model and a neural network learning model.
7. An emotional state assessment apparatus, comprising:
the video acquisition unit is used for acquiring a face video of a person to be tested and preprocessing the face video to obtain a target face video;
the score determining unit is used for inputting the target face video into a preset emotion state scoring model to obtain an emotion state score corresponding to the face feature of the target face video, the emotion state scoring model is trained on the basis of the corresponding relation between the face feature and the emotion state score, and the target face video is input into the preset emotion state scoring model to obtain the face feature of the target face video, and the score determining unit comprises: the emotion state scoring model extracts facial features of multiple dimensions from the target facial video, wherein the facial features comprise emotion features of facial images in the target facial video, facial key point motion features and facial action parts, the emotion features comprise various emotions changing along with time, the facial key point motion features comprise single or multiple frequency features of facial key points and single or multiple motion track features of facial key points, and the facial action parts are single or multiple combined features;
and the data recording unit is used for recording the emotional state scores and correspondingly recording the facial features corresponding to the emotional state scores.
8. An intelligent terminal comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein the one or more programs being configured to be executed by the one or more processors comprises instructions for performing the method of any of claims 1-6.
9. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-6.
CN202011139913.3A 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium Active CN112472088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011139913.3A CN112472088B (en) 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011139913.3A CN112472088B (en) 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112472088A CN112472088A (en) 2021-03-12
CN112472088B true CN112472088B (en) 2022-11-29

Family

ID=74926844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011139913.3A Active CN112472088B (en) 2020-10-22 2020-10-22 Emotional state evaluation method and device, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112472088B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194323B (en) * 2021-04-27 2023-11-10 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN114757499B (en) * 2022-03-24 2022-10-21 慧之安信息技术股份有限公司 Working quality analysis method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107205731A (en) * 2015-02-13 2017-09-26 欧姆龙株式会社 Health control servicing unit and health control householder method
CN109171769A (en) * 2018-07-12 2019-01-11 西北师范大学 It is a kind of applied to depression detection voice, facial feature extraction method and system
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN110717542A (en) * 2019-10-12 2020-01-21 广东电网有限责任公司 Emotion recognition method, device and equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014127065A2 (en) * 2013-02-12 2014-08-21 Emotient Facial expression measurement for assessment, monitoring, and treatment evaluation of affective and neurological disorders
US10755087B2 (en) * 2018-10-25 2020-08-25 Adobe Inc. Automated image capture based on emotion detection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107205731A (en) * 2015-02-13 2017-09-26 欧姆龙株式会社 Health control servicing unit and health control householder method
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN109171769A (en) * 2018-07-12 2019-01-11 西北师范大学 It is a kind of applied to depression detection voice, facial feature extraction method and system
CN110717542A (en) * 2019-10-12 2020-01-21 广东电网有限责任公司 Emotion recognition method, device and equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Analysis of emotional disorder factors in patients with dizziness; Wu Chunfang; Contemporary Medicine (《当代医学》); 2014-11-25 (No. 33); full text *
Study on the relationship between anxiety, depression and other symptoms in patients with Parkinson's disease; Li Jinhong et al.; Chinese General Practice (《中国全科医学》); 2016-06-15 (No. 17); full text *
Effect of rational emotive therapy on anxiety and depression in patients with vertigo; Yue Tao et al.; Nursing Practice and Research (《护理实践与研究》); 2016-06-25 (No. 12); full text *

Also Published As

Publication number Publication date
CN112472088A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
CN109157231B (en) Portable multichannel depression tendency evaluation system based on emotional stimulation task
CN111225612A (en) Neural obstacle identification and monitoring system based on machine learning
CN112472088B (en) Emotional state evaluation method and device, intelligent terminal and storage medium
CN108305680B (en) Intelligent Parkinson's disease auxiliary diagnosis method and device based on multivariate biological characteristics
US20160029965A1 (en) Artifact as a feature in neuro diagnostics
CN110974258A (en) Systems and methods for diagnosing depression and other medical conditions
EP1829025A1 (en) Method and system of indicating a condition of an individual
CN109715049A (en) For the multi-modal physiological stimulation of traumatic brain injury and the agreement and signature of assessment
CN115299947A (en) Psychological scale confidence evaluation method and system based on multi-modal physiological data
Rajinikanth et al. Hand-sketchs based Parkinson's disease screening using lightweight deep-learning with two-Fold training and fused optimal features
WO2023012818A1 (en) A non-invasive multimodal screening and assessment system for human health monitoring and a method thereof
CN114305418A (en) Data acquisition system and method for depression state intelligent evaluation
CN116759075A (en) Psychological disorder inquiry method, device, equipment and medium
JP2021112479A (en) Electrocardiographic signal analyzer and electrocardiographic signal analysis program
CN113855021B (en) Depression tendency evaluation method and device
CN114241565A (en) Facial expression and target object state analysis method, device and equipment
Mantri et al. Real time multimodal depression analysis
Mantri et al. Cumulative video analysis based smart framework for detection of depression disorders
CN116601720A (en) Medical diagnostic system and method for artificial intelligence based health conditions
KR20210157444A (en) Machine Learning-Based Diagnosis Method Of Schizophrenia And System there-of
Saraguro et al. Analysis of hand movements in patients with Parkinson’s Disease using Kinect
KR20160022578A (en) Apparatus for testing brainwave
Sivanesan et al. A Novel Scheme for detection of Parkinson’s disorder from Hand-eye Co-ordination behavior and DaTscan Images
JP7507025B2 (en) DIAGNOSIS SUPPORT SYSTEM, DIAGNOSIS SUPPORT METHOD, AND DIAGNOSIS SUPPORT PROGRAM
Yu et al. An Accelerometer Based Gait Analysis System to Detect Gait Abnormalities in Cerebralspinal Meningitis Patients

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant