CN113610067B - Emotional state display method, device and system


Info

Publication number
CN113610067B
Authority
CN
China
Prior art keywords
image
feature
expression
emotion
emotional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111178893.5A
Other languages
Chinese (zh)
Other versions
CN113610067A (en)
Inventor
栗觅
胡斌
吕胜富
康嘉明
杨闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202111178893.5A priority Critical patent/CN113610067B/en
Publication of CN113610067A publication Critical patent/CN113610067A/en
Priority to PCT/CN2021/133513 priority patent/WO2023060720A1/en
Application granted granted Critical
Publication of CN113610067B publication Critical patent/CN113610067B/en
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety

Abstract

The disclosure provides an emotional state display method, device and system. The method includes the following steps: acquiring a face image of a tested person based on an emotional stimulation signal; inputting the face image into a first network model to obtain an emotion index; performing feature extraction on the face image to obtain an expression feature image; enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image; and superimposing the target feature image on the face image to obtain an expression mode image for showing the emotional state. By obtaining the emotion index through the first network model, the method can preliminarily assess the risk of an abnormal emotional state of the tested person; by performing feature extraction and partial enhancement on the face image, it can further acquire image information related to the emotional state; and the expression mode image obtained by superimposing the target feature image on the face image displays the emotional state of the tested person intuitively.

Description

Emotional state display method, device and system
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to an emotional state display method, device and system.
Background
With socioeconomic development, people's expectations for the quality and efficiency of study, work and daily life have risen, and psychological pressure has increased accordingly. Prolonged psychological stress, if not relieved, can produce psychological abnormalities that progress to anxiety and depression. According to a China News Network (Chinanews) report of December 29, 2020, after patients emerge from the intensive care unit, anxiety symptoms occur in about 40% of them and depression symptoms occur in about 30%.
Psychological stress, anxiety and depression all manifest as emotional disturbances: the main emotional manifestations of stress are feelings of psychological tension, palpitations, dysphoria, emotional instability and the like; the main emotional manifestations of anxiety are fear, restlessness, apprehension and the like; the main emotional manifestations of depression are low mood, feelings of distress, sadness, anhedonia, decreased interest and the like.
If abnormal emotional states such as stress, anxiety and depression are not accurately screened and evaluated in time, and psychological intervention is not performed promptly, they may develop into anxiety or depressive disorders.
Currently, psychological states are assessed and judged mainly with psychological self-rating scales (such as the GAD-7 anxiety scale and the PHQ-9 depression scale). Because these scales lack emotion indices directly related to mood, their assessment accuracy is low. Their results are usually presented as numerical scores, which require considerable professional knowledge to analyze and interpret; the analysis is strongly influenced by subjective factors, and the assessed person cannot intuitively understand his or her own psychological state.
Disclosure of Invention
The embodiment of the disclosure provides an emotional state display method, device and system, which can improve the accuracy of emotional state display.
Therefore, the embodiment of the disclosure provides the following technical scheme:
in a first aspect, an embodiment of the present disclosure provides an emotional state display method, including:
acquiring a face image of the tested person based on the emotional stimulation signal;
inputting the facial image into a first network model to obtain an emotion index;
extracting the features of the facial image to obtain an expression feature image;
enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image;
and superimposing the target feature image on the face image to obtain an expression mode image for displaying an emotional state.
Optionally, enhancing the region of interest in the expression feature image according to the emotion index to obtain the target feature image includes:
calculating a weight coefficient of the expression feature image according to the emotion index;
multiplying each pixel in the expression feature image by the weight coefficient to obtain the target feature image;
wherein the higher the emotion index, the larger the weight coefficient.
According to the emotional state display method provided by the embodiment, the relevance between the expression mode image and the emotional state can be further improved by calculating the weight coefficient according to the emotion index.
Optionally, calculating the weight coefficient of the expression feature image according to the emotion index includes:
determining whether the emotion index is greater than a set value;
if yes, inputting the face image into a second network model to obtain a first prediction vector;
acquiring a first gradient map of the expression feature image according to the first prediction vector;
calculating the average gradient value of the first gradient map as the weight coefficient of the expression feature image;
if not, inputting the face image into a third network model to obtain a second prediction vector;
acquiring a second gradient map of the expression feature image according to the second prediction vector;
and calculating the average gradient value of the second gradient map as the weight coefficient of the expression feature image.
According to the emotional state display method provided by the embodiment, the weight coefficient is calculated through the first gradient map and the second gradient map, so that the weight coefficient can reflect the importance degree of the corresponding expression characteristic image.
Optionally, performing feature extraction on the facial image to obtain an expression feature image includes:
inputting the facial image into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
generating weight vectors corresponding to the convolution kernels one by one according to the distribution of the expressions in the facial image;
and performing weighted fusion on the sub-feature maps according to their corresponding weight vectors to obtain the expression feature image.
According to the emotional state display method provided by this embodiment, the sub-feature maps are weighted and fused, so that each is enhanced according to its relevance to the emotional state.
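By way of illustration only, this weighted fusion may be sketched in Python (PyTorch). The patent does not specify the architecture of the fourth network model, so the layer sizes and the pooling-plus-linear weight generator below are assumptions:

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    # Sketch of the fourth network model: one sub-feature map per convolution
    # kernel, fused with weights derived from the input face image.
    def __init__(self, in_channels: int = 3, num_kernels: int = 8):
        super().__init__()
        # Each output channel of this convolution is one sub-feature map.
        self.conv = nn.Conv2d(in_channels, num_kernels, kernel_size=3, padding=1)
        # Illustrative weight generator: global pooling plus a linear layer
        # produces one weight per kernel from the input image.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, num_kernels)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        sub_maps = self.conv(face)                                            # (B, K, H, W)
        weights = torch.softmax(self.fc(self.pool(face).flatten(1)), dim=1)   # (B, K)
        # Weighted fusion: sum of sub-feature maps scaled by their weights.
        fused = (sub_maps * weights[:, :, None, None]).sum(dim=1, keepdim=True)
        return fused                                                          # (B, 1, H, W)

# Usage sketch:
# model = WeightedFusion()
# expression_feature = model(torch.randn(1, 3, 224, 224))  # dummy face image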
Optionally, before inputting the facial image into the first network model to obtain the emotion index, the method further includes:
down-sampling the face image;
and performing noise reduction processing on the face image subjected to the down-sampling processing.
According to the emotional state display method provided by the embodiment, downsampling processing is performed on the face image before denoising processing, so that the denoising effect is improved.
Optionally, there are a plurality of face images;
and the target feature images corresponding to the plurality of face images are superimposed on the face image to obtain the expression mode image for showing the emotional state.
According to the emotional state display method provided by this embodiment, superimposing the target feature images generated from a plurality of face images on the face image helps enhance the data related to the emotional state in the expression mode image.
Optionally, the obtaining of the facial image of the subject based on the emotional stimulation signal comprises:
acquiring a plurality of emotions corresponding to the emotional states;
and acquiring face images of the tested person based on emotional stimulation signals in one-to-one correspondence with the emotions.
In a second aspect, an embodiment of the present disclosure provides an emotional state display apparatus, including:
the acquisition module is used for acquiring a facial image of the tested person based on the emotional stimulation signal;
the first data processing module is used for inputting the facial image into a first network model to obtain an emotion index;
the feature extraction module is used for extracting features of the facial image to obtain an expression feature image;
the second data processing module is used for adjusting the intensity of the expression feature image according to the emotion index to obtain a target feature image;
and the third data processing module is used for superimposing the target feature image on the face image to obtain an expression mode image for showing an emotional state.
In a third aspect, an embodiment of the present disclosure provides an emotional state display system, including:
the emotion stimulation module is used for providing video or audio with a set emotion for the tested person to watch or listen to;
the facial image acquisition module is used for acquiring a facial image of the tested person when watching the video or listening to the audio;
the data processing module is used for processing the facial image to obtain an expression mode image for displaying the emotional state and sending the expression mode image to the feedback module;
and the feedback module is used for displaying the expression mode image to the testee.
Optionally, the feedback module includes an emotion abnormality determination module and a display module;
the data processing module is also used for acquiring an emotion index according to the face image and sending the emotion index to the emotion abnormality determination module;
the emotion abnormality determination module is used for generating an emotional abnormality risk level according to the emotion index and sending the risk level to the display module;
and the display module is used for displaying the expression mode image and displaying an early-warning image according to the emotional abnormality risk level.
One or more technical solutions provided in the embodiments of the present disclosure have the following advantages:
the emotion state display method provided by the embodiment of the disclosure can preliminarily confirm the abnormal risk of the emotion state of the tested person by obtaining the emotion index through the first network model, can further obtain image information related to the emotion state by performing feature extraction and partial enhancement on the facial image, and can intuitively display the emotion state of the tested person through the expression mode image obtained by superimposing the target feature image on the facial image.
Drawings
Fig. 1 is a flowchart of an emotional state presentation method according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a structure of an emotional state presentation device according to an embodiment of the present disclosure;
fig. 3 is an emotional state display system according to an embodiment of the disclosure.
Reference numerals:
21: an acquisition module; 22: a first data processing module; 23: a feature extraction module; 24: a second data processing module; 25: a third data processing module;
31: an emotional stimulation module; 32: a facial image acquisition module; 33: a data processing module; 34: and a feedback module.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be described in further detail below with reference to the accompanying drawings in conjunction with the detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The described embodiments of the present disclosure are only some, and not all, embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In the description of the present disclosure, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, technical features involved in different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flowchart of an emotional state display method according to an embodiment of the present disclosure. As shown in Fig. 1, an embodiment of the present disclosure provides an emotional state display method, including the following steps:
s101: an image of the face of the subject is acquired based on the emotional stimulation signal. The emotional stimulus signals include a positive emotional stimulus signal and a negative emotional stimulus signal. Positive emotions may be selected as happy, and negative emotions may be selected as sad, fear, or tension. The emotional state may be selected from anxiety, depression or stress. When the emotional state is anxiety, the face image can be acquired by selecting emotional stimulation signals corresponding to happiness and fear respectively. When the emotional state is depression, the face image can be obtained by selecting emotional stimulation signals corresponding to happiness and sadness. When the emotional state is stress, the face image can be obtained by selecting emotional stimulation signals corresponding to happiness and tension. The emotional stimulus signal may be selected as video, a virtual reality scene, or audio. When the emotional stimulation signal is audio, the expression of the testee listening to the audio with the eyes closed can be collected optionally, and the interference of environmental factors is reduced to the maximum extent.
S102: the face image is input into the first network model to obtain an emotion index. Before the face image is input into the first network model, it may be downsampled, then denoised, and finally cropped, reducing the influence of noise on subsequent data processing. In some embodiments, the higher the emotion index, the greater the risk of an abnormal emotional state. In some embodiments, the first network model is trained by taking face images as training samples, taking the emotion indices corresponding to the face images as labels, and training an initial network model. The first network model may be a convolutional neural network model.
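By way of illustration only, the preprocessing order described here (downsample, then denoise, then crop) followed by emotion-index inference might look as follows in Python; the OpenCV denoising call and the stand-in convolutional network are assumptions, since the embodiment does not fix these details:

import cv2
import torch
import torch.nn as nn

def preprocess(face_bgr, size: int = 112) -> torch.Tensor:
    # Downsample first, then denoise, then crop, as described in S102.
    small = cv2.resize(face_bgr, (size, size), interpolation=cv2.INTER_AREA)  # downsample
    clean = cv2.fastNlMeansDenoisingColored(small, None, 10, 10, 7, 21)       # denoise
    m = size // 8
    crop = clean[m:size - m, m:size - m]                                      # crop borders
    return torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0

# Stand-in first network model: any CNN regressor producing a scalar index.
first_network = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

def emotion_index(face_bgr) -> float:
    with torch.no_grad():
        return first_network(preprocess(face_bgr)).item()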
S103: feature extraction is performed on the face image to obtain an expression feature image. The expression feature image contains the features of the face image that are related to expression.
S104: a region of interest in the expression feature image is enhanced according to the emotion index to obtain a target feature image. The region of interest may be a region that bears on the emotional state.
S105: the target feature image is superimposed on the face image to obtain an expression mode image for showing the emotional state. The expression mode image highlights the facial features related to the emotional state.
By obtaining the emotion index through the first network model, the emotional state display method provided by the embodiments of the present disclosure can preliminarily assess the risk of an abnormal emotional state of the tested person; by performing feature extraction and partial enhancement on the face image, it can further acquire image information related to the emotional state; and the expression mode image obtained by superimposing the target feature image on the face image displays the emotional state of the tested person intuitively.
In some embodiments, it is determined whether the emotion index is greater than a set value. If so, the face image is input into a second network model to obtain a first prediction vector; a first gradient map of the expression feature image is acquired according to the first prediction vector; and the average gradient value of the first gradient map is calculated as the weight coefficient of the expression feature image. If not, the face image is input into a third network model to obtain a second prediction vector; a second gradient map of the expression feature image is acquired according to the second prediction vector; and the average gradient value of the second gradient map is calculated as the weight coefficient of the expression feature image. The second network model and the third network model are deep learning network models.
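By way of illustration only, this gradient-map weighting can be read as Grad-CAM-style channel weighting. The following Python sketch assumes that reading: the prediction head is applied to the expression feature image so that gradients flow back to it, and all function and variable names are illustrative rather than taken from the patent, whose exact wiring between face image, prediction vector and feature map is not specified:

import torch

def weight_coefficient(prediction_model: torch.nn.Module, feature_image: torch.Tensor) -> float:
    # Average gradient of the strongest predicted score with respect to the
    # expression feature image, used as the weight coefficient (Grad-CAM-style).
    feat = feature_image.detach().requires_grad_(True)
    prediction = prediction_model(feat)   # prediction vector
    prediction.max().backward()           # gradient map of the feature image
    return feat.grad.mean().item()        # average gradient value

def select_weight(emotion_index: float, set_value: float,
                  second_model: torch.nn.Module, third_model: torch.nn.Module,
                  feature_image: torch.Tensor) -> float:
    # Second network model when the emotion index exceeds the set value,
    # third network model otherwise, as described above.
    model = second_model if emotion_index > set_value else third_model
    return weight_coefficient(model, feature_image)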
In some embodiments, the face image is input into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels; weight vectors in one-to-one correspondence with the convolution kernels are generated according to the distribution of expressions in the face image; and the sub-feature maps are weight-fused according to their corresponding weight vectors to obtain the expression feature image.
Each pixel in the expression feature image is then multiplied by the weight coefficient to obtain the target feature image, which is superimposed on the face image as sketched below.
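By way of illustration only, steps S104 and S105 (enhancement by the weight coefficient, then superposition on the face image) may be sketched as follows; the colormap and blending ratio are illustrative choices not specified by the patent:

import cv2
import numpy as np

def expression_mode_image(face_bgr: np.ndarray, feature_map: np.ndarray, weight: float) -> np.ndarray:
    # Scale the expression feature image by the weight coefficient (S104),
    # then superimpose it on the face image (S105).
    target = (feature_map * weight).astype(np.float32)              # enhance region of interest
    target = cv2.resize(target, (face_bgr.shape[1], face_bgr.shape[0]))
    target = cv2.normalize(target, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    heatmap = cv2.applyColorMap(target, cv2.COLORMAP_JET)
    return cv2.addWeighted(face_bgr, 0.6, heatmap, 0.4, 0)          # expression mode image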
In some embodiments, obtaining the face image of the tested person based on the emotional stimulation signal includes: acquiring a plurality of emotions corresponding to the emotional state, and acquiring face images of the tested person based on emotional stimulation signals in one-to-one correspondence with those emotions. In this case there are a plurality of face images, and the target feature images corresponding to them are superimposed on the face image to obtain the expression mode image for showing the emotional state.
Fig. 2 is a structural block diagram of an emotional state display device according to an embodiment of the present disclosure. As shown in Fig. 2, an embodiment of the present disclosure provides an emotional state display device, including:
an obtaining module 21, configured to obtain a facial image of the subject based on the emotional stimulation signal;
a first data processing module 22, for inputting the face image into the first network model to obtain an emotion index;
the feature extraction module 23 is configured to perform feature extraction on the facial image to obtain an expression feature image;
the second data processing module 24 is configured to adjust the intensity of the expression feature image according to the emotion index to obtain a target feature image;
and the third data processing module 25 is configured to superimpose the target feature image on the face image to obtain an expression mode image for showing an emotional state.
By obtaining the emotion index through the first network model, the emotional state display device provided by the embodiments of the present disclosure can preliminarily assess the risk of an abnormal emotional state of the tested person; by performing feature extraction and partial enhancement on the face image, it can further acquire image information related to the emotional state; and the expression mode image obtained by superimposing the target feature image on the face image displays the emotional state of the tested person intuitively.
Fig. 3 is a block diagram of an emotional state display system according to an embodiment of the present disclosure. As shown in Fig. 3, an embodiment of the present disclosure provides an emotional state display system, including:
and the emotion stimulation module 31 is used for providing video or audio with set emotion for the testee to watch. The emotion stimulation module can optionally use a display or an earphone of the device (including but not limited to a mobile terminal of a mobile phone, a tablet computer, a desktop computer and a notebook computer) to present videos of positive emotions and negative emotions to the testee. The emotional stimulus signal may be selected to be video or audio.
The facial image acquisition module 32 is used for capturing a face image of the tested person while watching the video or listening to the audio. The facial image acquisition module may synchronously capture face images of the tested person with the camera of the local device (including an external camera; the device being, for example, a mobile phone, tablet computer, desktop computer or notebook computer), and then store the face images on the local device or send them to a server (including but not limited to a local server or a cloud server).
The data processing module 33 is configured to process the face image to obtain an expression mode image for displaying the emotional state, and to send the expression mode image to the feedback module. The data processing module optionally inputs the face image into the first network model to obtain an emotion index, performs feature extraction on the face image to obtain an expression feature image, enhances a region of interest in the expression feature image according to the emotion index to obtain a target feature image, and superimposes the target feature image on the face image to obtain the expression mode image. The face image may be processed on the local device (including but not limited to a mobile phone, tablet computer, desktop computer or notebook computer) or on a server (including but not limited to a local server or a cloud server).
The feedback module 34 is used for displaying the expression mode image to the tested person. The feedback module comprises an emotion abnormality determination module and a display module. The display module may be the display of the local device (including but not limited to a mobile phone, tablet computer, desktop computer or notebook computer). The data processing module is also used for acquiring the emotion index according to the face image and sending it to the emotion abnormality determination module. The emotion abnormality determination module is used for generating an emotional abnormality risk level according to the emotion index and sending the risk level to the display module. The display module is used for displaying the expression mode image and displaying an early-warning image according to the emotional abnormality risk level. The emotional abnormality risk level may be divided, from low to high, into a first level, a second level, a third level and a fourth level according to threshold intervals of the emotion index. At the first level the emotional state is normal, and the corresponding early-warning image may be a white bar; at the second level it is mildly abnormal, with a green bar; at the third level it is moderately abnormal, with a blue bar; and at the fourth level it is severely abnormal, with a red bar.
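By way of illustration only, this level mapping may be sketched as follows; the numeric thresholds are placeholders, since the patent defines the levels by emotion-index intervals but gives no values:

# Risk levels and early-warning images as described above, lowest to highest.
LEVELS = [
    ("level 1", "normal",              "white bar"),
    ("level 2", "mildly abnormal",     "green bar"),
    ("level 3", "moderately abnormal", "blue bar"),
    ("level 4", "severely abnormal",   "red bar"),
]

def risk_level(emotion_index: float, thresholds=(0.25, 0.5, 0.75)):
    # Map an emotion index to (level, state, early-warning image).
    for threshold, level in zip(thresholds, LEVELS):
        if emotion_index <= threshold:
            return level
    return LEVELS[-1]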
The system provided by the embodiment of the present disclosure can judge whether the emotional state is abnormal and how severe the abnormality is, and can display the result to the tested person intuitively; the image-based display is easier to understand than a numerical score. The invention measures and evaluates the emotional state of the tested person more objectively, and therefore has important value for self health management and for improving quality of life.
It is to be understood that the above-described specific embodiments merely illustrate the principles of the present disclosure and are not to be construed as limiting it. Accordingly, any modification, equivalent replacement, improvement or the like made without departing from the spirit and scope of the present disclosure shall fall within the protection scope of the present disclosure. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and bounds, or the equivalents thereof.

Claims (9)

1. An emotional state display method, comprising:
acquiring a face image of the tested person based on the emotional stimulation signal;
inputting the facial image into a first network model to obtain an emotion index;
extracting the features of the facial image to obtain an expression feature image;
enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image;
overlaying the target feature image on the facial image to obtain an expression mode image for displaying an emotional state;
the obtaining of the expression feature image by performing feature extraction on the facial image comprises:
inputting the facial image into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
generating weight vectors corresponding to the convolution kernels one by one according to the distribution of the expressions in the facial image;
and performing weighted fusion on the sub-feature maps according to their corresponding weight vectors to obtain the expression feature image.
2. The emotional state display method of claim 1, wherein enhancing the region of interest in the expressive feature image according to the emotion index to obtain a target feature image comprises:
calculating a weight coefficient of the expression feature image according to the emotion index;
multiplying each pixel in the expression feature image by the weight coefficient to obtain the target feature image;
wherein the higher the emotion index, the larger the weight coefficient.
3. The emotional state display method of claim 2, wherein calculating the weight coefficient of the expression feature image according to the emotion index comprises:
determining whether the emotion index is greater than a set value;
if yes, inputting the face image into a second network model to obtain a first prediction vector;
acquiring a first gradient map of the expression feature image according to the first prediction vector;
calculating the average gradient value of the first gradient map as the weight coefficient of the expression feature image;
if not, inputting the face image into a third network model to obtain a second prediction vector;
acquiring a second gradient map of the expression feature image according to the second prediction vector;
and calculating the average gradient value of the second gradient map as the weight coefficient of the expression feature image.
4. The emotional state display method of claim 1, wherein, before inputting the facial image into the first network model to obtain the emotion index, the method further comprises:
down-sampling the face image;
and performing noise reduction processing on the face image subjected to the down-sampling processing.
5. The emotional state display method according to any of claims 1-4, wherein there are a plurality of face images;
and target feature images corresponding to the plurality of face images are superimposed on the face image to obtain an expression mode image for showing the emotional state.
6. The emotional state display method of claim 5, wherein obtaining the face image of the tested person based on the emotional stimulation signal comprises:
acquiring a plurality of emotions corresponding to the emotional states;
and acquiring face images of the tested person based on emotional stimulation signals in one-to-one correspondence with the emotions.
7. An emotional state display device, comprising:
the acquisition module is used for acquiring a facial image of the tested person based on the emotional stimulation signal;
the first data processing module is used for inputting the facial image into a first network model to obtain an emotion index;
the feature extraction module is used for extracting features of the facial image to obtain an expression feature image; the obtaining of the expression feature image by performing feature extraction on the facial image comprises:
inputting the facial image into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
generating weight vectors corresponding to the convolution kernels one by one according to the distribution of the expressions in the facial image;
performing weighted fusion on the sub-feature maps according to their corresponding weight vectors to obtain the expression feature image;
the second data processing module is used for adjusting the intensity of the expression feature image according to the emotion index to obtain a target feature image;
and the third data processing module is used for superimposing the target feature image on the face image to obtain an expression mode image for showing an emotional state.
8. An emotional state display system, comprising:
the emotion stimulation module is used for providing video or audio with a set emotion to the tested person;
the facial image acquisition module is used for acquiring a facial image of the tested person when watching the video or listening to the audio;
the data processing module is used for processing the facial image to obtain an expression mode image for displaying the emotional state and sending the expression mode image to the feedback module; the data processing module is used for inputting the facial image into the first network model to obtain an emotion index, performing feature extraction on the facial image to obtain an expression feature image, enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image, and overlaying the target feature image on the facial image to obtain an expression mode image for displaying an emotional state; the obtaining of the expression feature image by performing feature extraction on the facial image comprises:
inputting the facial image into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
generating weight vectors corresponding to the convolution kernels one by one according to the distribution of the expressions in the facial image;
performing weighted fusion on the sub-feature maps according to their corresponding weight vectors to obtain the expression feature image;
and the feedback module is used for displaying the expression mode image to the testee.
9. The emotional state display system of claim 8, wherein the feedback module comprises an emotion abnormality determination module and a display module;
the data processing module is also used for acquiring an emotion index according to the facial image and sending the emotion index to the emotion abnormality determination module;
the emotion abnormality determination module is used for generating an emotional abnormality risk level according to the emotion index and sending the risk level to the display module;
and the display module is used for displaying the expression mode image and displaying an early-warning image according to the emotional abnormality risk level.
CN202111178893.5A 2021-10-11 2021-10-11 Emotional state display method, device and system Active CN113610067B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111178893.5A CN113610067B (en) 2021-10-11 2021-10-11 Emotional state display method, device and system
PCT/CN2021/133513 WO2023060720A1 (en) 2021-10-11 2021-11-26 Emotional state display method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111178893.5A CN113610067B (en) 2021-10-11 2021-10-11 Emotional state display method, device and system

Publications (2)

Publication Number Publication Date
CN113610067A (en) 2021-11-05
CN113610067B (en) 2021-12-28

Family

ID=78343487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111178893.5A Active CN113610067B (en) 2021-10-11 2021-10-11 Emotional state display method, device and system

Country Status (2)

Country Link
CN (1) CN113610067B (en)
WO (1) WO2023060720A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610067B (en) * 2021-10-11 2021-12-28 北京工业大学 Emotional state display method, device and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6186145B1 (en) * 1994-05-23 2001-02-13 Health Hero Network, Inc. Method for diagnosis and treatment of psychological and emotional conditions using a microprocessor-based virtual reality simulator
CN105559802A (en) * 2015-07-29 2016-05-11 北京工业大学 Tristimania diagnosis system and method based on attention and emotion information fusion
CN110147822A (en) * 2019-04-16 2019-08-20 北京师范大学 A kind of moos index calculation method based on the detection of human face action unit

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002007568A (en) * 2000-06-27 2002-01-11 Kaihatsu Komonshitsu:Kk Diagnostic system, diagnostic data generating method, information processing apparatus used for them, terminal device, and recording medium
CN105635574B (en) * 2015-12-29 2019-02-19 小米科技有限责任公司 The treating method and apparatus of image
CN106060572A (en) * 2016-06-08 2016-10-26 乐视控股(北京)有限公司 Video playing method and device
CN106341608A (en) * 2016-10-28 2017-01-18 维沃移动通信有限公司 Emotion based shooting method and mobile terminal
TWM573494U (en) * 2018-10-02 2019-01-21 眾匯智能健康股份有限公司 System for providing corresponding service according to facial expression
CN111598133B (en) * 2020-04-22 2022-10-14 腾讯医疗健康(深圳)有限公司 Image display method, device, system, equipment and medium based on artificial intelligence
CN112465909B (en) * 2020-12-07 2022-09-20 南开大学 Class activation mapping target positioning method and system based on convolutional neural network
CN113225590B (en) * 2021-05-06 2023-04-14 深圳思谋信息科技有限公司 Video super-resolution enhancement method and device, computer equipment and storage medium
CN113610067B (en) * 2021-10-11 2021-12-28 北京工业大学 Emotional state display method, device and system
CN113610853B (en) * 2021-10-11 2022-01-28 北京工业大学 Emotional state display method, device and system based on resting brain function image

Also Published As

Publication number Publication date
WO2023060720A1 (en) 2023-04-20
CN113610067A (en) 2021-11-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant