CN115736922A - Emotion normalization monitoring system and method based on trusted environment - Google Patents

Emotion normalization monitoring system and method based on trusted environment

Info

Publication number
CN115736922A
CN115736922A (application CN202211432974.8A)
Authority
CN
China
Prior art keywords
emotion
pixel point
face
early warning
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211432974.8A
Other languages
Chinese (zh)
Inventor
翁彦
曾青
李钟旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shuzhi Tianan Technology Co ltd
Original Assignee
Beijing Shuzhi Tianan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shuzhi Tianan Technology Co ltd filed Critical Beijing Shuzhi Tianan Technology Co ltd
Priority to CN202211432974.8A priority Critical patent/CN115736922A/en
Publication of CN115736922A publication Critical patent/CN115736922A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses an emotion normalization monitoring system and method based on a trusted environment. The system includes: a camera for collecting a video stream of a human face; an emotion calculation gateway, which generates a plurality of video files from the video stream, acquires multiple frames of face images from each video file, processes those images, and obtains first time-series data of the amplitude and vibration frequency of each pixel point within the facial muscle group contours within a preset time, together with the vibration frequency of each pixel point within the contours of several head-related organs; the gateway then calculates second time-series data of the head-related resonance frequencies within the preset time and outputs emotion state data corresponding to the face; and an emotion early warning server, which analyzes the monitored emotion history data and issues graded early warnings for persons at higher risk of psychological problems. Both the emotion calculation gateway and the emotion early warning server support the trusted environment. The method and device can realize normalized emotion monitoring accurately, objectively, and safely.

Description

Emotion normalization monitoring system and method based on a trusted environment
Technical Field
The application belongs to the field of mental health monitoring, and particularly relates to an emotion normalization monitoring system and method based on a trusted environment.
Background
To truly reflect the daily emotions of a target person, the best approach is normalized (routine, continuous) emotion monitoring. Current techniques mainly rely either on wearable devices, such as electronic bracelets, or on non-contact normalized emotion monitoring using micro-expression recognition. An electronic bracelet monitors only a few emotion indicators, mainly stress derived from heart-rate variability, and it requires the wearer's active cooperation; if the wearer does not actively cooperate, monitoring quality degrades sharply, which greatly affects normalized monitoring and makes it hard to achieve the expected effect.
Content of application
The embodiment of the application aims to provide an emotion normalization monitoring system and method based on a trusted environment, so as to overcome the defect of poor monitoring effect in the prior art.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an emotion normalization monitoring system based on a trusted environment is provided, which includes:
the camera is used for acquiring a video stream of a human face;
the emotion calculation gateway is used for generating a plurality of video files according to the video stream, acquiring a plurality of frames of face images from each video file, processing the plurality of frames of face images, positioning a plurality of muscle groups of the face and the outlines of a plurality of organs related to the head, and acquiring first time sequence change data of the amplitude and the vibration frequency of each pixel point in the outlines of the muscle groups of the face within preset time and the vibration frequency of each pixel point in the outlines of the organs related to the head; calculating second time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point in the contour, and outputting emotion state data corresponding to the face according to the first time sequence change data and the second time sequence change data;
the emotion early warning server is used for analyzing based on the monitored emotion historical data and carrying out grading early warning on the personnel with higher risk of psychological problems;
and the emotion calculating gateway and the emotion early warning server both support the trusted environment.
In a second aspect, an emotion normalization monitoring method based on a trusted environment is provided, which includes the following steps:
collecting a video stream of a human face;
generating a plurality of video files according to the video stream, and acquiring a plurality of frames of face images from each video file;
processing the multi-frame face image, positioning the outlines of a plurality of muscle groups of the face and a plurality of organs related to the head, and acquiring first time sequence change data of the amplitude and the vibration frequency of each pixel point in the outline of the muscle group of the face within preset time and the vibration frequency of each pixel point in the outlines of the organs related to the head;
calculating second time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point in the contour, and outputting emotion state data corresponding to the face according to the first time sequence change data and the second time sequence change data;
and analyzing based on the monitored emotional historical data, and performing grading early warning on personnel with higher risk of psychological problems.
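The five steps above can be sketched end to end. Everything concrete in this sketch (function names, the scalar "emotion intensity" per frame, the fixed-size window standing in for a video file, the mean-threshold warning rule) is an illustrative assumption; the application discloses no APIs or formulas.

```python
# Minimal end-to-end sketch of the five monitoring steps.
# All concrete choices below are assumptions, not disclosed by the application.

def monitor_pipeline(frames, window=30):
    """frames (scalar intensities) -> per-clip emotion state -> (history, level)."""
    # Step 2 stand-in: segment the stream into clip-sized "video files".
    clips = [frames[i:i + window] for i in range(0, len(frames), window)]
    # Steps 3-4 stand-in: reduce each clip to one emotion-state value.
    history = [sum(clip) / len(clip) for clip in clips]
    # Step 5 stand-in: grade the warning by the overall mean of the history.
    mean = sum(history) / len(history)
    level = "high" if mean > 0.7 else "medium" if mean > 0.4 else "low"
    return history, level
```

A stream of 60 low-intensity frames, for example, yields two clip-level states and no warning.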
According to the embodiment of the application, the multiple frames of face images in the video file are processed, the emotion state data corresponding to the faces are output, normalized emotion monitoring can be accurately, objectively and safely achieved, and daily emotions of target people are truly reflected.
Drawings
Fig. 1 is a schematic structural diagram of an emotion normalization monitoring system based on a trusted environment according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a specific implementation manner of an emotion normalization monitoring system based on a trusted environment according to an embodiment of the present application;
FIG. 3 is a hardware architecture diagram of an emotion normalization monitoring system based on a trusted environment according to an embodiment of the present application;
fig. 4 is a flowchart of an emotion normalization monitoring method based on a trusted environment according to an embodiment of the present application;
fig. 5 is a schematic diagram of a specific implementation manner of the emotion normalization monitoring method based on a trusted environment according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Emotion monitoring of personnel involves information security, so the software and hardware of an emotion monitoring system should be autonomously controllable. At present, few emotion monitoring systems in China are developed on the basis of a trusted environment. Accordingly, the embodiment of the present application provides an emotion normalization monitoring system based on a trusted environment, as shown in fig. 1, including:
and the camera 110 is used for acquiring a video stream of a human face.
The emotion calculation gateway 120 is configured to generate a plurality of video files according to the video stream, acquire a plurality of frames of face images from each of the video files, process the plurality of frames of face images, locate a plurality of muscle groups of the face and outlines of a plurality of organs related to the head, and acquire first time sequence change data of amplitude and vibration frequency of each pixel point in the outlines of the muscle groups of the face within a preset time, and vibration frequency of each pixel point in the outlines of the organs related to the head; and calculating second time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point in the contour, and outputting emotion state data corresponding to the face according to the first time sequence change data and the second time sequence change data.
The emotion calculating gateway 120 includes:
the emotion calculation engine is used for positioning each pixel point in the facial muscle group contour on each frame of face image based on the muscle group contour definition model; calculating the amplitude and the frequency of each pixel point according to the displacement of each pixel point in the continuous face images, and acquiring first time sequence change data of the amplitude and the frequency in a preset time;
based on the human biological characteristics, positioning the outlines of the pupil, the eyes, the nose and the head on each frame of face image, and each pixel point in the outlines; and calculating the vibration frequency of each pixel point according to the displacement of each pixel point in the continuous face images.
Specifically, the emotion calculation engine is specifically configured to perform weighted average calculation on the vibration frequency of each pixel point respectively to obtain second time sequence change data of the pupil resonance frequency, the eyeball resonance frequency, the respiration resonance frequency and the head resonance frequency within a preset time.
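The weighted-average step can be sketched as follows. The per-pixel weights are an assumption; the application says the vibration frequencies are weighted-averaged per contour but does not say how the weights are chosen.

```python
def resonance_series(freqs_over_time, weights):
    """For each time step, the weighted average of the per-pixel vibration
    frequencies inside one organ contour gives that organ's resonance
    frequency; collecting these over the preset time yields the "second
    time-series data". Weights (assumed per-pixel confidences) are fixed."""
    total = sum(weights)
    return [sum(f * w for f, w in zip(freqs, weights)) / total
            for freqs in freqs_over_time]
```

Running this once per contour (pupil, eyeball, respiration, head) produces the four resonance-frequency series the engine outputs.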
The emotion early warning server 130 is used for analyzing the monitored emotion historical data and carrying out grading early warning on the personnel with higher risk of psychological problems;
specifically, the emotion early warning server 130 is specifically configured to generate emotion history data and emotion history trend of the target person corresponding to the face according to the emotion state data corresponding to the face; and under the condition that the emotion historical trend triggers an emotion early warning model, carrying out grading early warning on the target personnel.
The emotion calculation gateway 120 and the emotion early warning server 130 both support the trusted environment and operate in the environments of a domestic CPU and a domestic operating system.
According to the embodiment of the application, the multiple frames of face images in the video file are processed, the emotion state data corresponding to the faces are output, normalized emotion monitoring can be accurately, objectively and safely achieved, and daily emotions of target people are truly reflected.
In a specific implementation of this embodiment, the trusted-environment emotion normalization monitoring system includes a 30 frames/second camera, an emotion calculation gateway, and an emotion early warning server, as shown in fig. 2; both the gateway and the server support a domestic CPU and a domestic operating system. The camera collects video of the human head and connects to the emotion calculation gateway through a USB port, transmitting the video signal to the gateway. The gateway can process the video signals of four cameras simultaneously. It first transcribes the signals into video files, storing in each file only head video that meets the image-quality requirements; video with no face, or with a blurred face, is not transcribed. It then processes the video files in a distributed, rapid manner using a self-developed emotion calculation technique, analyzing the emotional condition of the persons in the files (the occurrence frequency, intensity, duration, and so on of various emotions), and transmits the emotion state data to the emotion early warning server over the network. The server analyzes the monitored emotion history data, comprehensively judges the emotions of the persons monitored by each camera, and issues graded early warnings for persons at high risk of psychological problems.
Specifically, the emotion calculation gateway is composed of a video file generation program, an emotion calculation engine, and a concurrent computation scheduler, and implements video file generation and concurrent emotion calculation. The emotion early warning server is composed of an emotion early warning generation program, an emotion early warning application program, and an external API, and generates and applies emotion early warning messages. Multiple emotion calculation gateways can be connected to the same emotion early warning server.
The emotion calculation gateway and the emotion early warning server both support the trusted environment and can operate in software and hardware environments such as a domestic CPU (central processing unit), a domestic operating system and the like.
The method and the device can be used for carrying out normalized monitoring on daily emotions of people, so that people who easily have psychological problems due to negative emotion accumulation can be found as early as possible, and the method and the device can be applied to non-contact and non-sensory screening of psychological problem people in groups such as special post people, window service personnel, students and soldiers.
In this embodiment, the system hardware comprises a small computer used as the emotion calculation gateway and a general-purpose server used as the emotion early warning server, as shown in fig. 3. One emotion calculation gateway can connect up to 4 USB cameras and concurrently performs emotion calculation on the 4 video streams. The gateway and the server are connected by a wired network, and a system may contain multiple gateways. Both the emotion calculation gateway and the emotion early warning server run on a domestic CPU and a domestic operating system; the adaptations completed at this stage are the Zhaoxin CPU and a domestic Chinese operating system.
By using a USB camera for close-range, one-to-one monitoring of the face, the system can guarantee video image quality. The system realizes emotion normalization monitoring with a non-real-time computing method; through dedicated video file processing and emotion calculation engine scheduling, it maximizes the use of computing resources, so emotion calculation can be completed on equipment costing around a thousand yuan. The system can therefore be deployed at scale, and emotion normalization monitoring can cover all personnel in a team. Long-term, non-contact, non-intrusive monitoring truly reflects a person's daily emotions; the whole system runs on the basis of the trusted environment and supports a domestic CPU and a domestic operating system; and single-face monitoring with a USB camera fully guarantees the quality of the face images, making the emotion monitoring accurate.
The emotion normalization monitoring method based on a trusted environment provided by the embodiment of the application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
As shown in fig. 4, a flowchart of an emotion normalization monitoring method based on a trusted environment provided in an embodiment of the present application is shown, where the method includes the following steps:
step 401, collecting a video stream of a human face.
Step 402, generating a plurality of video files according to the video stream, and acquiring a plurality of frames of face images from each video file.
Step 403, processing the multiple frames of face images, locating the contours of multiple muscle groups of the face and multiple organs related to the head, and acquiring first time sequence change data of the amplitude and the vibration frequency of each pixel point in the contour of the muscle group of the face within a preset time, and the vibration frequency of each pixel point in the contours of the multiple organs related to the head.
Specifically, each pixel point in the facial muscle group contour can be positioned on each frame of face image based on the muscle group contour definition model; calculating the amplitude and the frequency of each pixel point according to the displacement of each pixel point in the continuous face images, and acquiring first time sequence change data of the amplitude and the frequency in a preset time; based on human biological characteristics, positioning the outlines of pupils, eyes, noses and heads and all pixel points in the outlines on each frame of face image; and calculating the vibration frequency of each pixel point according to the displacement of each pixel point in the continuous face images.
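The application does not specify how amplitude and vibration frequency are estimated from a pixel's displacements across consecutive frames. One plausible sketch, assuming a fixed frame rate and taking the peak of the displacement trace's FFT magnitude spectrum, is:

```python
import numpy as np

def amplitude_and_frequency(displacements, fps=30.0):
    """Estimate one pixel's vibration amplitude and dominant frequency from
    its displacement trace across consecutive frames. The FFT-peak method
    is an assumption; the application does not name an estimator."""
    x = np.asarray(displacements, dtype=float)
    x = x - x.mean()                       # remove the static offset
    spec = np.abs(np.fft.rfft(x))          # single-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    k = int(np.argmax(spec[1:]) + 1)       # dominant bin, skipping DC
    amplitude = 2.0 * spec[k] / len(x)     # single-sided amplitude estimate
    return amplitude, freqs[k]
```

For a pure 5 Hz sinusoidal displacement sampled at 30 fps over 2 seconds, the estimate recovers both the unit amplitude and the 5 Hz frequency exactly (5 Hz falls on an FFT bin).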
Step 404, calculating second time sequence change data of the resonance frequency related to the head within a preset time according to the vibration frequency of each pixel point in the contour, and outputting emotion state data corresponding to the human face according to the first time sequence change data and the second time sequence change data.
Specifically, the vibration frequency of each pixel point may be respectively subjected to weighted average calculation to obtain second time sequence change data of the pupil resonance frequency, the eyeball resonance frequency, the respiration resonance frequency, and the head resonance frequency within a preset time.
And 405, analyzing based on the monitored emotional historical data, and performing grading early warning on the personnel with higher risk of the psychological problems.
In this embodiment, after the emotional state data corresponding to the face is output, the plurality of video files may be deleted, and emotional history data and an emotional history trend of a target person corresponding to the face may be generated according to the emotional state data corresponding to the face; and under the condition that the emotion historical trend triggers an emotion early warning model, carrying out grading early warning on the target personnel.
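A minimal sketch of such an "emotion early warning model" follows. Both the least-squares trend fit over the history and the slope thresholds are illustrative assumptions; the application does not disclose the model.

```python
def graded_warning(history, thresholds=(0.10, 0.05, 0.02)):
    """Map the trend of a negative-emotion history to a warning grade:
    1 = most severe, 3 = mildest, 0 = no warning. Trend = least-squares
    slope of the history over time (assumed; not disclosed)."""
    n = len(history)
    mean_x = (n - 1) / 2.0
    mean_y = sum(history) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    for grade, th in enumerate(thresholds, start=1):
        if slope >= th:
            return grade
    return 0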
According to the embodiment of the application, the multiple frames of face images in the video file are processed, the emotion state data corresponding to the faces are output, normalized emotion monitoring can be accurately, objectively and safely achieved, and daily emotions of target people are truly reflected.
In the embodiment of the present application, a specific implementation manner of the emotion normalization monitoring method based on the trusted environment is shown in fig. 5, and includes the following steps:
(1) The video stream is continuously acquired from a 30 frame/second USB camera.
(2) The video stream is processed frame by frame. When a face appears in the video, the number of pixels it occupies, its angle, and other conditions are checked; once the acquisition requirements are met, the frames are transcribed into a video file frame by frame, and transcription stops when the face disappears from the video. Each appearance of a face in front of the camera, from entry to exit, thus generates one video file. A person appears in front of the camera many times during study or work, generating a series of video files; when several cameras collect simultaneously, multiple video files are generated concurrently.
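The face-triggered transcription described in step (2) behaves like a small state machine. In the sketch below, each frame is pre-annotated with a face flag and a quality score; the score is a stand-in for the pixel-size and angle checks, since the application does not give concrete thresholds.

```python
def segment_videos(frames, min_quality=0.5):
    """Sketch of step (2): open a new "video file" when a usable face
    appears, append frames while it remains, close the file when it leaves.
    Each frame is (has_face: bool, quality: float); min_quality stands in
    for the undisclosed pixel-size/angle acquisition requirements."""
    files, current = [], None
    for has_face, quality in frames:
        usable = has_face and quality >= min_quality
        if usable:
            if current is None:
                current = []                  # face entered: open a file
            current.append((has_face, quality))  # transcribe frame by frame
        elif current is not None:
            files.append(current)             # face left: close the file
            current = None
    if current is not None:
        files.append(current)                 # stream ended mid-appearance
    return files
```

Two separate appearances of a face in the stream yield two video files, matching the "one file per appearance" behavior described above.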
(3) Each video file is input into the self-developed emotion calculation engine for emotion analysis and statistics. The engine comprises two algorithms: a facial muscle tremor emotion recognition algorithm and a head-organ multi-vibration-frequency emotion recognition algorithm. Because video files are generated concurrently, the concurrent computation scheduler launches multiple emotion calculation engines and automatically schedules concurrent processing so that the files are handled as quickly as possible. After a video file is processed, only the statistics of the person's emotions in that video are retained, and the video file itself is automatically deleted.
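The concurrent scheduling in step (3) can be sketched with a worker pool. Here `analyze_clip` is a stand-in for the proprietary engine, and thread workers are an assumption; the application does not describe the scheduler's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_clip(path):
    """Stand-in for the emotion calculation engine (the two recognition
    algorithms are self-developed and not disclosed); returns fake stats."""
    return {"file": path, "emotions": {"calm": 1.0}}

def schedule(video_files, workers=4):
    """Hypothetical concurrent-computation scheduler: fan the queued video
    files out to several engine workers and collect per-file statistics in
    input order. Per the description, the files would then be deleted."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_clip, video_files))
```

`Executor.map` preserves input order, so the collected statistics line up with the queued files regardless of which worker finished first.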
(4) Through the continuous monitoring of the emotion of the person, a large amount of emotion historical data of the person can be generated, the emotion historical trend of the person can be tracked for the person, and the emotion difference of a certain person and other persons in the group can be judged for the group. When the emotion trend of the person or the emotion difference of the group triggers the emotion early warning model, the person is subjected to graded early warning.
(5) Since early warning messages are generated automatically during daily normalized emotion monitoring, they need to be presented so that a manager can view them. By publishing the messages through a WEB server, a manager can open a browser to check them and trace the normalized emotion monitoring data to understand why a warning occurred.
(6) Emotion normalization monitoring is often used to provide a basis for other management systems, such as whether to be appropriate for post work, etc. And the emotion early warning message and the emotion monitoring data can be issued to a third party application through the API.
For example, when the system is applied to subway train drivers at work, students in psychology or computer classes, soldiers on sentry duty, and service-window staff, it can detect persons with abnormal emotions without disturbing their study or work, and automatically and accurately warn about persons at higher psychological risk. Moreover, the system is low-cost and convenient for large-scale application.
The embodiment of the application realizes emotion normalization monitoring based on video. It is non-contact, non-intrusive, objective, and accurate, and can truly reflect a person's daily emotions; it can identify expressed emotions such as anger, sadness, and joy, as well as deeper emotions such as stress, tension, and depression. The self-developed emotion calculation technique can analyze more than 20 emotions, making emotional abnormalities easier to detect, and the system analyzes the monitored emotion history data to issue automatic early warnings of emotional abnormality.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (10)

1. An emotion normalization monitoring system based on a trusted environment, comprising:
the camera is used for acquiring a video stream of a human face;
the emotion calculation gateway is used for generating a plurality of video files according to the video stream, acquiring a plurality of frames of face images from each video file, processing the plurality of frames of face images, positioning a plurality of muscle groups of the face and the outlines of a plurality of organs related to the head, and acquiring first time sequence change data of the amplitude and the vibration frequency of each pixel point in the outlines of the muscle groups of the face within preset time and the vibration frequency of each pixel point in the outlines of the organs related to the head; calculating second time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point in the contour, and outputting emotion state data corresponding to the face according to the first time sequence change data and the second time sequence change data;
the emotion early warning server is used for analyzing based on the monitored emotion historical data and carrying out grading early warning on the personnel with higher risk of psychological problems;
and the emotion calculating gateway and the emotion early warning server both support the trusted environment.
2. The system of claim 1, wherein the emotion calculation gateway comprises:
the emotion calculation engine is used for positioning each pixel point in the facial muscle group contour on each frame of face image based on the muscle group contour definition model; calculating the amplitude and the frequency of each pixel point according to the displacement of each pixel point in the continuous face image, and acquiring first time sequence change data of the amplitude and the frequency in a preset time;
based on the human biological characteristics, positioning the outlines of the pupil, the eyes, the nose and the head on each frame of face image, and each pixel point in the outlines; and calculating the vibration frequency of each pixel point according to the displacement of each pixel point in the continuous face images.
3. The system of claim 2,
and the emotion calculation engine is specifically configured to perform weighted average calculation on the vibration frequency of each pixel point respectively to obtain second time sequence change data of the pupil resonance frequency, the eyeball resonance frequency, the respiration resonance frequency and the head resonance frequency within a preset time.
4. The system of claim 1,
the emotion early warning server is specifically used for generating emotion historical data and emotion historical trend of a target person corresponding to the face according to the emotion state data corresponding to the face; and under the condition that the emotion historical trend triggers an emotion early warning model, carrying out grading early warning on the target personnel.
5. The system of claim 1, wherein the emotion calculation gateway and the emotion early warning server both support running in a domestic CPU and domestic operating system environment.
6. An emotion normalization monitoring method based on a trusted environment is characterized by comprising the following steps:
collecting a video stream of a human face;
generating a plurality of video files according to the video stream, and acquiring a plurality of frames of face images from each video file;
processing the multi-frame face image, positioning the outlines of a plurality of muscle groups of the face and a plurality of organs related to the head, and acquiring first time sequence change data of the amplitude and the vibration frequency of each pixel point in the outline of the muscle group of the face within preset time and the vibration frequency of each pixel point in the outlines of the organs related to the head;
calculating second time sequence change data of the resonance frequency related to the head within preset time according to the vibration frequency of each pixel point in the contour, and outputting emotion state data corresponding to the face according to the first time sequence change data and the second time sequence change data;
and analyzing based on the monitored emotional historical data, and performing grading early warning on personnel with higher risk of psychological problems.
7. The method according to claim 6, wherein processing the multiple frames of face images, locating the contours of the facial muscle groups and the head-related organs, and acquiring the first time-series data of the amplitude and vibration frequency of each pixel point within the facial muscle group contours over the preset time, together with the vibration frequency of each pixel point within the head-related organ contours, specifically comprises:
locating, based on a muscle-group contour definition model, each pixel point within the facial muscle group contours on each frame of face image; calculating the amplitude and vibration frequency of each pixel point from its displacement across consecutive face images, and acquiring the first time-series data of the amplitude and vibration frequency over the preset time;
and locating, based on human biological characteristics, the contours of the pupils, eyes, nose and head on each frame of face image, together with all pixel points within those contours; and calculating the vibration frequency of each pixel point from its displacement across consecutive face images.
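The claim specifies what is measured but not the estimator. A minimal sketch of one plausible reading: given a per-pixel displacement time series tracked across consecutive frames, take half the peak-to-peak displacement as the amplitude and the dominant FFT bin as the vibration frequency. The function name and the FFT-based approach are assumptions for illustration, not the patented method.

```python
import numpy as np

def amplitude_and_frequency(displacement, fps):
    """Estimate per-pixel vibration amplitude and dominant frequency from a
    displacement time series sampled at `fps` frames per second.
    `displacement` has shape (n_frames, n_pixels)."""
    d = displacement - displacement.mean(axis=0)       # remove static offset
    amplitude = (d.max(axis=0) - d.min(axis=0)) / 2.0  # half peak-to-peak
    spectrum = np.abs(np.fft.rfft(d, axis=0))
    spectrum[0] = 0.0                                  # ignore the DC component
    freqs = np.fft.rfftfreq(d.shape[0], d=1.0 / fps)
    frequency = freqs[np.argmax(spectrum, axis=0)]
    return amplitude, frequency
```

Collecting these estimates over successive windows of the preset time would yield the claimed first time-series data.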
8. The method according to claim 7, wherein calculating, from the vibration frequency of each pixel point within the contours, the second time-series data of the head-related resonance frequencies over the preset time comprises:
performing a weighted average over the vibration frequencies of the pixel points within each contour, to obtain second time-series data of the pupil, eyeball, respiration and head resonance frequencies over the preset time.
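The weighted-average step of claim 8 reduces to one line of linear algebra. A sketch, assuming per-pixel frequencies are already grouped by contour and that the weights (which the claim leaves unspecified) default to uniform:

```python
import numpy as np

def resonance_series(pixel_freqs, weights=None):
    """Weighted average of per-pixel vibration frequencies within one organ
    contour. `pixel_freqs` has shape (n_windows, n_pixels); returns one
    resonance value per time window."""
    if weights is None:
        weights = np.ones(pixel_freqs.shape[1])  # uniform weighting by default
    weights = weights / weights.sum()            # normalize to sum to 1
    return pixel_freqs @ weights

# Applied separately per contour, e.g. (hypothetical inputs):
# pupil_resonance = resonance_series(pupil_pixel_freqs)
# head_resonance = resonance_series(head_pixel_freqs)
```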
9. The method according to claim 6, further comprising, after outputting the emotional state data corresponding to the face according to the first time-series data and the second time-series data:
generating emotion history data and an emotion history trend for the target person corresponding to the face, according to the emotional state data;
and issuing a graded early warning for the target person when the emotion history trend triggers an emotion early warning model.
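The patent does not disclose the form of the emotion early warning model. One simple reading of "graded early warning" is a banded threshold rule on a trend score; the function, the score range, and the threshold values below are all illustrative assumptions:

```python
def warning_grade(trend, thresholds=(0.3, 0.6, 0.8)):
    """Map an emotion-history trend score in [0, 1] to a warning grade:
    0 = no warning, 1..3 = escalating early-warning grades.
    The thresholds are hypothetical, not values from the patent."""
    grade = 0
    for t in thresholds:
        if trend >= t:
            grade += 1
    return grade
```

An emotion early warning server could then route grade 1 to routine follow-up and higher grades to escalated intervention.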
10. The method according to claim 6, further comprising, after outputting the emotional state data corresponding to the face according to the first time-series data and the second time-series data:
deleting the plurality of video files.
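Claim 10's deletion step is what keeps raw footage from persisting once the emotional state data has been derived. A minimal sketch, assuming the segmented video files sit in one directory and share an extension (both assumptions; the patent specifies neither):

```python
from pathlib import Path

def delete_video_files(directory, pattern="*.mp4"):
    """Remove processed video segments so no raw footage remains after the
    emotional state data has been produced. Returns the deleted file names."""
    removed = []
    for f in sorted(Path(directory).glob(pattern)):
        f.unlink()
        removed.append(f.name)
    return removed
```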
CN202211432974.8A 2022-11-16 2022-11-16 Emotion normalization monitoring system and method based on trusted environment Pending CN115736922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211432974.8A CN115736922A (en) 2022-11-16 2022-11-16 Emotion normalization monitoring system and method based on trusted environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211432974.8A CN115736922A (en) 2022-11-16 2022-11-16 Emotion normalization monitoring system and method based on trusted environment

Publications (1)

Publication Number Publication Date
CN115736922A true CN115736922A (en) 2023-03-07

Family

ID=85371943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211432974.8A Pending CN115736922A (en) 2022-11-16 2022-11-16 Emotion normalization monitoring system and method based on trusted environment

Country Status (1)

Country Link
CN (1) CN115736922A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311510A (en) * 2023-03-08 2023-06-23 广东兆邦智能科技股份有限公司 Emotion detection method and system based on image acquisition
CN116311510B (en) * 2023-03-08 2024-05-31 广东兆邦智能科技股份有限公司 Emotion detection method and system based on image acquisition
CN116077062A (en) * 2023-04-10 2023-05-09 中国科学院自动化研究所 Psychological state perception method and system and readable storage medium

Similar Documents

Publication Publication Date Title
CN115736922A (en) Emotion normalization monitoring system and method based on trusted environment
Anderson et al. Recurrence quantification analysis of eye movements
KR102262890B1 (en) Reading ability improvement training apparatus for providing training service to improve reading ability in connection with reading ability diagnosis apparatus based on eye tracking and apparatus for providing service comprising the same
CN109875579A (en) Emotional health management system and emotional health management method
CN112957042B (en) Non-contact target emotion recognition method and system
KR20210019266A (en) Apparatus and method for diagnosis of reading ability based on machine learning using eye tracking
WO2021068781A1 (en) Fatigue state identification method, apparatus and device
WO2018151628A1 (en) Algorithm for complex remote non-contact multichannel analysis of a psycho-emotional and physiological condition of a subject from audio and video content
CN112990794B (en) Video conference quality detection method, system, storage medium and electronic equipment
US20220383896A1 (en) System and method for collecting behavioural data to assist interpersonal interaction
CN113662545A (en) Personality assessment method based on emotion electroencephalogram signals and multitask learning
CN113421630A (en) Intelligent management method and system for physical and mental health
CN110364260A (en) Autism earlier evaluations apparatus and system based on indicative language paradigm
CN110047518A (en) A kind of speech emotional analysis system
CN116383618A (en) Learning concentration assessment method and device based on multi-mode data
CN112036328A (en) Bank customer satisfaction calculation method and device
Husom et al. Machine learning for fatigue detection using Fitbit fitness trackers
CN110459296A (en) Information-pushing method and Related product
CN114999655A (en) Epidemic prevention psychological problem early warning and management system based on big data
CN113921098A (en) Medical service evaluation method and system
Kugurakova et al. Neurotransmitters level detection based on human bio-signals, measured in virtual environments
RU2813438C1 (en) System and method for identifying and using reference emotional-intellectual profile (ei-profile) by analysis groups
CN215503045U (en) Cognitive psychological receptor based on visual perception
CN117547271B (en) Psychological diathesis intelligent evaluation analyzer
CN117975545B (en) Communication module calling system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination