CN113812948A - Dequantization anxiety and depression psychological detection method and device - Google Patents

Dequantization anxiety and depression psychological detection method and device

Info

Publication number
CN113812948A
CN113812948A (application CN202111052318.0A)
Authority
CN
China
Prior art keywords
anxiety
depression
data set
audio
training data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111052318.0A
Other languages
Chinese (zh)
Inventor
黄雅婷
黄杰
孙晓
汪萌
吴枫
康宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Original Assignee
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Artificial Intelligence of Hefei Comprehensive National Science Center filed Critical Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority to CN202111052318.0A
Publication of CN113812948A
Legal status: Pending

Classifications

    • A61B 5/00 (Measuring for diagnostic purposes; identification of persons), under A61B (Diagnosis; surgery; identification), A61 (Medical or veterinary science; hygiene), and A (Human necessities)
    • A61B 5/16: Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/0033: Features or image-related aspects of imaging apparatus classified in A61B 5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/4803: Speech analysis specially adapted for diagnostic purposes (under A61B 5/48, Other medical applications)
    • A61B 5/7235: Details of waveform analysis (under A61B 5/72, Signal processing specially adapted for physiological signals or for diagnostic purposes)
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Abstract

The invention discloses a de-quantized (scale-free) anxiety and depression psychological detection method and device, belonging to the technical field of psychological measurement. The method comprises: acquiring audio-video data of a test subject during a psychological interview, and extracting video data and audio data from the audio-video data; inputting the video data into a pre-constructed regression model to obtain an anxiety and depression prediction value based on the video modality; inputting the audio data into a pre-constructed linear regressor to obtain an anxiety and depression prediction value based on the audio modality; and fusing the video-modality prediction value and the audio-modality prediction value to obtain the test subject's psychological anxiety and depression assessment result. The method removes psychological scales from the assessment process, can accurately and truthfully reflect an individual's degree of anxiety and depression, and is more convenient, faster, and easier to popularize than traditional psychometric methods that rely on scales alone.

Description

Dequantization anxiety and depression psychological detection method and device
Technical Field
The invention relates to the technical field of psychological measurement, and in particular to a de-quantized (scale-free) anxiety and depression psychological detection method and device.
Background
Currently, in clinical settings, the main diagnostic instruments for assessing the anxiety of outpatients or inpatients are rating scales. The most widely used psychological assessment tools are the Hamilton Anxiety Scale (HAMA), the Hamilton Depression Scale (HAMD), and the commonly used Self-Rating Depression Scale (SDS) and Self-Rating Anxiety Scale (SAS). HAMA and HAMD require two specialized doctors to assess the patient's condition, which is time-consuming and labor-intensive.
Alternatively, in the biomedical field, reagents are used to identify and diagnose anxiety and depression. Although this provides an important basis for targeted treatment from the perspective of genetics and pharmacology, diagnostic results are difficult to obtain quickly in practical clinical applications. Diagnosis can also be performed based on electroencephalogram (EEG) data, but the EEG must first be measured and then analyzed, so the measurement cost is high and the approach is difficult to apply universally, especially outside clinical environments.
Conventional psychological detection methods and biological reagent tests involve complicated procedures, require professional operators, and cannot deliver rapid, integrated results. Because anxiety and depression are strongly influenced by the external environment and are unstable psychological states, patients usually need timely follow-up visits. An integrated, simple, easy-to-operate, and rapid monitoring device is therefore of great significance for real-time consultation and follow-up of psychological problems.
Accordingly, researchers have developed video-analysis-based techniques for monitoring an individual's psychological state from video information. However, much of this research uses only single-modality recognition, and misdiagnosis easily occurs when recognition accuracy and model sensitivity are low.
Disclosure of Invention
The invention aims to overcome the defects in the background art and to provide an anxiety and depression psychological detection method that is broadly applicable and yields accurate detection results.
To achieve the above objects, in one aspect, a de-quantized anxiety and depression psychological detection method is provided, comprising:
acquiring audio-video data of a test subject participating in a psychological interview, and extracting video data and audio data from the audio-video data;
inputting the video data into a pre-constructed regression model to obtain an anxiety and depression prediction value based on the video modality;
inputting the audio data into a pre-constructed linear regressor to obtain an anxiety and depression prediction value based on the audio modality;
and fusing the video-modality prediction value and the audio-modality prediction value to obtain the test subject's psychological anxiety and depression assessment result.
Further, the construction process of the regression model and the linear regressor comprises the following steps:
collecting a psychological scale from each participant and collecting the participant's facial video data during scale collection;
extracting a first training data set and a second training data set based on the participants' psychological scales and facial video data;
establishing the regression model using the first training data set;
and establishing the linear regressor using the second training data set.
Further, the extracting of a first training data set and a second training data set based on the participants' psychological scales and facial video data comprises:
calculating the anxiety and depression score value corresponding to each participant according to the participant's psychological scale;
extracting the multi-modal features corresponding to each participant from the participant's facial video data, wherein the multi-modal features comprise facial features, facial key points, eye gaze angles, facial action unit features, and audio data;
and, for each participant, constructing the first training data set by taking the participant's facial features, facial key points, eye gaze angles, facial action unit features, and anxiety and depression score value as a first data item, and constructing the second training data set by taking the participant's audio data and anxiety and depression score value as a second data item.
Further, the establishing of the regression model using the first training data set comprises:
taking the facial features, facial key points, eye gaze angles, and facial action unit features in the first training data set as the input of a convolutional neural network, taking the anxiety and depression score values as labels, and performing nonlinear fitting on the multi-modal features with the convolutional neural network to establish the regression model.
Further, the establishing of the linear regressor using the second training data set comprises:
extracting Mel-frequency cepstral coefficients of the audio data in the second training data set;
and establishing the linear regressor by taking the Mel-frequency cepstral coefficients as the input of a recurrent neural network and the anxiety and depression score values in the second training data set as labels.
In a second aspect, a de-quantized anxiety and depression psychological detection device comprises an audio-video acquisition device, a client, and a server, with a regression model and a linear regressor deployed on the server, wherein:
the audio-video acquisition device is used for acquiring audio-video data of a test subject participating in a psychological interview and extracting video data and audio data from the audio-video data;
and the client, connected to the server, processes the video data and the audio data with the regression model and the linear regressor respectively to obtain an anxiety and depression prediction value based on the video modality and an anxiety and depression prediction value based on the audio modality, and fuses the two prediction values to obtain the test subject's psychological anxiety and depression assessment result.
Further, the construction process of the regression model and the linear regressor comprises the following steps:
collecting a psychological scale from each participant and collecting the participant's facial video data during scale collection;
extracting a first training data set and a second training data set based on the participants' psychological scales and facial video data;
establishing the regression model using the first training data set;
and establishing the linear regressor using the second training data set.
Further, the extracting of a first training data set and a second training data set based on the participants' psychological scales and facial video data comprises:
calculating the anxiety and depression score value corresponding to each participant according to the participant's psychological scale;
extracting the multi-modal features corresponding to each participant from the participant's facial video data, wherein the multi-modal features comprise facial features, facial key points, eye gaze angles, facial action unit features, and audio data;
and, for each participant, constructing the first training data set by taking the participant's facial features, facial key points, eye gaze angles, facial action unit features, and anxiety and depression score value as a first data item, and constructing the second training data set by taking the participant's audio data and anxiety and depression score value as a second data item.
Further, the establishing of the regression model using the first training data set comprises:
taking the facial features, facial key points, eye gaze angles, and facial action unit features in the first training data set as the input of a convolutional neural network, taking the anxiety and depression score values as labels, and performing nonlinear fitting on the multi-modal features with the convolutional neural network to establish the regression model.
Further, the establishing of the linear regressor using the second training data set comprises:
extracting Mel-frequency cepstral coefficients of the audio data in the second training data set;
and establishing the linear regressor by taking the Mel-frequency cepstral coefficients as the input of a recurrent neural network and the anxiety and depression score values in the second training data set as labels.
Compared with the prior art, the invention has the following technical effects: through the non-contact, multi-modal audio-video feature assessment method and device, the invention realizes scale-free (de-quantized) psychological assessment, can accurately and truthfully reflect an individual's degree of anxiety and depression, and is more convenient, faster, and easier to popularize than traditional psychometric methods that rely on scales alone.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
FIG. 1 is a flow chart of the de-quantized anxiety and depression psychological detection method;
FIG. 2 is a schematic diagram of participant information collection;
FIG. 3 is a flow chart of participant information collection;
FIG. 4 is a flow chart of the processing of multi-modal video information.
Detailed Description
To further illustrate the features of the present invention, refer to the following detailed description of the invention and the accompanying drawings. The drawings are for reference and illustration purposes only and are not intended to limit the scope of the present disclosure.
As shown in FIGS. 1 to 4, this embodiment discloses a de-quantized anxiety and depression psychological detection method, which includes the following steps S1 to S4:
S1, acquiring audio-video data of a test subject participating in a psychological interview, and extracting video data and audio data from the audio-video data (a minimal extraction sketch follows this list);
S2, inputting the video data into a pre-constructed regression model to obtain an anxiety and depression prediction value based on the video modality;
S3, inputting the audio data into a pre-constructed linear regressor to obtain an anxiety and depression prediction value based on the audio modality;
S4, fusing the video-modality prediction value and the audio-modality prediction value to obtain the test subject's psychological anxiety and depression assessment result.
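As an illustration of step S1, the sketch below splits a recorded interview file into a separate video stream and a 16 kHz mono audio track using the ffmpeg command-line tool; the file names and audio parameters are assumptions for illustration, not values taken from the patent.
```python
import subprocess

def split_av(src: str, video_out: str = "video.mp4", audio_out: str = "audio.wav") -> None:
    """Split an audio-video recording into its video and audio components.
    Output names and the 16 kHz mono format are illustrative choices."""
    # Copy the video stream without re-encoding, dropping the audio track.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-an", "-c:v", "copy", video_out], check=True)
    # Extract the audio track as 16 kHz mono WAV for later MFCC extraction.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vn", "-ac", "1", "-ar", "16000", audio_out], check=True)

split_av("interview_recording.mp4")  # hypothetical input file
```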
It should be noted that, before the test, the test subject opens the capture camera of the acquisition client software installed on a PC and clicks Start. The subject records a 15-second self-introduction to the camera and submits the video. The system then automatically identifies the subject's anxiety and depression states, makes an assessment, and outputs the result. The whole assessment process is integrated and completed quickly, and the result can be used clinically as a reference for expert diagnosis.
As a further preferred technical solution, the process of constructing the regression model and the linear regressor includes the following steps S01 to S04:
s01, collecting a psychological scale of the participant and collecting face video data of the participant in the psychological scale process, wherein the process specifically comprises the following steps:
(1) Selecting participants
Adult participants who voluntarily agreed to psychological assessment were recruited in the outpatient department of the third hospital, 300 persons in total: 150 men and 150 women. A further 100 participants older than 18 were recruited in other settings (e.g., schools): 50 men and 50 women. Volunteers recruited in these other settings are expected to have significantly lower anxiety and depression levels than the participants from the hospital, which balances the data distribution. All participants were evenly distributed over three age ranges: 18-40 years, 40-60 years, and over 60 years.
(2) Data acquisition
Psychological scale data acquisition: a psychologist with rich clinical experience first talks with the patient, diagnoses the patient in interview form, and records the diagnostic result. The patient is then taken to another independent consulting room to communicate with a second expert in interview form; this expert helps the patient fill in the HAMA and HAMD according to the patient's actual condition and records the scale results. The assessments of the two experts are then combined, and the patient is graded into one of four categories ranging from no anxiety or depression to major anxiety or depression. If the two experts' diagnoses deviate, the measurement process is repeated until they agree.
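As a rough illustration of how the combined expert assessments might be turned into the four grades mentioned above, the sketch below sums the item scores of a scale and maps the total to a grade; the cut-off values and intermediate grade names are hypothetical placeholders, not thresholds disclosed in the patent.
```python
from typing import Sequence

def scale_total(item_scores: Sequence[int]) -> int:
    """Total score of a rating scale (e.g. HAMA or HAMD) as the sum of its items."""
    return sum(item_scores)

def severity_grade(total: int) -> str:
    """Map a scale total to one of four coarse grades; thresholds are illustrative only."""
    if total < 7:
        return "no anxiety or depression"
    if total < 17:
        return "mild anxiety or depression"
    if total < 24:
        return "moderate anxiety or depression"
    return "major anxiety or depression"

print(severity_grade(scale_total([2, 1, 3, 0, 2, 1, 4])))  # hypothetical item scores
```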
Video information acquisition: during the interview between each participant and the two experts, cameras record the participant's body posture and facial expression changes throughout; each participant has at least 10 minutes of recording time. All participants had to sign an informed consent form before data collection and have the right to quit at any time.
It should be noted that two cameras are needed to record video information during the interviews with the two experts. The camera resolution should be 720p or above with a frame rate of 30 fps; the face must not be occluded and a mask must not be worn (wearing glasses does not affect the test result). The ambient light intensity should be greater than 300 lux, the video should be recorded in an environment with no or low flicker as far as possible, and the light on the face should be as uniform as possible without excessive shadow occlusion.
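The recording requirements above (resolution and frame rate) can be checked programmatically; a minimal sketch with OpenCV follows, where the thresholds mirror the figures stated in this paragraph and the file path is hypothetical.
```python
import cv2

def check_recording(path: str) -> bool:
    """Return True if the video is at least 720p and roughly 30 fps."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        return False
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    return height >= 720 and width >= 1280 and fps >= 29

print(check_recording("camera_1_interview.mp4"))  # hypothetical file name
```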
The expert logs in to the front-end interface of the acquisition terminal and performs the scale assessment with the psychological scales HAMA and HAMD by asking the participants the questions one by one. After acquisition is finished, the uploaded scales are matched one-to-one with the video information at the service back end.
S02, extracting a first training data set and a second training data set based on the participants' psychological scales and facial video data, wherein the process comprises the following steps:
the method comprises the steps of sorting and recording the scale data obtained by each participant and calculating corresponding anxiety and depression score values, uniformly framing the face video data of the participants, rotationally aligning the face in the video by using an open source tool openface, and extracting multi-modal characteristics, wherein the multi-modal characteristics mainly comprise the face characteristic data of the participants, the key point characteristics of the face, the face activity units, the audio data of the participants, the limb activity of the participants and the like, and then correspondingly storing the data of each mode and the data obtained by the scales so as to facilitate subsequent data processing;
for each participant, the first training data set is constructed by taking the participant's facial features, facial key points, eye gaze angles, facial action unit features, and anxiety and depression score value as a first data item;
and the second training data set is constructed by taking the participant's audio data and anxiety and depression score value as a second data item.
S03, establishing the regression model by using the first training data set, including:
and taking the single-frame face features, face key points, eye gaze distribution and facial movement unit feature in the first training data set as an input layer of a convolutional neural network, taking the score of the depression anxiety mood in the first training data set as a label, performing nonlinear fitting on the features by using the convolutional neural network, and establishing a regression model so as to output the specific score of the anxiety and depression.
It should be noted that regression analysis is a statistical method for determining the quantitative relationship between two or more variables. Simply put, given a data set x and its corresponding true values y1, regression fits these data into a functional relationship y2 = g(x). The fit is of course not perfect, so there is an error y2 - y1, i.e., the fitted value minus the true value. In this embodiment the function g is fitted by a convolutional neural network.
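A minimal sketch of such a CNN-based regressor in PyTorch: a per-frame feature vector (facial key points, gaze angles, action-unit intensities) goes in, a single anxiety/depression score comes out, and the network is fitted by minimising the squared error between g(x) and the scale label. The feature width and layer sizes are assumptions for illustration, not the patent's architecture.
```python
import torch
import torch.nn as nn

class VideoRegressor(nn.Module):
    """1-D CNN mapping a per-frame feature vector to one anxiety/depression score."""
    def __init__(self, feat_dim: int = 714):  # feature width is an assumption
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(32 * 8, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, feat_dim)
        z = self.conv(x.unsqueeze(1))                      # add channel axis -> (batch, 1, feat_dim)
        return self.head(z.flatten(1)).squeeze(-1)

model = VideoRegressor()
criterion = nn.MSELoss()                                   # regression: minimise (g(x) - y1)^2
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 714)                                    # dummy batch of frame features
y = torch.tensor([12.0, 30.0, 7.0, 45.0])                  # dummy scale scores as labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```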
S04, establishing the linear regressor by using the second training data set, wherein the process comprises the following steps:
The audio analysis toolkit pyAudioAnalysis is used to extract Mel-frequency cepstral coefficient (MFCC) and log filter-bank (logfbank) features from the audio data in the second training data set. The extracted features are taken as the input of a recurrent neural network (GRU), and the scale score values of the collected data are taken as labels to establish the linear regressor, which predicts a specific anxiety and depression score value from the speech features of the audio modality as the regression result.
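A minimal sketch of the audio branch, using librosa as a stand-in for pyAudioAnalysis to compute the MFCC sequence and a GRU regression head in PyTorch; the file name, sampling rate, and layer sizes are illustrative assumptions.
```python
import librosa
import torch
import torch.nn as nn

# librosa stands in for pyAudioAnalysis here; the file name and 16 kHz rate are assumptions.
signal, sr = librosa.load("participant_01.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)        # shape (13, n_frames)
seq = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)   # (1, n_frames, 13)

class AudioRegressor(nn.Module):
    """GRU over the MFCC sequence followed by a linear head that outputs one score."""
    def __init__(self, n_mfcc: int = 13, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)                 # h: (num_layers, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

audio_score = AudioRegressor()(seq)        # predicted anxiety/depression score (untrained)
print(float(audio_score))
```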
As a more preferred embodiment, step S4, fusing the video-modality anxiety and depression prediction value and the audio-modality anxiety and depression prediction value to obtain the test subject's psychological anxiety and depression assessment result, specifically comprises: taking the average of the video-modality prediction value and the audio-modality prediction value as the final anxiety and depression prediction for the video.
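The fusion step itself is just the arithmetic mean of the two modality-specific predictions, for example:
```python
def fuse(video_score: float, audio_score: float) -> float:
    """Final anxiety/depression prediction: mean of the two modality-specific predictions."""
    return 0.5 * (video_score + audio_score)

print(fuse(18.0, 22.0))  # -> 20.0, illustrative values
```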
It should be noted that after the initial measurement and model training, scale-free (de-quantized) analysis can be achieved: subsequent psychological detection of test subjects no longer requires scale measurements. Only a video of the subject's interview is needed, and the video content only needs to be 15 seconds long, yet it can accurately and truthfully reflect the individual's degree of anxiety and depression. Compared with the traditional psychometric method that measures only with scales, this is more convenient, faster, and easier to popularize.
This embodiment also discloses a de-quantized anxiety and depression psychological detection device, which comprises an audio-video acquisition device, a client, and a server, with a regression model and a linear regressor deployed on the server, wherein:
the audio-video acquisition device is used for acquiring audio-video data of a test subject participating in a psychological interview and extracting video data and audio data from the audio-video data;
and the client, connected to the server, processes the video data and the audio data with the regression model and the linear regressor respectively to obtain an anxiety and depression prediction value based on the video modality and an anxiety and depression prediction value based on the audio modality, and fuses the two prediction values to obtain the test subject's psychological anxiety and depression assessment result.
It should be noted that, in this embodiment, the pre-trained regression model and linear regressor are deployed on a remote server; the local machine only needs to connect the acquisition client to the models on the remote server to perform contact-free, inquiry-free anxiety and depression detection on the patient.
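A minimal sketch of how such an acquisition client might upload a recorded interview to the server-side models over HTTP; the endpoint URL, field name, and response format are hypothetical, not part of the patent.
```python
import requests

def request_assessment(video_path: str,
                       server_url: str = "http://assessment-server.example/predict") -> dict:
    """Upload the recorded interview and return the server's fused prediction."""
    with open(video_path, "rb") as f:
        resp = requests.post(server_url, files={"recording": f}, timeout=120)
    resp.raise_for_status()
    return resp.json()

result = request_assessment("interview_recording.mp4")  # hypothetical file
print(result)
```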
The device submits an individual's video online through the client, analyzes the video directly, and obtains the individual's anxiety and depression measurement result. It is convenient and portable, can be deployed at scale or used to measure and evaluate large amounts of video data, and offers high prediction accuracy and high assessment efficiency. After the initial experiment, once the prediction model and system are fully established, little additional data is needed for subsequent products, and the usage process is simple and easy to operate. This solves the problem that psychological measurement and assessment of visitors in outpatient clinics or psychological counseling often consumes a great deal of time and imposes a heavy workload on medical staff and psychologists.
The apparatus provided in this embodiment of the present invention is used to execute the above method embodiments; for details of the process, reference is made to the above embodiments, which are not repeated here.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A de-quantized anxiety and depression psychological detection method, comprising:
acquiring audio-video data of a test subject participating in a psychological interview, and extracting video data and audio data from the audio-video data;
inputting the video data into a pre-constructed regression model to obtain an anxiety and depression prediction value based on the video modality;
inputting the audio data into a pre-constructed linear regressor to obtain an anxiety and depression prediction value based on the audio modality;
and fusing the video-modality prediction value and the audio-modality prediction value to obtain the test subject's psychological anxiety and depression assessment result.
2. The de-quantized anxiety and depression psychological detection method according to claim 1, wherein the process of constructing the regression model and the linear regressor comprises:
collecting a psychological scale from each participant and collecting the participant's facial video data during scale collection;
extracting a first training data set and a second training data set based on the participants' psychological scales and facial video data;
establishing the regression model using the first training data set;
and establishing the linear regressor using the second training data set.
3. The de-quantized anxiety and depression psychological detection method according to claim 2, wherein the extracting of a first training data set and a second training data set based on the participants' psychological scales and facial video data comprises:
calculating the anxiety and depression score value corresponding to each participant according to the participant's psychological scale;
extracting the multi-modal features corresponding to each participant from the participant's facial video data, wherein the multi-modal features comprise facial features, facial key points, eye gaze angles, facial action unit features, and audio data;
and, for each participant, constructing the first training data set by taking the participant's facial features, facial key points, eye gaze angles, facial action unit features, and anxiety and depression score value as a first data item, and constructing the second training data set by taking the participant's audio data and anxiety and depression score value as a second data item.
4. The de-quantized anxiety and depression psychological detection method according to claim 3, wherein the establishing of the regression model using the first training data set comprises:
taking the facial features, facial key points, eye gaze angles, and facial action unit features in the first training data set as the input of a convolutional neural network, taking the anxiety and depression score values as labels, and performing nonlinear fitting on the multi-modal features with the convolutional neural network to establish the regression model.
5. The de-quantized anxiety and depression psychological detection method according to claim 3, wherein the establishing of the linear regressor using the second training data set comprises:
extracting Mel-frequency cepstral coefficients of the audio data in the second training data set;
and establishing the linear regressor by taking the Mel-frequency cepstral coefficients as the input of a recurrent neural network and the anxiety and depression score values in the second training data set as labels.
6. A de-quantized anxiety and depression psychological detection device, characterized by comprising an audio-video acquisition device, a client, and a server, with a regression model and a linear regressor deployed on the server, wherein:
the audio-video acquisition device is used for acquiring audio-video data of a test subject participating in a psychological interview and extracting video data and audio data from the audio-video data;
and the client, connected to the server, processes the video data and the audio data with the regression model and the linear regressor respectively to obtain an anxiety and depression prediction value based on the video modality and an anxiety and depression prediction value based on the audio modality, and fuses the two prediction values to obtain the test subject's psychological anxiety and depression assessment result.
7. The de-quantized anxiety and depression psychological detection device according to claim 6, wherein the process of constructing the regression model and the linear regressor comprises:
collecting a psychological scale from each participant and collecting the participant's facial video data during scale collection;
extracting a first training data set and a second training data set based on the participants' psychological scales and facial video data;
establishing the regression model using the first training data set;
and establishing the linear regressor using the second training data set.
8. The de-quantized anxiety and depression psychological detection device according to claim 7, wherein the extracting of a first training data set and a second training data set based on the participants' psychological scales and facial video data comprises:
calculating the anxiety and depression score value corresponding to each participant according to the participant's psychological scale;
extracting the multi-modal features corresponding to each participant from the participant's facial video data, wherein the multi-modal features comprise facial features, facial key points, eye gaze angles, facial action unit features, and audio data;
and, for each participant, constructing the first training data set by taking the participant's facial features, facial key points, eye gaze angles, facial action unit features, and anxiety and depression score value as a first data item, and constructing the second training data set by taking the participant's audio data and anxiety and depression score value as a second data item.
9. The de-quantized anxiety and depression psychological detection device according to claim 8, wherein the establishing of the regression model using the first training data set comprises:
taking the facial features, facial key points, eye gaze angles, and facial action unit features in the first training data set as the input of a convolutional neural network, taking the anxiety and depression score values as labels, and performing nonlinear fitting on the multi-modal features with the convolutional neural network to establish the regression model.
10. The de-quantized anxiety and depression psychological detection device according to claim 8, wherein the establishing of the linear regressor using the second training data set comprises:
extracting Mel-frequency cepstral coefficients of the audio data in the second training data set;
and establishing the linear regressor by taking the Mel-frequency cepstral coefficients as the input of a recurrent neural network and the anxiety and depression score values in the second training data set as labels.
CN202111052318.0A 2021-09-08 2021-09-08 Dequantization anxiety and depression psychological detection method and device Pending CN113812948A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052318.0A CN113812948A (en) 2021-09-08 2021-09-08 Dequantization anxiety and depression psychological detection method and device


Publications (1)

Publication Number Publication Date
CN113812948A 2021-12-21

Family

ID=78914230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052318.0A Pending CN113812948A (en) 2021-09-08 2021-09-08 Dequantization anxiety and depression psychological detection method and device

Country Status (1)

Country Link
CN (1) CN113812948A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109171769A (en) * 2018-07-12 2019-01-11 西北师范大学 It is a kind of applied to depression detection voice, facial feature extraction method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination