CN110507335B - Multi-mode information based criminal psychological health state assessment method and system - Google Patents


Info

Publication number
CN110507335B
CN110507335B (application CN201910784156.6A; published as CN110507335A, granted as CN110507335B)
Authority
CN
China
Prior art keywords
features
voice
facial expression
physiological
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910784156.6A
Other languages
Chinese (zh)
Other versions
CN110507335A (en)
Inventor
刘治
姚佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Botu Information Technology Co ltd
Shandong University
Original Assignee
Jinan Botu Information Technology Co ltd
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Botu Information Technology Co ltd and Shandong University
Priority to CN201910784156.6A
Publication of CN110507335A
Application granted
Publication of CN110507335B
Legal status: Active


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety

Abstract

The invention discloses a method and system for assessing the psychological health state of prisoners based on multi-modal information. The method comprises: acquiring physiological signals, facial expression images and voice signals of a well-reformed prisoner and of a prisoner to be assessed after each experiences a virtual reality scene, and extracting physiological signal features, facial expression image features and voice signal features from the acquired signals; inputting the features of the reformed prisoner into a pre-trained neural network model, and outputting the psychological state evaluation vector of the reformed prisoner; inputting the features of the prisoner to be assessed into the same pre-trained neural network model, and outputting the psychological state evaluation vector of the prisoner to be assessed; calculating the distance between the two psychological state evaluation vectors; and evaluating the psychological health state of the prisoner to be assessed according to the distance.

Description

Multi-mode information based criminal psychological health state assessment method and system
Technical Field
The disclosure relates to a criminal psychological health state assessment method and system based on multi-mode information.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Psychological hygiene of criminals refers to the measures and methods used to maintain prisoners' psychological health while they serve their sentences and to reduce and avoid the occurrence of psychological illness. It mainly comprises: (1) optimizing the prison environment; (2) helping and guiding criminals to learn to correctly apply self-adjustment mechanisms so that they can regulate their own psychology in time and avoid psychological imbalance; (3) establishing a psychological counseling and treatment mechanism to promptly counsel and treat prisoners suffering from psychological illness, serious psychological frustration, or psychological pressure.
Prisons around the world usually test criminals psychologically using two kinds of scales: (1) general personality scales, such as the Eysenck Personality Questionnaire, the Minnesota Multiphasic Personality Inventory, and the Cattell Sixteen Personality Factor Questionnaire, used to understand the criminal's personality characteristics and aspects of the personality structure such as moral sense, legal sense, inhibitory power and regulatory power; (2) special-purpose scales for detecting the criminal psychological structure and predicting the possibility of reoffending, which each country develops independently according to the characteristics of its own social politics, economy and culture.
In the course of implementing the present disclosure, the inventors found that the following technical problems exist in the prior art:
traditional psychological state assessment does not treat prisoners as a special group, nor does it consider using electronic equipment to collect physiological signals and process multiple kinds of physiological signals so as to achieve fast and accurate assessment of prisoners' psychological health state.
Disclosure of Invention
In order to overcome the defects of the prior art, the present disclosure provides a method and system for assessing the psychological health state of prisoners based on multi-modal information. The method and system fuse physiological signals with other modalities such as facial expressions and voice, achieve more accurate psychological assessment through intelligent recognition by an artificial neural network, and effectively evaluate the reform effect while guiding the reform of prisoners.
In a first aspect, the present disclosure provides a criminal mental health state assessment method based on multi-modal information;
a criminal mental health status assessment method based on multimodal information, which is not used for the diagnosis of diseases; the method comprises the following steps:
acquiring physiological signals, facial expression images and voice signals of a well-reformed prisoner after the prisoner experiences a virtual reality scene, and extracting physiological signal features, facial expression image features and voice signal features from the acquired signals;
acquiring physiological signals, facial expression images and voice signals of a prisoner to be assessed after the prisoner experiences the virtual reality scene, and extracting physiological signal features, facial expression features and voice signal features from the acquired signals;
inputting the physiological signal features, facial expression image features and voice signal features of the reformed prisoner into a pre-trained neural network model, and outputting the psychological state evaluation vector of the reformed prisoner;
inputting the physiological signal features, facial expression features and voice signal features of the prisoner to be assessed into the pre-trained neural network model, and outputting the psychological state evaluation vector of the prisoner to be assessed;
calculating the distance between the psychological state evaluation vectors of the prisoner to be assessed and the reformed prisoner;
and evaluating the psychological health state of the prisoner to be assessed according to the distance.
In a second aspect, the present disclosure also provides a criminal mental health state assessment system based on multi-modal information;
criminal person mental health state evaluation system based on multi-mode information includes:
a reformed-prisoner data acquisition module: acquiring physiological signals, facial expression images and voice signals of a well-reformed prisoner after the prisoner experiences a virtual reality scene;
a reformed-prisoner data feature extraction module: extracting physiological signal features, facial expression image features and voice signal features from the acquired signals;
a to-be-assessed-prisoner data acquisition module: acquiring physiological signals, facial expression images and voice signals of a prisoner to be assessed after the prisoner experiences the virtual reality scene;
a to-be-assessed-prisoner data feature extraction module: extracting physiological signal features, facial expression features and voice signal features from the acquired signals;
a first psychological state evaluation vector output module: inputting the physiological signal features, facial expression image features and voice signal features of the reformed prisoner into a pre-trained neural network model, and outputting the psychological state evaluation vector of the reformed prisoner;
a second psychological state evaluation vector output module: inputting the physiological signal features, facial expression features and voice signal features of the prisoner to be assessed into the pre-trained neural network model, and outputting the psychological state evaluation vector of the prisoner to be assessed;
a psychological health state evaluation module: calculating the Euclidean distance between the psychological state evaluation vectors of the prisoner to be assessed and the reformed prisoner, and evaluating the psychological health state of the prisoner to be assessed according to the distance.
In a third aspect, the present disclosure also provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
In a fifth aspect, the present disclosure further provides a criminal psychological health status assessment system based on multi-modal information;
criminal person mental health state evaluation system based on multi-mode information includes:
a physiological parameter acquisition device, an image acquisition device, a voice acquisition device and the electronic device of the third aspect;
the physiological parameter acquisition device, the image acquisition device and the voice acquisition device transmit acquired data to the electronic equipment;
the electronic equipment evaluates the mental health state of the prisoner according to the collected data.
Compared with the prior art, the beneficial effect of this disclosure is:
(1) A quantitative psychological state evaluation mechanism based on multi-modal information effectively improves on the accuracy of psychological health assessment that relies solely on question-and-answer psychological scales, and avoids interference caused by the subject's emotional excitement, low cooperation, subjective rejection and the like. Electronic devices are used to collect physiological signals, and multiple kinds of physiological signals are processed, so that the psychological health state of prisoners can be assessed quickly and accurately.
Facial expression (patterns formed by facial muscle changes) and intonation (changes in the tone, rhythm and speed of speech) can reflect a person's subjective emotional experience, and changes in emotional state are accompanied by fluctuations in certain physiological characteristics. Well-reformed prisoners show similar psychological states when facing a specific virtual reality scene, and these states are used as a comparison standard: prisoners whose ideological reform is incomplete exhibit inappropriate emotional experiences when facing the same scene. By analyzing and screening these differences based on multi-modal information, a quantitative evaluation of a prisoner's ideological reform and degree of psychological health can be obtained.
(2) A personalized virtual reality emotion-arousal experience platform. Targeted virtual reality experience environments are developed for different types of prisoners, combining the program content with the prisoner's personal experience while comprehensively considering factors such as age and educational background, so that the prisoner obtains an empathetic, emotion-arousing experience. The fluctuation of emotional state caused by an appropriate emotion-arousing experience leads to immediate feedback in facial expression, physiological signals, voice intonation and other information; features of this information are extracted and intelligently analyzed by a machine learning algorithm, so that the psychological health state and degree of ideological reform of the subject can be quantified.
(3) Effective measurement of psychological state is the key difficulty in realizing psychological state assessment, and accurate measurement of emotion requires emotion measurement theory and measurement tools from psychology. The method adopts the PAD three-dimensional emotion model, proposed by Mehrabian and Russell in 1974, to quantify emotional state. This three-dimensional model can effectively explain human emotion; it is not limited to describing the subjective experience of emotion, but also maps well onto the external expression and physiological arousal of emotion.
(4) An intelligent prediction framework based on deep learning. In 1943 the psychologist W. McCulloch and the mathematician W. Pitts, working from the standpoint of mathematical logic, proposed the earliest mathematical models of neurons and neural networks. An artificial neural network has a self-learning capability: when different facial expressions, physiological information and voice signals, together with quantitative evaluations of the corresponding psychological states, are input into the network, it can learn to autonomously identify multi-source information and judge psychological state, which is very important for emotion prediction. Second, it can search for optimal solutions at high speed: finding the optimal solution of a complex problem usually requires a large amount of calculation, and an artificial neural network designed for a specific problem can exploit the high-speed computing power of a computer to find the optimal solution quickly. On the basis of these advantages, a neural network can accurately find specific distribution rules in complex information such as electroencephalogram, electrocardiogram, expression and voice signals, and establish specific feedback connections between these rules and the psychological emotions produced by the human body. The method connects a large number of simple processing units of an artificial neural network into an adaptive dynamic system, and analyzes biological signals by means of parallelism, distributed storage, self-organizing adaptive learning and other capabilities, so that psychological health assessment is carried out accurately and objectively.
(5) The psychological health of subjects is evaluated on the PAD emotion model using Euclidean distance as the basis of calculation. Euclidean distance is a common similarity measure and is intuitive when calculating the similarity of human psychological emotion: the smaller the Euclidean distance, the greater the similarity between two emotional states, and the larger the distance, the smaller the similarity. By calculating the Euclidean distance on the basic PAD scale between the subject and well-reformed prisoners who maintain healthy psychology, the subject's ideological reform and degree of psychological health are judged, upgrading the traditional, subjectively judged question-and-answer psychological evaluation into an objective quantitative evaluation standard based on multi-modal information.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a schematic diagram of a system configuration according to a first embodiment;
FIGS. 2(a) and 2(b) are schematic diagrams of the PAD three-dimensional emotion model of the first embodiment;
FIG. 3 is a block diagram showing the overall structure of the first embodiment;
FIG. 4 is a schematic diagram of multi-source physiological signal emotion recognition based on a multi-layer neural network in the first embodiment;
FIG. 5 is a schematic diagram of facial expression emotion recognition based on a convolutional neural network according to the first embodiment;
FIG. 6 is a block diagram of a first embodiment of a speech-based mental state recognition architecture;
FIG. 7 is a flowchart of spectrogram generation based on speech features according to the first embodiment;
fig. 8 is a schematic diagram of evaluation of mental health degree based on euclidean distance in the first embodiment.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
In the 1980s, scientists introduced psychological research into computer science in an attempt to turn subjective psychological states into something computable, i.e., to understand a person's emotional reactions through facial images and voice during human-computer interaction. Unlike facial expression recognition and speech emotion understanding, psychological computation based on physiological signals has unique advantages: it is authentic and objective, is difficult for the subject to control deliberately, and can objectively reflect a person's psychological condition, though it performs best on psychological states with a high degree of arousal.
Criminal psychological diagnosis and prediction are carried out through criminal psychological tests, generally administered when criminals enter prison, in the middle of their sentence, and before release upon completion of the sentence, in order to determine the criminal's personality defects, verify the correction effect, and predict the possibility of reoffending; on this basis a targeted reform scheme is made for the prisoner. Questionnaire-based psychological assessment involves strong subjective factors: the results can be interfered with by the external environment and the ideas of the person being tested, and situations may arise in which a prisoner is uncooperative or cannot communicate effectively, making it difficult to objectively and truly reflect the subject's psychological health and causing deviations in the evaluation of the reform effect.
The first embodiment provides a criminal psychological health state assessment method based on multi-mode information;
a criminal mental health status assessment method based on multimodal information, which is not used for the diagnosis of diseases; the method comprises the following steps:
acquiring physiological signals, facial expression images and voice signals of a well-reformed prisoner after the prisoner experiences a virtual reality scene, and extracting physiological signal features, facial expression image features and voice signal features from the acquired signals;
acquiring physiological signals, facial expression images and voice signals of a prisoner to be assessed after the prisoner experiences the virtual reality scene, and extracting physiological signal features, facial expression features and voice signal features from the acquired signals;
inputting the physiological signal features, facial expression image features and voice signal features of the reformed prisoner into a pre-trained neural network model, and outputting the psychological state evaluation vector of the reformed prisoner;
inputting the physiological signal features, facial expression features and voice signal features of the prisoner to be assessed into the pre-trained neural network model, and outputting the psychological state evaluation vector of the prisoner to be assessed;
calculating the Euclidean distance between the psychological state evaluation vectors of the prisoner to be assessed and the reformed prisoner;
if the Euclidean distance is smaller than a set threshold value, the psychological health state of the prisoner to be assessed is good; otherwise, the psychological health state of the prisoner to be assessed is poor.
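The distance comparison above can be sketched as follows. This is a minimal sketch: the 12-element vector values and the threshold are illustrative assumptions, since the patent does not fix a concrete threshold.

```python
import math

def euclidean_distance(v1, v2):
    """Euclidean distance between two equal-length evaluation vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def mental_health_is_good(vec_to_assess, vec_reformed, threshold=3.0):
    """Good psychological health state when the prisoner's vector lies
    within `threshold` of the reformed prisoner's reference vector."""
    return euclidean_distance(vec_to_assess, vec_reformed) < threshold

# 12-element psychological state evaluation vectors (illustrative values)
reformed = [1, 2, 0, 3, 2, 1, -1, 2, 0, 1, -2, 1]
subject = [1, 1, 0, 3, 2, 1, -1, 2, 0, 1, -2, 1]
print(mental_health_is_good(subject, reformed))  # → True (distance is 1.0)
```

A larger gap between the vectors (for example, opposite signs on most emotional states) pushes the distance over the threshold and yields a "poor" assessment.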
As an embodiment, the training step of the pre-trained neural network model includes:
constructing a neural network model;
acquiring physiological signals, facial expression images and voice signals of prisoners as training samples after the prisoners experience virtual reality scenes;
extracting physiological features, facial expression features and voice features from the acquired signals; labeling a psychological state evaluation vector for physiological characteristics, facial expression characteristics and voice characteristics of each prisoner in the training sample;
training the neural network model with the extracted physiological features, facial expression features and voice features and the labeled psychological state evaluation vectors, thereby obtaining the pre-trained neural network model.
The psychological state evaluation vector is a 12 × 1 column vector; each element is the quantized value of one emotional state, and the quantized values are integers in the range -4, -3, -2, -1, 0, 1, 2, 3, 4. The 12 elements correspond to 12 emotional states: anger, waking, controlled, friendly, calm, dominating, suffering, interest, humble, excited, armed and influential.
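As a data structure, this vector can be sketched as below; the state names follow the list above, and the sample quantized values are assumptions for illustration.

```python
# The 12 emotional states, in the order given in the description
EMOTIONAL_STATES = [
    "anger", "waking", "controlled", "friendly", "calm", "dominating",
    "suffering", "interest", "humble", "excited", "armed", "influential",
]

def is_valid_evaluation_vector(vec):
    """A valid vector has one integer entry per emotional state,
    each in the quantized range -4..4."""
    return (len(vec) == len(EMOTIONAL_STATES)
            and all(isinstance(v, int) and -4 <= v <= 4 for v in vec))

sample = [-2, 3, 1, 0, 2, -1, -3, 4, 0, 1, -2, 2]
print(is_valid_evaluation_vector(sample))  # → True
print(dict(zip(EMOTIONAL_STATES, sample))["anger"])  # → -2
```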
As an embodiment, labeling the psychological state assessment vector for the physiological features, facial expression features and voice features of each prisoner in the training sample is based on the PAD emotion recognition scale:
acquiring the three-dimensional emotion recognition scale of a subject after the subject experiences a given virtual reality scene, collected N times;
averaging the values of the three-dimensional emotion recognition scales collected N times for the same subject in the same virtual reality scene to obtain a psychological state evaluation vector, i.e., the psychological state evaluation vector of the current subject after experiencing that virtual reality scene;
for the same subject, switching to the next virtual reality scene and repeating the experience to obtain the psychological state evaluation vector for that scene, thereby obtaining psychological state evaluation vectors of the same subject under different virtual reality scenes;
then switching to the next subject and repeating the above steps to obtain psychological state evaluation vectors of different subjects under different virtual reality scene experiences;
finally, using the obtained psychological state evaluation vectors to label the physiological features, facial expression features and voice features extracted from the different subjects under the different virtual reality scene experiences.
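The N-fold averaging step above amounts to a plain element-wise mean; the readings shown are illustrative values, not data from the patent.

```python
def average_scale_readings(readings):
    """Average N repeated 12-element scale readings (one list per
    collection) into a single psychological state evaluation vector."""
    n = len(readings)
    return [sum(column) / n for column in zip(*readings)]

# two collections (N = 2) for one subject in one scene, values assumed
readings = [
    [1, 2, 3, 0, -1, 2, 1, 0, 2, 1, -2, 3],
    [3, 2, 1, 0, -1, 2, 1, 2, 2, 1, -2, 1],
]
label = average_scale_readings(readings)
print(label[:3])  # → [2.0, 2.0, 2.0]
```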
As an embodiment, the virtual reality scenes comprise: a scene reproducing a typical case formulated according to the psychological assessment scale, a scene analyzing the social harm of the crime, and a new scene depicting active reform and return to society.
The psychological assessment scale comprises: the six test scales of the Chinese criminal psychological assessment system, or the general personality scales used by prisons worldwide; the latter comprise one or more of the following: the Eysenck Personality Questionnaire, the Minnesota Multiphasic Personality Inventory, or the Cattell Sixteen Personality Factor Questionnaire.
As an example, as shown in fig. 3, the physiological signal acquisition mode includes:
collecting a blood volume pulse signal or heart rate signal through a photoelectric clip placed on the subject's thumb;
collecting electrocardiosignals through electrodes placed on the subject's wrists and ankles;
collecting skin conductance signals through a conductance sensor placed on the fingers;
collecting electromyographic signals through electrodes placed on the subject's forearm;
collecting respiratory signals through a sensor placed on the subject's thorax; or,
collecting electroencephalogram signals through electroencephalogram test electrodes.
It should be understood that the above signals are merely exemplary illustrations.
As an example, the physiological signal features refer to:
a blood volume pulsatility signal characteristic comprising: a mean value of the blood volume pulsation signal amplitude, a variance of the blood volume pulsation signal amplitude, a maximum value of the blood volume pulsation signal amplitude, a minimum value of the blood volume pulsation signal amplitude, or a median value of the blood volume pulsation signal amplitude;
heart rate signal characteristics including: the mean value of the heart rate signal amplitudes, the variance of the heart rate signal amplitudes, the maximum value of the heart rate signal amplitudes, the minimum value of the heart rate signal amplitudes or the median value of the heart rate signal amplitudes;
electrocardiosignal features: the 0-10 Hz range of the electrocardiogram spectrum is divided into 8 non-overlapping sub-bands, and the mean Fourier transform value of each sub-band is taken as a feature; meanwhile, the 8 sub-bands are combined into two bands, sub-bands 1-3 forming a low band and sub-bands 4-8 forming a high band, and the ratio of the mean Fourier transform values of the two bands is calculated as a feature;
a skin conductance signal signature, comprising: the mean value of the skin conductance signal amplitude, the variance of the skin conductance signal amplitude, the first-order difference mean value of the skin conductance signal amplitude, the root-mean-square of the skin conductance signal amplitude or the adjacent difference absolute value mean value of the skin conductance signal amplitude;
electromyographic signal features, comprising: electromyogram signal power spectral density;
respiratory signal characteristics, including: selecting average power spectral density in four frequency bands of 0-0.1Hz, 0.1-0.2Hz, 0.2-0.3Hz and 0.3-0.4Hz on the power spectrum of the respiratory signal;
electroencephalogram signal characteristics, including: brain electrical signal power spectral density, i.e., signal power within a unit frequency band.
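The electrocardiosignal features described above (8 sub-band spectral means over 0-10 Hz plus a low-band/high-band ratio) can be sketched as follows; the sampling rate and the synthetic test signal are assumptions, not values from the patent.

```python
import numpy as np

def ecg_spectral_features(signal, fs=100.0):
    """Divide the 0-10 Hz range of the signal spectrum into 8 equal
    non-overlapping sub-bands, take the mean spectral magnitude of
    each, and append the ratio of the low band (sub-bands 1-3) to
    the high band (sub-bands 4-8)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.linspace(0.0, 10.0, 9)  # 9 edges -> 8 sub-bands
    band_means = [spectrum[(freqs >= lo) & (freqs < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])]
    low = np.mean(band_means[:3])   # sub-bands 1-3
    high = np.mean(band_means[3:])  # sub-bands 4-8
    return band_means + [low / high]

# 10 s synthetic signal at 100 Hz with a 1.5 Hz component (illustrative)
t = np.arange(0, 10, 0.01)
features = ecg_spectral_features(np.sin(2 * np.pi * 1.5 * t))
print(len(features))  # → 9
```

The 1.5 Hz tone falls in the second sub-band (1.25-2.5 Hz), so that band's mean dominates the first eight features.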
As an embodiment, the manner of acquiring the facial expression features is:
collecting facial expression images of the prisoner through a camera after the prisoner experiences the virtual reality scene; applying image transformations to the facial expression images to expand the data set, and then performing feature extraction to obtain the texture features of the images.
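The patent does not name a specific texture descriptor; the local binary pattern (LBP) histogram is a common choice and is sketched below as an assumption, together with a horizontal flip as one example of an image transformation used to expand the data set.

```python
import numpy as np

def lbp_histogram(gray):
    """Texture feature: 8-neighbour local binary pattern codes of a
    grayscale image, summarised as a normalised 256-bin histogram."""
    c = gray[1:-1, 1:-1]
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        # set bit where the neighbour is at least as bright as the centre
        codes |= (nb >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
flipped = img[:, ::-1]  # horizontal flip expands the data set
features = lbp_histogram(img)
print(features.shape)  # → (256,)
```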
as an embodiment, the manner of obtaining the speech signal features is:
collecting the voice signal of the prisoner through a microphone after the prisoner experiences the virtual reality scene; dividing the voice signal into a plurality of frames, performing a fast Fourier transform on each frame to obtain frequency-domain features, and extracting pitch features or speech-rate features from the voice signal.
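The framing and FFT step can be sketched as follows; the frame length, hop and sampling rate are common speech-processing values assumed here, not values specified by the patent.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D speech signal into overlapping frames
    (25 ms frames with a 10 ms hop at 16 kHz)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop: i * hop + frame_len]
                     for i in range(n_frames)])

def frame_spectra(signal, frame_len=400, hop=160):
    """Per-frame magnitude spectrum: Hamming window, then FFT."""
    frames = frame_signal(signal, frame_len, hop) * np.hamming(frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

# 1 s of a 440 Hz tone at 16 kHz stands in for a recorded voice signal
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
spec = frame_spectra(sig)
print(spec.shape)  # → (98, 201)
```

With a 400-sample frame the frequency resolution is 40 Hz, so the 440 Hz tone peaks in bin 11 of every frame; pitch can be estimated from such per-frame peaks.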
As an example, the crime type of the prisoner to be assessed is the same as the crime type of the reformed prisoner.
As an embodiment, the neural network model is trained with the extracted physiological features, facial expression features and voice features and the labeled psychological state evaluation vectors; the specific steps for obtaining the pre-trained neural network model are as follows:
performing feature fusion on the physiological features, facial expression features and voice features; inputting the fused features into the neural network model; outputting a predicted quantized psychological state evaluation vector for the prisoner; calculating the difference between the predicted vector and the labeled psychological state evaluation vector; and stopping training when the difference is minimized, obtaining the trained prediction model.
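A minimal numpy sketch of this training step, under stated assumptions: early fusion by concatenation, a one-hidden-layer network, squared-error loss minimised by gradient descent, and random stand-in features and labels. The patent does not specify the architecture, layer sizes, or optimiser, so all of those are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(physio, face, voice):
    """Early fusion: concatenate the three modality feature vectors."""
    return np.concatenate([physio, face, voice])

# one-hidden-layer regression network (sizes are assumptions)
d_in, d_hidden, d_out = 30, 16, 12
W1 = rng.normal(0, 0.1, (d_in, d_hidden))
W2 = rng.normal(0, 0.1, (d_hidden, d_out))

def forward(x):
    h = np.tanh(x @ W1)
    return h @ W2, h

# stand-in fused features and a labelled 12-element evaluation vector
x = fuse(rng.normal(size=10), rng.normal(size=10), rng.normal(size=10))
y = rng.integers(-4, 5, size=12).astype(float)

lr = 0.02
losses = []
for _ in range(300):
    pred, h = forward(x)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagate the squared-error gradient through both layers
    gW2 = np.outer(h, err)
    gW1 = np.outer(x, (err @ W2.T) * (1 - h ** 2))
    W2 -= lr * gW2
    W1 -= lr * gW1

print(losses[-1] < losses[0])  # → True
```

Training stops here after a fixed number of steps; the patent's criterion of stopping "when the difference is minimized" would correspond to monitoring this loss until it plateaus.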
The second embodiment also provides a criminal psychological health state assessment system based on multi-mode information;
as shown in fig. 1, the system for assessing the mental health status of a criminal based on multi-modal information comprises:
a reformed-prisoner data acquisition module: acquiring physiological signals, facial expression images and voice signals of a well-reformed prisoner after the prisoner experiences a virtual reality scene;
a reformed-prisoner data feature extraction module: extracting physiological signal features, facial expression image features and voice signal features from the acquired signals;
a to-be-assessed-prisoner data acquisition module: acquiring physiological signals, facial expression images and voice signals of a prisoner to be assessed after the prisoner experiences the virtual reality scene;
a to-be-assessed-prisoner data feature extraction module: extracting physiological signal features, facial expression features and voice signal features from the acquired signals;
a first psychological state evaluation vector output module: inputting the physiological signal features, facial expression image features and voice signal features of the reformed prisoner into a pre-trained neural network model, and outputting the psychological state evaluation vector of the reformed prisoner;
a second psychological state evaluation vector output module: inputting the physiological signal features, facial expression features and voice signal features of the prisoner to be assessed into the pre-trained neural network model, and outputting the psychological state evaluation vector of the prisoner to be assessed;
a psychological health state evaluation module: calculating the Euclidean distance between the psychological state evaluation vectors of the prisoner to be assessed and the reformed prisoner, and evaluating the psychological health state of the prisoner to be assessed according to the distance.
As an embodiment, the system for assessing the mental health status of a criminal based on multi-modal information further comprises:
the crime type distinguishing module: identifying the crime type of a prisoner from the fingerprint information entered by the prisoner, using the known one-to-one correspondence between crime types and prisoners' fingerprint information. When a prisoner enters the prison, prison staff record the prisoner's crime type and associate it with the prisoner's fingerprint information.
And the virtual reality emotion stimulation module: calling the corresponding virtual reality scene from the database according to the prisoner's crime type and displaying it to the prisoner through a virtual reality head-mounted display.
As an embodiment, the system for assessing the mental health status of a criminal based on multi-modal information further comprises:
the training data acquisition module is used for acquiring physiological signals, facial expression images and voice signals of prisoners as training samples after the prisoners experience virtual reality scenes; extracting physiological features, facial expression features and voice features from the acquired signals;
the training data labeling module is used for labeling psychological state evaluation vectors for physiological characteristics, facial expression characteristics and voice characteristics of each prisoner in a training sample;
the machine learning emotion prediction model training module is used for training the neural network model by utilizing the extracted physiological characteristics, facial expression characteristics, voice characteristics and labeled psychological state evaluation vectors; and obtaining a pre-trained neural network model.
In a third embodiment, an electronic device is provided, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor; when executed by the processor, the computer instructions perform the steps of the method of the first embodiment.
In a fourth embodiment, a computer-readable storage medium is provided for storing computer instructions which, when executed by a processor, perform the steps of the method of the first embodiment.
In a fifth embodiment, a system for assessing the mental health state of prisoners based on multi-modal information is also provided;
criminal person mental health state evaluation system based on multi-mode information includes:
a physiological parameter acquisition device, an image acquisition device, a voice acquisition device and the electronic equipment of the third embodiment;
the physiological parameter acquisition device, the image acquisition device and the voice acquisition device transmit acquired data to the electronic equipment;
the electronic equipment evaluates the mental health state of the prisoner according to the collected data.
The physiological parameter acquisition device comprises one or more of the following devices: a photoelectric clamp, an electrode, or a conductivity sensor.
The image acquisition device comprises: a camera;
the voice acquisition device comprises: a microphone.
A virtual reality situational experience platform is built to stimulate the subject's emotions. On this basis, a multi-modal quantitative mental health assessment strategy combining facial expressions and various physiological signals is established, which can accurately and objectively reflect the reform effect on prisoners and provide a reference for further reform work.
Affective computing on human physiological signals is based on an emotion model, including emotion models from basic emotion theory, dimensional space theory and cognitive neuroscience. The present disclosure uses the PAD three-dimensional emotion model as the basis for psychological evaluation; as shown in fig. 2(a) and fig. 2(b), the three dimensions are:
the pleasure degree P represents the positive and negative characteristics of the emotional state of the individual;
arousal degree A, which represents the neurophysiologic activation level of the individual;
and the dominance degree D represents the control state of the individual on the situation and other people.
Each dimension is divided into four items with item scores ranging from-4 to 4.
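For illustration, a 12-item PAD rating (four hypothetical item scores per dimension) can be arranged as one assessment vector:

```python
import numpy as np

# Hypothetical PAD scale ratings: 4 items per dimension, each scored -4..4
pleasure = [3, 2, 4, 3]    # P: positive/negative character of the emotional state
arousal = [1, 0, 2, 1]     # A: neurophysiological activation level
dominance = [-1, 0, 1, 0]  # D: control over the situation and others

pad = np.array(pleasure + arousal + dominance)  # 12-dimensional assessment vector
assert pad.shape == (12,)
assert pad.min() >= -4 and pad.max() <= 4
```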
A neural network is a system with learning ability that, by developing knowledge, can exceed the original knowledge level of its designer. Supervised (teacher-guided) learning is the training mode used here. The subject's psychological state is quantitatively expressed through the PAD three-dimensional emotion model and used as the supervision label for the collected multi-source information; the designed neural network framework is trained on it to obtain an intelligent system that objectively and quantitatively evaluates mental health from multi-modal information such as facial expressions, various physiological signals and voice intonation.
The acquisition approach avoids interfering with the subject's viewing of the VR program as much as possible. A camera is placed in a hidden position to capture the experiencer's synchronized facial expressions. After each experience stage, the subject is asked to narrate the experience, and voice information is obtained. After each stage, a professional psychologist determines the subject's emotional state using the three-dimensional emotion recognition scale (PAD) and labels the collected physiological signals, facial expression information and voice information respectively to establish a complete sample. Meanwhile, the stage-wise PAD scales of all participants in the data set are averaged, and three twelve-dimensional vectors represent the standard emotional states of the stages, where each dimension takes a value between -4 and 4. The calculation formula is:
Standard(j) = (1/n) · Σ_{i=1}^{n} PAD_i(j),  j = 1, 2, 3
where i indexes the collected samples, n is the number of samples, and j is the experience stage: Standard(1), Standard(2) and Standard(3) represent the standard emotional experience obtained by subjects in the first, second and third stages, respectively.
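A sketch of this averaging step with NumPy; the number of samples and the ratings are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5  # number of collected samples (subjects)

# Hypothetical PAD vectors: n subjects x 3 experience stages x 12 items
pad = rng.integers(-4, 5, size=(n, 3, 12)).astype(float)

# Standard(j): mean over subjects for each stage j, one 12-dim vector per stage
standard = pad.mean(axis=0)
assert standard.shape == (3, 12)
assert np.allclose(standard[0], pad[:, 0, :].sum(axis=0) / n)
```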
As shown in fig. 3 and 4, a multi-layer neural network is used as the physiological-signal emotional state recognition model; based on a supervised learning strategy, a criterion and an update rule are designed so that the error between the output and the standard decreases step by step until it converges within a reasonable range.
The neural network training steps are:
(1) initialize the weights with suitable values;
(2) input the "input" part of the training data {input, correct output}, i.e. the physiological signal features, into the neural network and obtain the model output, i.e. the subject's emotional expression on the reduced PAD scale; compare the correct output, i.e. the psychological state label given by the professional psychologist for the subject, with the model output, and compute the error vector E and the output-node increment, i.e.

E = D - Y

δ = φ'(v)·E

(3) back-propagate the output-node increment and compute the increment of the next hidden-layer node, i.e.

E(k) = Wᵀ·δ

δ(k) = φ'(v(k))·E(k)

(4) repeat step (3) until the hidden layer adjacent to the input layer is reached;
(5) adjust the weights according to the learning rule, i.e.

Δω_ij = α·δ_i·x_j

ω_ij ← ω_ij + Δω_ij

(6) repeat steps (2) to (5) for all training data;
(7) repeat steps (2) to (6) until an ideal neural network model is trained.
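Assuming a single hidden layer and synthetic data, the steps above can be sketched in pure NumPy; the sigmoid derivative φ'(v) is computed as y·(1-y), and the network sizes, learning rate and data are illustrative:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 6))              # physiological feature vectors
D = rng.uniform(0.1, 0.9, size=(8, 3))   # "correct output" labels

W1 = rng.normal(scale=0.5, size=(10, 6)) # (1) suitable initial weights
W2 = rng.normal(scale=0.5, size=(3, 10))
alpha = 0.5                              # learning rate

for _ in range(5000):                    # (7) repeat until converged
    for x, d in zip(X, D):               # (6) loop over all training data
        y1 = sigmoid(W1 @ x)             # (2) forward pass through the network
        y = sigmoid(W2 @ y1)
        e = d - y                        #     error vector E = D - Y
        delta = y * (1 - y) * e          #     output-node increment phi'(v)*E
        e1 = W2.T @ delta                # (3) back-propagate to the hidden layer
        delta1 = y1 * (1 - y1) * e1      #     hidden-node increment
        W2 += alpha * np.outer(delta, y1)  # (5) weight update alpha*delta*x
        W1 += alpha * np.outer(delta1, x)

pred = sigmoid(W2 @ sigmoid(W1 @ X.T)).T
assert np.mean((pred - D) ** 2) < 0.01   # error converged to a small range
```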
Based on the subjects' facial expressions collected during the virtual reality experience, four transformations are applied to each collected picture (rotation, horizontal translation, vertical translation and horizontal flipping) to expand the data set. A single convolutional neural network (CNN) model is then adopted to classify and label the psychological feedback expressed by facial micro-expressions, the label being a 12-dimensional emotional expression vector. The CNN can discover features hidden in the picture, discriminates better than manually extracted features, and requires little preprocessing of the raw data.
As shown in fig. 5, the present disclosure builds a convolutional neural network of convolutional layers, pooling layers and a fully connected layer for a fixed-size grey-scale input image, with each pooling layer following its corresponding convolutional layer. A neuron in a convolutional layer is connected to only part of the neurons in the previous layer, so each neuron perceives local visual features; local information is then integrated at higher layers to finally obtain a description of the whole image. A weight-sharing strategy is adopted for feature extraction over the whole picture, so that the weights on the edges connecting a neuron to its patch of previous-layer neurons are the same as those of the other neurons in the current layer, effectively reducing the number of trainable parameters. A down-sampling strategy compresses the pixels within a region into one pixel, reducing the feature dimensionality and enhancing generalization. After all convolutional layers, a fully connected layer with 256 input neurons and 12 output neurons is attached; the activation function is ReLU, and the CNN parameters are trained with stochastic gradient descent.
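A pure-NumPy sketch of the ideas above: the four data-set transformations, a shared convolution kernel with ReLU, 2x2 down-sampling, and a fully connected map to 12 outputs. The image size, kernel and layer widths are illustrative stand-ins for the disclosed architecture (whose fully connected layer maps 256 inputs to 12 outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((48, 48))  # fixed-size grey-scale input (size is illustrative)

# Data-set expansion: the four transformations applied to each picture
augmented = [
    np.rot90(img),            # rotation
    np.roll(img, 4, axis=1),  # horizontal translation
    np.roll(img, 4, axis=0),  # vertical translation
    np.fliplr(img),           # horizontal flip
]

# One shared 3x3 kernel: weight sharing means every output position
# reuses the same weights over its local receptive field
kernel = rng.normal(size=(3, 3))
conv = np.zeros((46, 46))
for i in range(46):
    for j in range(46):
        conv[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
conv = np.maximum(conv, 0.0)  # ReLU activation

# Down-sampling: compress each 2x2 region into one value (max pooling)
pooled = conv.reshape(23, 2, 23, 2).max(axis=(1, 3))

# Fully connected layer: flatten and map to the 12-dimensional output
flat = pooled.ravel()
W = rng.normal(scale=0.01, size=(12, flat.size))
out = W @ flat
assert out.shape == (12,)
```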
As shown in fig. 6, the present disclosure utilizes the collected language expression of the prisoner after the virtual reality situation experience and the corresponding mental state quantitative labeling to establish a speech library, and on this basis, trains an intelligent mental state measurement structure with a multilayer perceptron model as a framework.
As shown in fig. 7, each speech segment is divided into frames, and each frame undergoes a fast Fourier transform to obtain its frequency-domain representation. The spectrum of each frame is plotted, rotated by 90 degrees, and the amplitudes are mapped to grey levels (continuous amplitudes quantized into 256 levels, with darker colour meaning larger amplitude), yielding a spectrogram that describes the speech signal and contains both static and dynamic information. After the speech feature parameters are extracted, the multilayer perceptron applies nonlinear operations to the input pattern through its many connection weights, and the output node with the maximum excitation represents the psychological state assessment corresponding to the input pattern. In use, the network's connection weights are continuously and adaptively corrected according to the correctness of the recognition result.
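The framing, FFT and 256-level grey quantization can be sketched as follows, using a synthetic two-tone signal as a stand-in for recorded speech:

```python
import numpy as np

fs = 8000  # sampling rate (Hz); the signal here is a synthetic stand-in
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Divide the segment into frames and FFT each frame for its spectrum
frame_len, hop = 256, 128
frames = [speech[i:i + frame_len]
          for i in range(0, len(speech) - frame_len, hop)]
window = np.hanning(frame_len)
spec = np.abs(np.array([np.fft.rfft(f * window) for f in frames])).T

# Quantize the continuous amplitudes into 256 grey levels
# (larger value = darker colour = larger amplitude)
grey = np.round(255 * spec / spec.max()).astype(np.uint8)

assert grey.shape[0] == frame_len // 2 + 1  # frequency axis of the spectrogram
assert grey.max() == 255
```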
Through the effective emotion-stimulation stages of the virtual reality experience platform, the subject's psychological state expressed through multi-source physiological signals, facial information, voice narration and the like can be obtained. This information is fed into the trained machine learning models to finally obtain an objective quantitative psychological assessment vector, computed as:
Out(j) = α1·Physio(j) + α2·Face(j) + α3·Voice(j)
where j indexes the VR experience stages: j = 1 represents the related-case reproduction stage, j = 2 the social-harm stage, and j = 3 the encouragement stage. α is a background factor set according to the prisoner's age, crime type, educational background, sex and other information: α1 is the weight of the psychological state assessment vector based on multi-source physiological information, α2 the weight of the vector based on facial expression, and α3 the weight of the vector based on language expression, with α1 + α2 + α3 = 1. Through this background-factor strategy, the multi-modal machine-learning psychological assessment model is combined with the obvious differences between individual prisoners, achieving both objective accuracy and specificity.
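A numeric sketch of this weighted fusion; the per-modality vectors and the background-factor weights below are hypothetical:

```python
import numpy as np

# Hypothetical per-modality 12-dim assessment vectors for one experience stage j
physio_vec = np.full(12, 2.0)  # from multi-source physiological information
face_vec = np.full(12, 1.0)    # from facial expression
voice_vec = np.full(12, 3.0)   # from language expression

# Background factors set from age, crime type, education, sex; must sum to 1
a1, a2, a3 = 0.5, 0.3, 0.2
assert abs(a1 + a2 + a3 - 1.0) < 1e-12

out = a1 * physio_vec + a2 * face_vec + a3 * voice_vec
assert np.allclose(out, 1.9)  # 0.5*2 + 0.3*1 + 0.2*3
```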
As shown in fig. 8, the mental health degree is quantitatively evaluated based on the Euclidean distance, with the following formula:
Distance_i(j) = ||Out_i(j) - Health(j)|| = sqrt( Σ_{k=1}^{12} ( Out_i(j)_k - Health(j)_k )² ),  j = 1, 2, 3
where j denotes the experience stage, i denotes the subject, and Health(j) denotes the healthy psychological standard vector of reformed prisoners after the j-th virtual reality situational experience stage.
Out(j) denotes the psychological state assessment vector determined for the subject by intelligent assessment based on multi-modal information such as physiological signals, expressions and language expression after the virtual reality situational experience stage. The Euclidean distance between the assessed value and the standard value gives the degree of offset, a quantized expression of the mental health indication. A large offset in experience stage 1 indicates that the prisoner lacks objective and rational cognition of the crime type; a large offset in stage 2 indicates a low degree of repentance; a large offset in stage 3 indicates that the prisoner lacks a positive psychological outlook. The present disclosure can thus effectively assist in predicting the possibility of crime.
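The offset computation can be sketched as follows, with both 12-dimensional vectors chosen hypothetically:

```python
import numpy as np

# Hypothetical healthy-standard vector Health(j) and a subject's
# assessed vector Out(j) for one experience stage
health = np.array([3, 2, 4, 3, 1, 0, 2, 1, -1, 0, 1, 0], dtype=float)
out = np.array([1, 2, 2, 3, 0, 0, 2, 0, -2, 0, 0, 0], dtype=float)

# Offset degree: Euclidean distance between assessed and standard vectors
offset = np.sqrt(np.sum((out - health) ** 2))
assert np.isclose(offset, np.linalg.norm(out - health))
assert np.isclose(offset, np.sqrt(12.0))  # sum of squared differences is 12
```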
Description of the terms:
1. Typical case reproduction refers to: real cases that are typical and representative and cause great social harm, in economic crimes as well as in violent crimes and duty crimes. The case scene is faithfully reproduced through virtual reality technology so that the subject obtains a vivid and specific emotional experience.
2. Social hazard analysis means: the damage of related cases to citizens' property and personal safety, to social order, to economic order, and so on. Through virtual reality technology, the different types of social harm are presented through visual scenes with synchronized voice commentary, so that the subject obtains a vivid and specific emotional experience.
3. Active reform and new life means: the reform process includes organizing prisoners to engage in productive labor and conducting ideological, cultural and technical education, so as to transform and educate them, with the final aim that prisoners deeply recognize their own crimes and return to society without re-offending. Through virtual reality and video technology, the reform processes and personal statements of different types of prisoners who reformed well and successfully returned to society are faithfully presented with combined audio and video, so that the viewer obtains a vivid and specific emotional experience.
4. Crime types include: violent crimes, duty crimes and economic crimes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A system for assessing the mental health state of prisoners based on multi-modal information, characterized in that the system determines psychological state assessment vectors by intelligent assessment based on multi-modal information expressed through physiological signals, expressions and language, calculates the Euclidean distance between the assessed value and a standard value, quantitatively assesses the mental health state of the prisoner to be tested according to the distance, and assists in predicting the possibility of crime; the system comprises:
a crime type distinguishing module: identifying the crime type of a prisoner from the fingerprint information entered by the prisoner, using the known one-to-one correspondence between crime types and prisoners' fingerprint information; when a prisoner enters the prison, prison staff record the prisoner's crime type and associate it with the prisoner's fingerprint information;
virtual reality emotion arouses module: calling a corresponding virtual reality scene from a database according to the crime type of the prisoner, and displaying the scene to the prisoner through a virtual reality helmet display for watching;
a training data acquisition module: acquiring physiological signals, facial expression images and voice signals of prisoners as training samples after experiencing in a virtual reality scene; extracting physiological features, facial expression features and voice features from the acquired signals;
a training data labeling module: labeling a psychological state evaluation vector for physiological characteristics, facial expression characteristics and voice characteristics of each prisoner in the training sample;
the machine learning emotion prediction model training module: training a neural network model by extracting physiological features, facial expression features, voice features and labeled psychological state evaluation vectors; obtaining a pre-trained neural network model;
a reformed prisoner data acquisition module: acquiring physiological signals, facial expression images and voice signals of reformed prisoners after the virtual reality scene experience;
a reformed prisoner feature extraction module: extracting physiological features, facial expression features and voice features from the acquired signals;
a prisoner-to-be-tested data acquisition module: acquiring physiological signals, facial expression images and voice signals of prisoners to be tested after the virtual reality scene experience;
a prisoner-to-be-tested feature extraction module: extracting physiological features, facial expression features and voice features from the acquired signals;
a first psychological state assessment vector output module: inputting the physiological features, facial expression features and voice features of the reformed prisoners into the pre-trained neural network model and outputting the psychological state assessment vectors of the reformed prisoners;
a second psychological state assessment vector output module: inputting the physiological features, facial expression features and voice features of a prisoner to be tested into the pre-trained neural network model and outputting the psychological state assessment vector of the prisoner to be tested;
a mental health state assessment module: calculating the Euclidean distance between the psychological state assessment vectors of the prisoner to be tested and the reformed prisoners, and assessing the mental health state of the prisoner to be tested according to the distance.
2. The system for assessing the mental health state of prisoners based on multi-modal information as claimed in claim 1, wherein the psychological state assessment vector is a 12-row by 1-column vector; each row contains an element value that is the quantized value of one emotional state, the quantized value being an integer in the range -4, -3, -2, -1, 0, 1, 2, 3, 4; the 12 rows correspond to 12 emotional states: anger, waking, controlled, friendly, calm, dominating, suffering, interest, humble, excited, armed and influential.
3. The system for assessing the mental health of criminals based on multi-modal information as claimed in claim 1, wherein the labeling of mental state assessment vectors for physiological features, facial expression features and voice features of each criminal in training samples is based on PAD emotion recognition scale:
acquiring a three-dimensional emotion recognition scale of a subject after the same subject experiences in the same virtual reality scene; collecting for N times;
averaging the values of the N three-dimensional emotion recognition scales collected for the same subject in the same virtual reality scene to obtain the psychological state assessment vector of the current subject after experiencing that virtual reality scene;
for the same subject, replacing the next virtual reality scene for experience, and obtaining the mental state evaluation vector of the next virtual reality scene; obtaining psychological state evaluation vectors of the same subject under different virtual reality scenes;
then, replacing the next subject, and repeating the steps in the same way to obtain psychological state evaluation vectors of different subjects under different virtual reality scene experiences;
and then, labeling the physiological features, the facial expression features and the voice features extracted by different subjects under different virtual reality scene experiences by using the obtained mental state evaluation vectors of the different subjects under different virtual reality scene experiences.
4. The system for assessing the mental health of a criminal based on multi-modal information as claimed in claim 1, wherein the physiological signals are obtained by a method comprising:
collecting a blood volume pulsation signal or a heart rate signal through a photoelectric clamp arranged on a thumb of a subject;
electrocardiosignals collected by electrodes arranged on the wrist and ankle of a subject;
skin conductance signals collected by a conductance sensor disposed on the finger;
electromyogram of the subject collected by an electrode provided on the anterior arm;
respiratory signals acquired by a sensor arranged at the subject's thorax; or,
the electroencephalogram signals are collected through the electroencephalogram test electrodes.
5. The system for assessing the mental health status of a criminal based on multi-modal information as claimed in claim 1, wherein said physiological characteristics are:
a blood volume pulsatility signal characteristic comprising: a mean value of the blood volume pulsation signal amplitude, a variance of the blood volume pulsation signal amplitude, a maximum value of the blood volume pulsation signal amplitude, a minimum value of the blood volume pulsation signal amplitude, or a median value of the blood volume pulsation signal amplitude;
heart rate signal characteristics including: the mean value of the heart rate signal amplitudes, the variance of the heart rate signal amplitudes, the maximum value of the heart rate signal amplitudes, the minimum value of the heart rate signal amplitudes or the median value of the heart rate signal amplitudes;
electrocardiograph signal features: the 0-10 Hz range of the electrocardiogram signal spectrum is divided into 8 non-overlapping sub-bands, and the mean Fourier transform value of each sub-band is taken as a feature; the 8 sub-bands are also combined into two bands, sub-bands 1-3 forming a low band and sub-bands 4-8 a high band, and the ratio of the mean Fourier transform values of the two bands is computed as a feature;
a skin conductance signal signature, comprising: the mean value of the skin conductance signal amplitude, the variance of the skin conductance signal amplitude, the first-order difference mean value of the skin conductance signal amplitude, the root-mean-square of the skin conductance signal amplitude or the adjacent difference absolute value mean value of the skin conductance signal amplitude;
electromyographic signal features, comprising: electromyogram signal power spectral density;
respiratory signal characteristics, including: selecting average power spectral density in four frequency bands of 0-0.1Hz, 0.1-0.2Hz, 0.2-0.3Hz and 0.3-0.4Hz on the power spectrum of the respiratory signal;
electroencephalogram signal characteristics, including: brain electricity signal power spectral density, i.e. signal power within a unit frequency band;
alternatively,
the acquisition mode of the facial expression features is as follows:
acquiring facial expression images of prisoners after experience in a virtual reality scene through a camera; carrying out image transformation on the facial expression image to expand a data set, and then carrying out feature extraction to obtain the texture features of the image;
alternatively,
the voice feature acquisition mode is as follows:
collecting voice signals of prisoners after the virtual reality scene experience through a microphone; dividing the voice signal into frames, performing a fast Fourier transform on each frame to obtain frequency-domain features, and performing feature extraction on the voice signal to extract pitch features or speech-rate features;
alternatively,
training a neural network model by using the extracted physiological characteristics, facial expression characteristics, voice characteristics and labeled psychological state evaluation vectors; the specific steps for obtaining the pre-trained neural network model are as follows:
and performing feature fusion on the physiological features, the facial expression features and the voice features, inputting the fused features into a neural model, outputting a predicted value of a quantized psychological state assessment vector of a prisoner, calculating a difference value between the predicted value of the quantized psychological state assessment vector of the prisoner and the labeled psychological state assessment vector of the prisoner, and stopping training when the difference value is minimum to obtain a trained prediction model.
6. An electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, implement the system of any one of claims 1 to 5.
7. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the system of any one of claims 1 to 5.
8. A system for assessing the mental health state of prisoners based on multi-modal information, characterized by comprising:
a physiological parameter acquisition device, an image acquisition device, a voice acquisition device, and the electronic apparatus of claim 6;
the physiological parameter acquisition device, the image acquisition device and the voice acquisition device transmit acquired data to the electronic equipment;
the electronic equipment evaluates the mental health state of the prisoner according to the collected data.
CN201910784156.6A 2019-08-23 2019-08-23 Multi-mode information based criminal psychological health state assessment method and system Active CN110507335B (en)

Publications (2)

Publication Number Publication Date
CN110507335A CN110507335A (en) 2019-11-29
CN110507335B true CN110507335B (en) 2021-01-01


Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028919A (en) * 2019-12-03 2020-04-17 北方工业大学 Phobia self-diagnosis and treatment system based on artificial intelligence algorithm
CN111125525B (en) * 2019-12-24 2023-09-15 山东大学 Personalized transformation correction strategy recommendation system for prisoner and operation method thereof
CN111145851B (en) * 2019-12-27 2023-07-07 山东华尚电气有限公司 Mental state monitoring and evaluating system based on intelligent bracelet
CN111222464B (en) * 2020-01-07 2023-11-07 中国医学科学院生物医学工程研究所 Emotion analysis method and system
CN113140312A (en) * 2020-01-19 2021-07-20 Oppo广东移动通信有限公司 User data processing method and device, session data processing method and device, and electronic equipment
CN111507592B (en) * 2020-04-08 2022-03-15 山东大学 Evaluation method for active modification behaviors of prisoners
CN111449684B (en) * 2020-04-09 2023-05-05 济南康硕生物技术有限公司 Method and system for rapidly acquiring standard scanning section of heart ultrasound
CN111513732A (en) * 2020-04-29 2020-08-11 山东大学 Intelligent psychological stress assessment early warning system for various groups of people under epidemic disease condition
CN111723869A (en) * 2020-06-22 2020-09-29 山东大学 Special personnel-oriented intelligent behavior risk early warning method and system
CN111967355B * 2020-07-31 2023-09-01 华南理工大学 Prisoner jail-breaking intention assessment method based on body language
CN112185493A (en) * 2020-08-26 2021-01-05 山东大学 Personality preference diagnosis device and project recommendation system based on same
CN112185558A (en) * 2020-09-22 2021-01-05 珠海中科先进技术研究院有限公司 Mental health and rehabilitation evaluation method, device and medium based on deep learning
CN112190264A (en) * 2020-10-09 2021-01-08 安徽美心信息科技有限公司 Intelligent psychological body and mind feedback analysis system
CN112155577B (en) * 2020-10-15 2023-05-05 深圳大学 Social pressure detection method and device, computer equipment and storage medium
CN112842337A (en) * 2020-11-11 2021-05-28 郑州大学第一附属医院 Emotion dispersion system and method for mobile ward-round scene
CN112370058A (en) * 2020-11-11 2021-02-19 西北工业大学 Method for identifying and monitoring emotion of user based on mobile terminal
CN112287873A (en) * 2020-11-12 2021-01-29 广东恒电信息科技股份有限公司 Judicial service early warning system
CN112450932B (en) * 2020-12-14 2022-10-21 深圳市艾利特医疗科技有限公司 Psychological disorder detection system and method
CN112597967A (en) * 2021-01-05 2021-04-02 沈阳工业大学 Emotion recognition method and device for immersive virtual environment and multi-modal physiological signals
CN113035232B (en) * 2021-03-23 2022-08-30 北京智能工场科技有限公司 Psychological state prediction system, method and device based on voice recognition
CN112735585B (en) * 2021-04-02 2021-08-03 刘思佳 Arthritis rehabilitation diagnosis and treatment method and system based on neural network and machine learning
CN113284618B (en) * 2021-04-14 2022-07-22 北京育学园健康管理中心有限公司 Infant health assessment method
CN113255635B (en) * 2021-07-19 2021-10-15 中国科学院自动化研究所 Multi-mode fused psychological stress analysis method
CN114091844B (en) * 2021-11-01 2023-06-02 山东心法科技有限公司 Early warning method, device and storage medium for re-crime of violent personnel
CN114550860B (en) * 2022-01-28 2023-02-03 中国人民解放军总医院第一医学中心 Hospitalizing satisfaction evaluation method based on process data and intelligent network model
CN115601819B (en) * 2022-11-29 2023-04-07 四川大学华西医院 Multimode violence tendency recognition method, device, equipment and medium
CN116548971B (en) * 2023-05-17 2023-10-13 郑州师范学院 Psychological crisis auxiliary monitoring system based on physiological parameters of object
CN117352002A (en) * 2023-10-08 2024-01-05 广州点子信息科技有限公司 Remote intelligent voice analysis supervision method

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5601090A (en) * 1994-07-12 1997-02-11 Brain Functions Laboratory, Inc. Method and apparatus for automatically determining somatic state
KR20080009458A (en) * 2006-07-24 2008-01-29 중앙대학교 산학협력단 System for recognizing emotion using neural network
CN102564424A (en) * 2011-12-29 2012-07-11 上海电机学院 Multiple sensor-based data fusion method
CN105224961A (en) * 2015-11-04 2016-01-06 中国电子科技集团公司第四十一研究所 High-resolution diffuse reflectance infrared spectrum extraction and matching method
CN105683724A (en) * 2013-09-19 2016-06-15 欧莱雅公司 Systems and methods for measuring and categorizing colors and spectra of surfaces
CN106446550A (en) * 2016-09-28 2017-02-22 湖南老码信息科技有限责任公司 Cold prediction method and system based on incremental neural network model
CN106861012A (en) * 2017-02-22 2017-06-20 南京邮电大学 User emotion adjusting method based on Intelligent bracelet under VR experience scenes
CN107007291A (en) * 2017-04-05 2017-08-04 天津大学 Intense strain intensity identifying system and information processing method based on multi-physiological-parameter
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN107242876A (en) * 2017-04-20 2017-10-13 合肥工业大学 Computer vision method for auxiliary diagnosis of mental state
CN107437090A (en) * 2016-05-28 2017-12-05 郭帅杰 Three-mode continuous emotion prediction method based on voice, facial expression, and electrocardiographic signals
CN108806722A (en) * 2017-04-21 2018-11-13 艾于德埃林公司 Method and automated inference system for automatic affective state inference
CN108888281A (en) * 2018-08-16 2018-11-27 华南理工大学 State of mind appraisal procedure, equipment and system
CN109124655A (en) * 2018-07-04 2019-01-04 中国电子科技集团公司电子科学研究院 State of mind analysis method, device, equipment, computer media and multifunctional chair
CN109157231A (en) * 2018-10-24 2019-01-08 阿呆科技(北京)有限公司 Portable multi-channel Depression trend assessment system based on emotional distress task
CN109998570A (en) * 2019-03-11 2019-07-12 山东大学 Inmate's psychological condition appraisal procedure, terminal, equipment and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9408570B2 (en) * 2013-05-03 2016-08-09 The Charles Stark Draper Laboratory, Inc. Physiological feature extraction and fusion to assist in the diagnosis of post-traumatic stress disorder

Also Published As

Publication number Publication date
CN110507335A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110507335B (en) Multi-mode information based criminal psychological health state assessment method and system
Bota et al. A review, current challenges, and future possibilities on emotion recognition using machine learning and physiological signals
CN112120716A (en) Wearable multi-mode emotional state monitoring device
RU2708807C2 (en) Algorithm of integrated remote contactless multichannel analysis of psychoemotional and physiological state of object based on audio and video content
CN105393252A (en) Physiologic data acquisition and analysis
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN109805944B (en) Children's ability analytic system that shares feelings
CN113197579A (en) Intelligent psychological assessment method and system based on multi-mode information fusion
Gavrilescu et al. Predicting the Sixteen Personality Factors (16PF) of an individual by analyzing facial features
Wang et al. Cross-subject EEG emotion classification based on few-label adversarial domain adaption
Yildirim A review of deep learning approaches to EEG-based classification of cybersickness in virtual reality
CN113974627B (en) Emotion recognition method based on brain-computer generated confrontation
Dar et al. YAAD: young adult’s affective data using wearable ECG and GSR sensors
Yang et al. More to less (M2L): Enhanced health recognition in the wild with reduced modality of wearable sensors
Li et al. Multi-modal emotion recognition based on deep learning of EEG and audio signals
Parmar et al. A novel and efficient Wavelet Scattering Transform approach for primitive-stage dyslexia-detection using electroencephalogram signals
Bakkialakshmi et al. AMIGOS: a robust emotion detection framework through Gaussian ResiNet
Dia et al. A novel stochastic transformer-based approach for post-traumatic stress disorder detection using audio recording of clinical interviews
Mijić et al. Classification of cognitive load using voice features: A preliminary investigation
Buscema et al. The implicit function as squashing time model: a novel parallel nonlinear EEG analysis technique distinguishing mild cognitive impairment and Alzheimer's disease subjects with high degree of accuracy
Andreas et al. CNN-Based Emotional Stress Classification using Smart Learning Dataset
Bakkialakshmi et al. Effective Prediction System for Affective Computing on Emotional Psychology with Artificial Neural Network
US20240050006A1 (en) System and method for prediction and control of attention deficit hyperactivity (adhd) disorders
Avramidis Affective analysis and interpretation of brain responses to music stimuli
Ramachandar Human centric cognitive functioning using BCI technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant