CN110507335A - Method and system for assessing inmates' psychological health state based on multi-modal information - Google Patents
- Publication number
- CN110507335A (application number CN201910784156A, filed as CN201910784156.6A)
- Authority
- CN
- China
- Prior art keywords
- inmate
- signal
- feature
- facial expression
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
Abstract
The present disclosure provides a method and system for assessing inmates' psychological health state based on multi-modal information. Physiological signals, facial expression images and voice signals are acquired from a successfully reformed inmate and from an inmate under test after each has experienced a virtual reality scenario, and physiological signal features, facial expression image features and voice signal features are extracted from the acquired signals. The features of the reformed inmate are input into a pre-trained neural network model, which outputs the reformed inmate's psychological state assessment vector; the features of the inmate under test are input into the same model, which outputs that inmate's psychological state assessment vector. The distance between the psychological state assessment vectors of the inmate under test and the reformed inmate is then calculated, and the psychological health state of the inmate under test is assessed according to this distance.
Description
Technical field
This disclosure relates to a method and system for assessing inmates' psychological health state based on multi-modal information.
Background
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
Mental health work for convicts refers to the measures and methods used to keep criminals mentally healthy while serving their sentences and to reduce or prevent the occurrence of psychological disorders. The main measures are: (1) optimizing the environment in which sentences are served; (2) helping and guiding criminals to correctly apply psychological self-regulation mechanisms and to adjust themselves in time, so as to avoid psychological imbalance; and (3) establishing counseling and treatment mechanisms that provide timely psychological counseling and treatment to criminals who develop psychological disorders or who encounter serious psychological setbacks and pressure.
Prisons around the world usually administer two kinds of psychological tests to criminals: (1) general personality inventories, such as the Eysenck Personality Questionnaire, the Minnesota Multiphasic Personality Inventory and the Cattell Sixteen Personality Factor Questionnaire, through which the personality characteristics of criminals and aspects of their personality formation, such as moral sense, legal awareness, self-restraint and adjustment capacity, can be understood; and (2) scales dedicated to detecting the state of criminal psychology and predicting the likelihood of reoffending, developed independently by each country according to its own social, political, economic and cultural characteristics.
In implementing the present disclosure, the inventors found the following technical problems in the prior art:
Existing psychological state assessments do not take inmates into account as a specific group. Moreover, they do not use electronic equipment to acquire multiple physiological signals and process them in order to achieve rapid and accurate assessment of inmates' psychological health state. The prior art depends on counselors and is therefore overly subjective.
Summary of the invention
In order to overcome the deficiencies of the prior art, the present disclosure provides a method and system for assessing inmates' psychological health state based on multi-modal information. The disclosure fuses physiological signals with other modalities such as facial expression and voice, and achieves more accurate psychological state assessment through intelligent recognition by an artificial neural network, thereby effectively evaluating the reform effect while guiding the reform of inmates serving sentences.
In a first aspect, the present disclosure provides a method for assessing inmates' psychological health state based on multi-modal information. The method is not used for the diagnosis of disease. The method comprises:
acquiring the physiological signals, facial expression images and voice signals of a successfully reformed inmate after a virtual reality scenario experience, and extracting physiological signal features, facial expression image features and voice signal features from the acquired signals;
acquiring the physiological signals, facial expression images and voice signals of an inmate under test after a virtual reality scenario experience, and extracting physiological signal features, facial expression features and voice signal features from the acquired signals;
inputting the physiological signal features, facial expression image features and voice signal features of the reformed inmate into a pre-trained neural network model, which outputs the psychological state assessment vector of the reformed inmate;
inputting the physiological signal features, facial expression features and voice signal features of the inmate under test into the pre-trained neural network model, which outputs the psychological state assessment vector of the inmate under test;
calculating the distance between the psychological state assessment vectors of the inmate under test and the reformed inmate; and
assessing the psychological health state of the inmate under test according to the distance.
In a second aspect, the present disclosure further provides a system for assessing inmates' psychological health state based on multi-modal information, comprising:
a reformed-inmate data acquisition module, which acquires the physiological signals, facial expression images and voice signals of a successfully reformed inmate after a virtual reality scenario experience;
a reformed-inmate feature extraction module, which extracts physiological signal features, facial expression image features and voice signal features from the acquired signals;
an inmate-under-test data acquisition module, which acquires the physiological signals, facial expression images and voice signals of an inmate under test after a virtual reality scenario experience;
an inmate-under-test feature extraction module, which extracts physiological signal features, facial expression features and voice signal features from the acquired signals;
a first psychological state assessment vector output module, which inputs the physiological signal features, facial expression image features and voice signal features of the reformed inmate into a pre-trained neural network model and outputs the psychological state assessment vector of the reformed inmate;
a second psychological state assessment vector output module, which inputs the physiological signal features, facial expression features and voice signal features of the inmate under test into the pre-trained neural network model and outputs the psychological state assessment vector of the inmate under test; and
a psychological health state evaluation module, which calculates the Euclidean distance between the psychological state assessment vectors of the inmate under test and the reformed inmate, and assesses the psychological health state of the inmate under test according to the distance.
In a third aspect, the present disclosure further provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the method of the first aspect are completed.
In a fourth aspect, the present disclosure further provides a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the method of the first aspect are completed.
In a fifth aspect, the present disclosure further provides a system for assessing inmates' psychological health state based on multi-modal information, comprising: a physiological parameter acquisition device, an image acquisition device, a voice acquisition device, and the electronic device of the third aspect. The physiological parameter acquisition device, image acquisition device and voice acquisition device transmit the acquired data to the electronic device, and the electronic device assesses the inmate's psychological health state according to the collected data.
Compared with the prior art, the beneficial effects of the disclosure are:
(1) A quantitative psychological state evaluation mechanism based on multi-modal information, which effectively improves the accuracy of mental health evaluation that relies solely on scales administered through question-and-answer sessions, and avoids interference caused by subjects being agitated, poorly cooperative, or subjectively resistant. Electronic devices are used to acquire multiple physiological signals, which are processed to achieve rapid and accurate assessment of inmates' psychological health state.
Facial expressions (patterns composed of facial muscle changes) and vocal expression (variations in tone, rhythm, speech rate and so on) can reflect a person's subjective emotional experience, while changes in mood and emotional state are accompanied by fluctuations of certain physiological characteristics. The disclosure makes full use of this information, extracting features with a fusion strategy for intelligent emotion recognition. Successfully reformed inmates exhibit similar psychological states when facing a specific virtual reality scenario, and serve as the reference standard, whereas inmates who have not completed their reform well will have inappropriate emotional experiences when facing the same scenario. By analyzing and screening these differences on the basis of multi-modal information, a quantified evaluation of an inmate's reform progress and degree of mental health can be obtained.
(2) A personalized virtual reality emotion-elicitation experience platform. The disclosure develops targeted virtual reality experience environments for different types of inmates; the program content is combined with the inmate's personal experience, while factors such as the inmate's age and educational background are comprehensively considered, so that the inmate obtains an emotion-eliciting experience with a strong sense of personal relevance. The fluctuations of affective state brought about by appropriate emotional arousal cause immediate responses in facial expression, physiological signals and speech intonation; features of this information are extracted and intelligently analyzed and judged by machine learning algorithms, so that the subject's psychological health state and degree of reform can be well quantified.
(3) Effective measurement of psychological state is both the key and the difficulty in realizing psychological state assessment; accurate measurement of emotion requires emotion-measurement theory and measuring tools from psychology. The disclosure quantifies affective state using the PAD three-dimensional emotion model. The PAD emotion model was proposed by Mehrabian and Russell in 1974; this dimensional model can effectively explain human mental states, is not limited to describing the subjective experience of emotion, and has good mapping relations with the external presentation of emotion and with physiological arousal.
(4) An intelligent prediction framework based on deep learning. In 1943, the psychologist W. McCulloch and the mathematician W. Pitts proposed, from the perspective of mathematical logic, the earliest mathematical model of the neuron and the neural network. First, an artificial neural network has a self-learning capability: different facial expressions, physiological information, voice signals and so on, together with the corresponding quantitative psychological state evaluations, can be input into the network, which through self-learning becomes able to autonomously recognize multi-source information and make psychological state judgments; this is of great significance for emotion prediction. Second, it has the ability to find optimal solutions at high speed: finding the optimized solution of a complex problem usually requires a very large amount of computation, and an artificial neural network designed for a particular problem can exploit the high-speed computing power of a computer to quickly find an optimized solution. On the basis of these advantages, a neural network can accurately find specific distribution patterns in complex information such as EEG, ECG, facial expression and voice, and establish specific feedback links between these patterns and the psychological moods produced by the human body. The disclosure connects a large number of simple processing units of an artificial neural network into an adaptive dynamic system, and analyzes biological signals through parallelism, distributed storage and the self-organizing functions of adaptive learning, so as to carry out mental health evaluation accurately and objectively.
(5) Euclidean distance on the PAD emotion model is used as the computational basis for assessing a subject's degree of mental health. Euclidean distance is a common similarity measure, and calculating it is comparatively intuitive in the context of human psychological emotion similarity. The smaller the Euclidean distance, the greater the similarity between two affective states, and vice versa. By calculating the Euclidean distance, on a PAD-based scale, between a subject and well-reformed, psychologically healthy personnel, the subject's reform progress and degree of mental health can be judged, upgrading the traditional subjective-judgment, interactive question-and-answer mode of psychological evaluation into an objective quantitative evaluation criterion based on multi-modal information.
Brief description of the drawings
The accompanying drawings, which constitute a part of this application, are used to provide further understanding of the application; the illustrative embodiments of the application and their descriptions are used to explain the application and do not constitute an undue limitation on the application.
Fig. 1 is a schematic structural diagram of the system of embodiment one;
Fig. 2(a) and Fig. 2(b) are schematic diagrams of the PAD three-dimensional emotion model of embodiment one;
Fig. 3 is an overall structural block diagram of embodiment one;
Fig. 4 is a schematic diagram of multi-source physiological signal emotion recognition based on a multilayer neural network in embodiment one;
Fig. 5 is a schematic diagram of facial expression emotion recognition based on a convolutional neural network in embodiment one;
Fig. 6 is a structural block diagram of voice-based psychological state recognition in embodiment one;
Fig. 7 is a flow chart of spectrogram generation based on speech features in embodiment one;
Fig. 8 is a schematic diagram of mental health degree assessment based on Euclidean distance in embodiment one.
Detailed description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which the application belongs.
It should be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the illustrative embodiments of the application. As used herein, unless the context clearly indicates otherwise, singular forms are also intended to include plural forms; in addition, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
In the 1980s, scientists introduced psychological research into computer science, intending to make subjective psychological states computable, that is, to understand a person's emotional reactions from face images and sound during human-computer interaction. Unlike facial expression recognition and speech emotion understanding, psychological computing based on physiological signals has unique advantages: it is authentic and objective and cannot easily be subjectively manipulated, so it can objectively reflect a person's psychological state; however, it only achieves good recognition performance for psychological states with higher arousal.
Diagnosing convicts' psychology and computing reoffending risk through psychological tests is generally carried out when a criminal enters prison, at the middle of the sentence, and before release, in order to determine the criminal's personality defects, verify the correction effect, predict the possibility of reoffending, and on this basis formulate a targeted reform scheme for the inmate. Questionnaire-based psychological assessment carries strong subjective factors: the assessment result is interfered with by the external environment and by the ideas of the person being tested, and situations may be encountered in which convicts do not cooperate and cannot be communicated with effectively; it is therefore difficult to objectively and truly reflect the mental health state of the person being tested, causing deviation in the assessment of the correctional effect.
Embodiment one provides a method for assessing inmates' psychological health state based on multi-modal information. The method is not used for the diagnosis of disease. The method comprises:
acquiring the physiological signals, facial expression images and voice signals of a successfully reformed inmate after a virtual reality scenario experience, and extracting physiological signal features, facial expression image features and voice signal features from the acquired signals;
acquiring the physiological signals, facial expression images and voice signals of an inmate under test after a virtual reality scenario experience, and extracting physiological signal features, facial expression features and voice signal features from the acquired signals;
inputting the physiological signal features, facial expression image features and voice signal features of the reformed inmate into a pre-trained neural network model, which outputs the psychological state assessment vector of the reformed inmate;
inputting the physiological signal features, facial expression features and voice signal features of the inmate under test into the pre-trained neural network model, which outputs the psychological state assessment vector of the inmate under test;
calculating the Euclidean distance between the psychological state assessment vectors of the inmate under test and the reformed inmate;
if the Euclidean distance is less than a set threshold, the psychological health state of the inmate under test is indicated as good; otherwise, it is indicated as poor.
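The distance comparison described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the example vectors and the threshold value are hypothetical.

```python
import numpy as np

def assess_inmate(vec_under_test, vec_reformed, threshold):
    """Compare two 12-dimensional psychological state assessment vectors
    by Euclidean distance; 'good' if the distance is below the threshold."""
    d = float(np.linalg.norm(np.asarray(vec_under_test, dtype=float)
                             - np.asarray(vec_reformed, dtype=float)))
    return d, ("good" if d < threshold else "poor")

# Hypothetical example vectors (integer values in the -4..4 range used here)
reformed = [1, 2, 0, 3, 2, -1, -3, 2, 0, 1, -2, 1]
under_test = [1, 1, 0, 3, 2, -1, -2, 2, 0, 1, -2, 1]
distance, verdict = assess_inmate(under_test, reformed, threshold=3.0)
```

With only two coordinates differing by one unit each, the distance is the square root of two, well under the illustrative threshold.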
As one embodiment, the training of the pre-trained neural network model comprises:
constructing a neural network model;
acquiring the physiological signals, facial expression images and voice signals of inmates serving as training samples after virtual reality scenario experiences;
extracting physiological features, facial expression features and voice features from the acquired signals, and labeling the physiological features, facial expression features and voice features of each inmate in the training samples with a psychological state assessment vector;
training the neural network model using the extracted physiological features, facial expression features and voice features together with the labeled psychological state assessment vectors, to obtain the pre-trained neural network model.
The psychological state assessment vector is a 12-row by 1-column vector; the element in each row is the quantized value of one affective state, the quantized value is an integer, and its value range is -4, -3, -2, -1, 0, 1, 2, 3, 4. The 12 rows thus cover 12 affective states: angry, awake, controlled, friendly, calm, dominant, pained, interested, humble, excited, overcautious and powerful.
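A sketch of how such a 12-row quantized vector might be constructed and range-checked follows. The function name and the convention that unlisted states default to neutral are illustrative assumptions, not part of the disclosure; the state names and the -4..4 range come from the text above.

```python
# The 12 affective states named in the disclosure, in a fixed order.
AFFECTIVE_STATES = [
    "angry", "awake", "controlled", "friendly", "calm", "dominant",
    "pained", "interested", "humble", "excited", "overcautious", "powerful",
]

def make_assessment_vector(scores):
    """Build a 12-element psychological state assessment vector from a
    state-name -> integer-score mapping; each score must lie in -4..4.
    (Hypothetical helper; unlisted states default to 0 by assumption.)"""
    vec = []
    for state in AFFECTIVE_STATES:
        v = int(scores.get(state, 0))
        if not -4 <= v <= 4:
            raise ValueError(f"score for {state!r} out of range: {v}")
        vec.append(v)
    return vec

v = make_assessment_vector({"angry": -3, "friendly": 4, "calm": 2})
```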
As one embodiment, the psychological state assessment vectors used to label the physiological features, facial expression features and voice features of each inmate in the training samples are annotated on the basis of the PAD emotion recognition scale:
for the same subject after the same virtual reality scenario experience, the subject's three-dimensional emotion recognition scale is collected; this is collected N times in total;
for the same subject under the same virtual reality scenario, the values of the N collected three-dimensional emotion recognition scales are averaged, and the resulting psychological state assessment vector is the subject's psychological state assessment vector after that virtual reality scenario experience;
for the same subject, the next virtual reality scenario is presented and experienced, yielding the psychological state assessment vector for the next scenario; in this way the same subject's psychological state assessment vectors under different virtual reality scenarios are obtained;
the next subject is then processed, and so on, so that the psychological state assessment vectors of different subjects under different virtual reality scenario experiences are obtained;
finally, the psychological state assessment vectors of the different subjects under the different virtual reality scenario experiences are used to label the physiological features, facial expression features and voice features extracted from those subjects under those scenarios.
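The averaging of N repeated scale acquisitions into one label can be sketched as follows; the example acquisition values are hypothetical.

```python
import numpy as np

def label_from_repeated_scales(scale_acquisitions):
    """Average N repeated scale acquisitions (each a 12-element vector)
    into a single psychological state assessment vector label."""
    arr = np.asarray(scale_acquisitions, dtype=float)  # shape (N, 12)
    return arr.mean(axis=0)

# Three hypothetical acquisitions for one subject under one scenario
acquisitions = [
    [1, 2, 0, 3, 2, -1, -3, 2, 0, 1, -2, 1],
    [1, 2, 0, 3, 2, -1, -3, 2, 0, 1, -2, 1],
    [1, 0, 0, 3, 2, -1, -3, 2, 0, 1, -2, 1],
]
label = label_from_repeated_scales(acquisitions)
```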
As one embodiment, the virtual reality scenarios comprise: a social-danger-analysis virtual reality scenario reproducing typical cases formulated according to a psychological evaluation scale, an active-reform virtual reality scenario, and a rebirth virtual reality scenario. The psychological evaluation scale comprises: the six sub-test scales of the "Chinese Convict Psychology Assessment System", or the general personality inventories used by prisons worldwide; the latter include one or more of the following scales: the Eysenck Personality Questionnaire, the Minnesota Multiphasic Personality Inventory, or the Cattell Sixteen Personality Factor Questionnaire.
As one embodiment, as shown in Fig. 3, the physiological signals are acquired by:
a photoelectric clip arranged on the subject's thumb, acquiring a blood volume pulse signal or heart rate signal;
electrodes arranged on the subject's wrists and ankles, acquiring an electrocardiogram (ECG) signal;
a conductivity sensor arranged on a finger, acquiring a skin conductance signal;
electrodes arranged on the forearm, acquiring the subject's electromyogram (EMG);
a sensor arranged at the subject's chest, acquiring a respiration signal; or
EEG test electrodes, acquiring EEG signals.
It is to be understood that the above signals are only some exemplary illustrations.
As one embodiment, the physiological signal features refer to:
blood volume pulse signal features, comprising: the mean, variance, maximum, minimum or median of the blood volume pulse signal amplitude;
heart rate signal features, comprising: the mean, variance, maximum, minimum or median of the heart rate signal amplitude;
ECG signal features: the 0-10 Hz range of the ECG signal spectrum is divided into 8 non-overlapping sub-bands, and the mean Fourier transform value of each sub-band is taken as a feature; at the same time, the 8 sub-bands are merged into two bands, sub-bands 1-3 forming a low-frequency band and sub-bands 4-8 a high-frequency band, and the ratio of the mean Fourier transform values of the two bands is calculated as a further feature;
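The ECG sub-band scheme just described can be sketched as follows. This is a minimal illustration under the stated assumptions (8 equal sub-bands over 0-10 Hz, FFT magnitude means, low/high ratio); the test signal and sample rate are hypothetical.

```python
import numpy as np

def ecg_subband_features(ecg, fs):
    """Split the 0-10 Hz range of an ECG spectrum into 8 equal
    non-overlapping sub-bands; return the 8 sub-band means of the
    FFT magnitude plus the low(1-3)/high(4-8) band ratio."""
    spectrum = np.abs(np.fft.rfft(ecg))
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    edges = np.linspace(0.0, 10.0, 9)          # 8 sub-bands of 1.25 Hz each
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        means.append(spectrum[mask].mean())
    low = np.mean(means[:3])                    # sub-bands 1-3
    high = np.mean(means[3:])                   # sub-bands 4-8
    return means + [low / high]

fs = 256.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 1.0 * t)               # hypothetical 1 Hz component
feats = ecg_subband_features(sig, fs)
```

For a pure 1 Hz component, almost all energy lands in the first sub-band, so the first mean dominates and the low/high ratio is large.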
skin conductance signal features, comprising: the mean of the skin conductance signal amplitude, its variance, its first-order-difference mean, its root mean square, or the mean absolute value of its adjacent differences;
EMG signal features, comprising: the EMG signal power spectral density;
respiration signal features, comprising: the average power spectral density in the four frequency bands 0-0.1 Hz, 0.1-0.2 Hz, 0.2-0.3 Hz and 0.3-0.4 Hz on the power spectrum of the respiration signal;
EEG signal features, comprising: the EEG signal power spectral density, i.e. the signal power per unit frequency band.
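The band-averaged respiration power spectral density can be sketched with a simple periodogram; the sample rate and test breathing signal are hypothetical, and a production system might prefer an averaged (Welch-style) estimate.

```python
import numpy as np

def respiration_band_psd(resp, fs):
    """Periodogram-based power spectral density of a respiration signal,
    averaged in the four bands 0-0.1, 0.1-0.2, 0.2-0.3 and 0.3-0.4 Hz."""
    n = len(resp)
    psd = (np.abs(np.fft.rfft(resp)) ** 2) / (fs * n)   # simple periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bands = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.3, 0.4)]
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[mask].mean())
    return feats

fs = 10.0                                   # hypothetical sample rate
t = np.arange(0, 60, 1 / fs)
breath = np.sin(2 * np.pi * 0.25 * t)       # about 15 breaths per minute
feats = respiration_band_psd(breath, fs)
```

A 0.25 Hz breathing rhythm concentrates its power in the third band (0.2-0.3 Hz).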
As one embodiment, the facial expression features are acquired as follows: facial expression images of the inmate after a virtual reality scenario experience are captured by a camera; image transformations are applied to the facial expression images to extend the data set, and feature extraction is then carried out to obtain the texture features of the images.
As one embodiment, the voice signal features are acquired as follows: the voice signal of the inmate after a virtual reality scenario experience is acquired by a microphone; the voice signal is divided into several frames, a fast Fourier transform (FFT) is applied to each frame of the voice signal to obtain frequency-domain characteristics, and feature extraction is performed on the voice signal to extract pitch features or speech-rate features.
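The framing-plus-FFT step can be sketched as below. The frame length, hop size and Hamming window are common conventions assumed for illustration; the disclosure does not specify them.

```python
import numpy as np

def frame_fft(speech, fs, frame_ms=25, hop_ms=10):
    """Split a speech signal into overlapping frames and apply an FFT to
    each frame, returning per-frame magnitude spectra."""
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    window = np.hamming(frame_len)
    spectra = []
    for start in range(0, len(speech) - frame_len + 1, hop):
        frame = speech[start:start + frame_len] * window
        spectra.append(np.abs(np.fft.rfft(frame)))
    return np.array(spectra)        # shape: (num_frames, frame_len // 2 + 1)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)  # hypothetical 440 Hz test tone
spec = frame_fft(tone, fs)
```

At 16 kHz with 25 ms frames the bin spacing is 40 Hz, so a 440 Hz tone peaks at bin 11 of each frame's spectrum.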
As one embodiment, the crime type of the inmate under test is the same as the crime type of the reformed inmate.
As one embodiment, the specific steps of training the neural network model using the extracted physiological features, facial expression features and voice features together with the labeled psychological state assessment vectors, to obtain the pre-trained neural network model, are: feature fusion is carried out on the physiological features, facial expression features and voice features; the fused features are input into the neural network model, which outputs a predicted value of the inmate's quantitative psychological state assessment vector; the difference between the predicted value and the labeled psychological state assessment vector is calculated, and training stops when the difference is minimal, yielding the trained prediction model.
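The train-until-the-difference-is-minimal loop can be sketched with a deliberately simplified stand-in for the network: a single linear layer fitted by gradient descent on the squared difference. The synthetic features, labels and dimensions are hypothetical; the disclosure's actual model is a multilayer neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused feature vectors (physiological + facial + voice features
# concatenated) and their labeled 12-dim psychological state assessment vectors.
X = rng.normal(size=(64, 8))                 # 64 samples, 8 fused features
true_W = rng.normal(size=(8, 12))
Y = X @ true_W                               # synthetic labels for the sketch

# Minimal stand-in for the network: a linear layer trained by gradient descent
# to minimize the squared difference between prediction and label.
W = np.zeros((8, 12))
for _ in range(1000):
    pred = X @ W
    grad = X.T @ (pred - Y) / len(X)         # gradient of mean squared error
    W -= 0.1 * grad

final_error = float(np.mean((X @ W - Y) ** 2))
```

On this noiseless synthetic data the squared difference is driven essentially to zero, mirroring the "stop when the difference is minimal" criterion.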
Embodiment two provides a system for assessing inmates' psychological health state based on multi-modal information.
As shown in Fig. 1, the system comprises:
a reformed-inmate data acquisition module, which acquires the physiological signals, facial expression images and voice signals of a successfully reformed inmate after a virtual reality scenario experience;
a reformed-inmate feature extraction module, which extracts physiological signal features, facial expression image features and voice signal features from the acquired signals;
an inmate-under-test data acquisition module, which acquires the physiological signals, facial expression images and voice signals of an inmate under test after a virtual reality scenario experience;
an inmate-under-test feature extraction module, which extracts physiological signal features, facial expression features and voice signal features from the acquired signals;
a first psychological state assessment vector output module, which inputs the physiological signal features, facial expression image features and voice signal features of the reformed inmate into a pre-trained neural network model and outputs the psychological state assessment vector of the reformed inmate;
a second psychological state assessment vector output module, which inputs the physiological signal features, facial expression features and voice signal features of the inmate under test into the pre-trained neural network model and outputs the psychological state assessment vector of the inmate under test; and
a psychological health state evaluation module, which calculates the Euclidean distance between the psychological state assessment vectors of the inmate under test and the reformed inmate, and assesses the psychological health state of the inmate under test according to the distance.
As one embodiment, the system further comprises:
a crime type identification module, which identifies the crime type of an inmate from the inmate's entered fingerprint information and the known one-to-one correspondence between crime types and inmates' fingerprint information. When an inmate is imprisoned, prison staff record the inmate's crime type and associate it with the inmate's fingerprint information; and
a virtual reality emotion elicitation module, which retrieves from a database the virtual reality scenario corresponding to the inmate's crime type and presents it to the inmate for viewing via a virtual reality headset.
As one embodiment, the multi-modal-information-based psychological health state assessment system for inmates further includes:
A training data collection module, which acquires the physiological signals, facial expression images, and voice signals of inmates serving as training samples after the virtual reality scenario experience, and extracts physiological features, facial expression features, and voice features from the acquired signals;
A training data labeling module, which labels the physiological features, facial expression features, and voice features of each inmate in the training samples with a psychological state assessment vector;
A machine learning emotion prediction model training module, which trains the neural network model using the extracted physiological features, facial expression features, and voice features together with the labeled psychological state assessment vectors, yielding the pre-trained neural network model.
Embodiment three: this embodiment further provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the method of embodiment one are completed.
Embodiment four: this embodiment further provides a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the method of embodiment one are completed.
Embodiment five: this embodiment further provides a multi-modal-information-based psychological health state assessment system for inmates, comprising:
a physiological parameter acquisition device, an image acquisition device, a voice acquisition device, and the electronic device of embodiment three;
the physiological parameter acquisition device, image acquisition device, and voice acquisition device transmit the acquired data to the electronic device;
the electronic device assesses the psychological health state of the inmate according to the collected data.
The physiological parameter acquisition device includes one or more of the following: a photoelectric clip, electrodes, or a conductivity sensor.
The image acquisition device comprises a camera;
the voice acquisition device comprises a microphone.
The present disclosure builds a virtual reality scenario experience platform to elicit the subject's emotions, and on this basis establishes a multi-modal quantitative psychological health assessment strategy combining facial expressions with multiple physiological signals. It can accurately and objectively reflect the correctional effect on inmates and provides a reference for further reform work.
Affective computing based on physiological signals relies on an emotion model; available models include those of basic emotion theory, dimensional-space theory, and cognitive neuroscience. The disclosure uses the PAD three-dimensional emotion model as the basis of psychological evaluation. As shown in Fig. 2(a) and Fig. 2(b), the three dimensions are:
Pleasure (P), the positive or negative character of the individual's affective state;
Arousal (A), the individual's level of neurophysiological activation;
Dominance (D), the individual's state of control over the situation and other people.
Each dimension is divided into four items, and each item is scored from -4 to 4.
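The scoring scheme above (three dimensions, four items each, each item scored from -4 to 4) can be sketched as follows. Averaging the four items into one dimension score is an illustrative assumption, since the text does not state how items are combined, and the function and value names are invented:

```python
# Sketch of PAD scale aggregation. The four-items-per-dimension layout and the
# [-4, 4] item range follow the text; the averaging rule is an assumption.

def pad_scores(items):
    """items: dict mapping 'P', 'A', 'D' to four item scores in [-4, 4]."""
    for dim, vals in items.items():
        assert len(vals) == 4 and all(-4 <= v <= 4 for v in vals)
    # One simple choice: average the four items of each dimension.
    return {dim: sum(vals) / len(vals) for dim, vals in items.items()}

scale = {
    "P": [3, 2, 4, 3],   # pleasure: positive/negative character of the state
    "A": [1, 0, 2, 1],   # arousal: neurophysiological activation level
    "D": [-1, 0, 1, 0],  # dominance: control over the situation and others
}
print(pad_scores(scale))
```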
A neural network is a system with learning ability; by accumulating knowledge it can exceed the original knowledge level of its designer. Supervised (tutored) learning is one of its training modes. The disclosure expresses the subject's psychological state quantitatively through the PAD three-dimensional emotion model, uses this quantification as the supervision label for the acquired multi-source information, and trains a designed neural network architecture, obtaining an intelligent system that objectively and quantitatively assesses psychological health from multi-modal information such as facial expressions, multiple physiological signals, and speech intonation.
The acquisition setup avoids, as far as possible, interfering with the subject's viewing of the VR program. A camera installed behind a shield captures the experiencer's synchronized facial expressions. After each experience stage ends, the subject is asked to describe the viewing experience, yielding voice information. For each stage, a professional psychiatrist determines the subject's affective state using the three-dimensional emotion (PAD) recognition scale and labels the acquired physiological signals, facial expression information, and voice information with that affective state, establishing one complete sample. At the same time, the PAD scales of all personnel participating in building the data set are averaged stage by stage, and the standard affective state of each stage is represented by a twelve-dimensional vector whose entries lie between -4 and 4. The calculation is:
Standard(j) = (1/n) Σ_{i=1}^{n} PAD_i(j)
where i indexes the collected samples, n is the sample size, and j is the experience stage: Standard(1) is the standard emotional experience obtained in the first experience, Standard(2) the standard emotional experience obtained in the second, and Standard(3) the standard emotional experience obtained in the third.
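The stage-wise averaging described above can be sketched directly, assuming plain element-wise means over the n raters' twelve-dimensional scale vectors (the sample values below are invented for illustration):

```python
def standard_vector(ratings):
    """ratings: list of n twelve-dimensional PAD-scale vectors (one per rater)
    collected for the same experience stage; returns their element-wise mean,
    the 'standard affective state' Standard(j) for that stage."""
    n = len(ratings)
    assert n > 0 and all(len(r) == 12 for r in ratings)
    return [sum(r[k] for r in ratings) / n for k in range(12)]

# Two raters for one stage, entries in [-4, 4]:
r1 = [3, 1, 0, 2, -1, 0, 1, 2, 0, 1, -2, 3]
r2 = [1, 3, 0, 0, 1, 2, 1, 0, 2, 1, 0, 1]
print(standard_vector([r1, r2]))
```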
As shown in Fig. 3 and Fig. 4, a multilayer neural network is used as the physiological-signal affective state recognition model. Based on a supervised learning strategy, a standard and an update rule are designed so that the error between the output and the standard keeps shrinking until it converges into a reasonable range.
The neural network training steps are:
(1) Initialize the weights with appropriate values.
(2) Feed the "input" of a training pair {input, correct output}, i.e. the physiological signal features, into the network. The model's output is the subject's emotional expression on the simplified PAD scale; the "correct output" is the professional psychiatrist's psychological state label for the subject. Compute the error vector and the output-node increment δ from the difference between the label and the model output, i.e.
e = d − y
(3) Propagate the output-node increment δ backwards and compute the increment of the next hidden layer, i.e.
e⁽ᵏ⁾ = Wᵀδ
(4) Repeat step (3) until the computation reaches the hidden layer adjacent to the input layer.
(5) Adjust the weights according to the learning rule, i.e.
Δω_ij = α δ_i x_j
ω_ij ← ω_ij + Δω_ij
(6) Repeat steps (2)-(5) for all training data.
(7) Repeat steps (2)-(6) until an ideal neural network model is trained.
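Steps (1)-(7) can be sketched as a minimal one-hidden-layer network in pure Python. The sigmoid activation, learning rate, hidden width, and toy data are assumptions not fixed by the text; the δ terms include the sigmoid derivative, one common choice for the increments in steps (2)-(3):

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def train(data, hidden=4, alpha=0.5, epochs=1000, seed=0):
    """Minimal sketch of steps (1)-(7): one hidden layer, sigmoid units,
    delta-rule updates dw_ij = alpha * delta_i * x_j."""
    rng = random.Random(seed)
    n_in, n_out = len(data[0][0]), len(data[0][1])
    W1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]    # step (1)
    W2 = [[rng.uniform(-1, 1) for _ in range(hidden)] for _ in range(n_out)]
    for _ in range(epochs):                                                    # step (7)
        for x, d in data:                                                      # step (6)
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]  # step (2)
            y = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
            delta2 = [(di - yi) * yi * (1 - yi) for di, yi in zip(d, y)]       # e = d - y
            e1 = [sum(W2[i][k] * delta2[i] for i in range(n_out))              # step (3):
                  for k in range(hidden)]                                      # e = W^T delta
            delta1 = [ek * hk * (1 - hk) for ek, hk in zip(e1, h)]
            for i in range(n_out):                                             # step (5)
                for j in range(hidden):
                    W2[i][j] += alpha * delta2[i] * h[j]
            for i in range(hidden):
                for j in range(n_in):
                    W1[i][j] += alpha * delta1[i] * x[j]
    return W1, W2

def predict(W1, W2, x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]

def mse(W1, W2, data):
    return sum((di - yi) ** 2 for x, d in data
               for di, yi in zip(d, predict(W1, W2, x))) / len(data)

# Toy data standing in for {physiological features, marked PAD state}:
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
W1_0, W2_0 = train(data, epochs=0)   # initial weights only
W1, W2 = train(data, epochs=1000)    # training error shrinks, as in the text
```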
For the subjects' facial expressions acquired during the virtual reality experience stages, each acquired picture undergoes four transformations: rotation, horizontal translation, vertical translation, and horizontal flip, which extends the data set. A single convolutional neural network (CNN) model classifies and annotates the psychological feedback shown by facial micro-expressions; the annotation takes the form of a twelve-dimensional emotion-expression vector. A CNN can discover features hidden in a picture that discriminate better than manually extracted features, while requiring little preprocessing of the raw data.
As shown in Fig. 5, the disclosure takes a fixed-size grayscale image as input and builds a convolutional neural network from convolutional layers, pooling layers, and a fully connected layer, with each pooling layer following its corresponding convolutional layer. Each neuron of a convolutional layer connects only to a part of the neurons in the previous layer, so that every neuron perceives a local visual feature; higher layers then integrate the local information to finally obtain a description of the entire picture. A weight-sharing strategy is used to extract features over the whole picture: the weights connecting a neuron to its local patch of the previous layer are identical for all other neurons of the current layer, which effectively reduces the number of trained parameters. A down-sampling strategy compresses a range of pixels into one pixel, reducing the feature dimensionality and enhancing generalization. After all convolutional layers, a fully connected layer with 256 input neurons and 12 output neurons is attached; the activation function is ReLU, and the CNN parameters are trained with a stochastic gradient descent algorithm.
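The local connectivity, weight sharing, ReLU, and down-sampling described above can be illustrated with a single-kernel convolution and max-pooling on a tiny grayscale grid. The kernel and image are invented for illustration, and the actual layer sizes of the disclosure are not reproduced:

```python
def conv2d(img, kernel):
    """Weight sharing: the same kernel slides over the whole image, so each
    output neuron connects only to a local patch of the previous layer.
    ReLU is folded into the layer for brevity."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(max(0.0, s))  # ReLU activation
        out.append(row)
    return out

def maxpool2d(img, size=2):
    """Down-sampling: a size x size block of pixels is compressed to one value."""
    return [[max(img[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(img[0]) - size + 1, size)]
            for i in range(0, len(img) - size + 1, size)]

gray = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 0, 0],
        [1, 1, 0, 0]]
edge = [[1, -1], [1, -1]]   # toy vertical-edge kernel
fmap = conv2d(gray, edge)   # 3x3 feature map of local visual features
pooled = maxpool2d(fmap)    # pooled summary passed to higher layers
```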
As shown in Fig. 6, the disclosure uses the inmates' verbal expressions collected after the virtual reality scenario experience, together with the corresponding quantitative psychological state labels, to build a speech corpus, on which an intelligent psychological state measurement structure framed as a multilayer perceptron is trained.
As shown in Fig. 7, each speech segment is divided into frames, and each frame is transformed to the frequency domain by a fast Fourier transform. The spectrum of each frame is plotted, rotated by 90°, and the magnitudes are mapped to a gray-level representation (continuous amplitudes quantized to 256 levels; the darker the color, the larger the amplitude). This yields the frequency spectrum of the speech segment over time, i.e. the spectrogram of the voice signal, which contains both static and dynamic information. After the speech feature parameters are extracted, the multilayer perceptron applies nonlinear operations to the input pattern through a large number of connection weights; the output unit receiving the maximum excitation represents the psychological state assessment corresponding to the input pattern. During use, the connection weights of the neural network are continuously and adaptively corrected according to whether the recognition results are correct.
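The framing, Fourier transform, and 256-level gray quantization can be sketched as follows. A plain DFT stands in for the FFT for brevity, and the frame length and test tone are illustrative assumptions:

```python
import cmath, math

def dft_magnitudes(frame):
    """Magnitudes of the discrete Fourier transform of one frame (the text
    uses a fast Fourier transform; a plain DFT is shown for clarity)."""
    N = len(frame)
    return [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

def spectrogram(signal, frame_len=8):
    """Split the signal into frames and quantize each frame's spectrum to
    256 gray levels (0..255): the larger the value, the larger the amplitude."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    mags = [dft_magnitudes(f) for f in frames]
    peak = max(m for row in mags for m in row) or 1.0
    return [[round(255 * m / peak) for m in row] for row in mags]

# A toy "voice" signal: a pure tone landing in bin 1 of an 8-point frame.
tone = [math.sin(2 * math.pi * 1 * n / 8) for n in range(16)]
spec = spectrogram(tone)  # one quantized spectrum row per frame
```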
Through the effective emotion elicitation stages of the virtual reality experience platform, psychological state expressions of the subject based on multi-source physiological signals, facial information, spoken language, and the like are obtained. Feeding this information about an inmate into the trained machine learning models finally yields an objective quantitative psychological assessment vector:
Out(j) = α1·P(j) + α2·F(j) + α3·V(j)
where j denotes the different stages of the VR experience: j = 1 is the related-case reproduction stage, j = 2 the stage on the social harm caused by the case, and j = 3 the encouragement stage. The α values are background factors set specifically according to the inmate's age, crime type, educational background, gender, and similar information: α1 is the weight of the psychological state assessment vector based on multi-source physiological information (P), α2 the weight of the vector based on facial expressions (F), and α3 the weight of the vector based on verbal expression (V), with α1 + α2 + α3 = 1. Through this background-factor strategy, the disclosure combines the machine-learned psychological state assessment model based on multi-modal information with the significant differences between individual inmates, achieving objectivity, accuracy, and specificity at the same time.
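The weighted fusion with background factors summing to 1 reduces to a per-element weighted sum of the three modality assessment vectors. The weight values below are placeholders, since the text sets them per inmate background:

```python
def fuse(phys, face, voice, alpha=(0.4, 0.3, 0.3)):
    """Weighted combination of the three modality assessment vectors; the
    weights are background factors (age, crime type, education, gender, ...)
    and must satisfy a1 + a2 + a3 = 1."""
    a1, a2, a3 = alpha
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9
    return [a1 * p + a2 * f + a3 * v for p, f, v in zip(phys, face, voice)]

# Twelve-dimensional vectors, one entry per affective state (toy values):
out = fuse([2.0] * 12, [1.0] * 12, [3.0] * 12)
```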
As shown in Fig. 8, the degree of psychological health is quantified by a test procedure based on Euclidean distance:
Health(j) = [x_{i,1}, x_{i,2}, ..., x_{i,12}]
Out(j) = [y_{i,1}, y_{i,2}, ..., y_{i,12}]
where j denotes the experience stage and i the subject. Health(j) is the healthy-psychology standard vector of reformed inmates after the j-th virtual reality scenario experience stage. Out(j) is the psychological state assessment vector of the subject after the same stage, determined by the intelligent assessment from multi-modal information such as the subject's physiological signals, expressions, and verbal expression. The degree of offset is obtained by calculating the Euclidean distance between the assessed vector and the standard vector, and psychological health is displayed as this quantified indicator. A large offset in experience stage 1 indicates that the inmate lacks an objective, rational understanding of the crime type; a large offset in stage 2 indicates a low degree of repentance; a large offset in stage 3 indicates a shortage of actively positive psychology. The disclosure can thus effectively assist in predicting the possibility of reoffending.
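The offset degree, i.e. the Euclidean distance between Health(j) and Out(j), can be computed directly; the example vectors below are invented:

```python
import math

def offset_degree(health, out):
    """Euclidean distance between the reformed-inmate standard vector
    Health(j) and the assessed vector Out(j) for the same experience stage."""
    assert len(health) == len(out) == 12
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(health, out)))

health = [1.0] * 12
out_close = [1.0] * 11 + [2.0]      # deviates in one affective state
out_far = [1.0] * 10 + [4.0, 4.0]   # larger deviation from the standard
d1 = offset_degree(health, out_close)
d2 = offset_degree(health, out_far)
# The larger the offset, the further the subject is from the reformed standard.
```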
Introduction of technical terms:
1. Typical case reproduction refers to: realistic cases of greater social harm and typical representativeness in economic crime, violent crime, and occupational crime. Virtual reality technology reproduces the case scene realistically so that the subject obtains a vivid, concrete emotional experience.
2. Social harm analysis refers to: the harm caused by the related case to citizens' property and personal safety, and to social order, economic order, and the like. Different types of social harm are expressed through virtual reality scenes with synchronized voice narration, giving the subject a vivid, concrete emotional experience.
3. Active reform refers to: a reform process that includes organizing criminals to engage in productive labor and conducting ideological, cultural, and technical education, so as to transform and educate them, with the final purpose of making criminals fully recognize their own crimes, return to society, and not reoffend. Virtual reality and video technology present, with combined audio and video, the reform process and the statements of thought of different types of inmates who were successfully reformed and returned to society, giving the viewer a vivid, concrete emotional experience.
4. Crime types include: violent crime, occupational crime, and economic crime.
The foregoing are merely preferred embodiments of the present application and are not intended to limit it; for those skilled in the art, various modifications and changes are possible in this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within its scope of protection.
Claims (10)
1. A multi-modal-information-based psychological health state assessment method for inmates, characterized in that the method is not used for the diagnosis of disease; the method comprising:
acquiring the physiological signals, facial expression images, and voice signals of a reformed inmate after a virtual reality scenario experience, and extracting physiological signal features, facial expression image features, and voice signal features from the acquired signals;
acquiring the physiological signals, facial expression images, and voice signals of an inmate under assessment after the virtual reality scenario experience, and extracting physiological signal features, facial expression features, and voice signal features from the acquired signals;
inputting the physiological signal features, facial expression image features, and voice signal features of the reformed inmate into a pre-trained neural network model and outputting the psychological state assessment vector of the reformed inmate;
inputting the physiological signal features, facial expression features, and voice signal features of the inmate under assessment into the pre-trained neural network model and outputting the psychological state assessment vector of the inmate under assessment;
calculating the distance between the psychological state assessment vectors of the inmate under assessment and of the reformed inmate;
assessing the inmate's psychological health state according to the distance.
2. The method as claimed in claim 1, characterized in that the training steps of the pre-trained neural network model comprise:
constructing a neural network model;
acquiring the physiological signals, facial expression images, and voice signals of inmates serving as training samples after the virtual reality scenario experience;
extracting physiological features, facial expression features, and voice features from the acquired signals; labeling the physiological features, facial expression features, and voice features of each inmate in the training samples with a psychological state assessment vector;
training the neural network model using the extracted physiological features, facial expression features, and voice features and the labeled psychological state assessment vectors, to obtain the pre-trained neural network model.
3. The method as claimed in claim 2, characterized in that the psychological state assessment vector is a 12-row by 1-column vector in which each row holds the quantized value of one affective state; the quantized value is an integer with a value range of -4, -3, -2, -1, 0, 1, 2, 3, 4; the 12 rows correspond to 12 affective states, namely: angry, awake, controlled, friendly, tranquil, dominant, pained, interested, humble, excited, overcautious, and powerful.
4. The method as claimed in claim 2, characterized in that labeling the physiological features, facial expression features, and voice features of each inmate in the training samples with a psychological state assessment vector is performed on the basis of the PAD emotion recognition scale:
for the same subject after the same virtual reality scenario experience, the subject's three-dimensional emotion recognition scale is collected; N collections are made in total;
for the same subject under the same virtual reality scene, the values of the N collected three-dimensional emotion recognition scales are averaged, and the resulting psychological state assessment vector is the psychological state assessment vector of the current subject after that virtual reality scenario experience;
for the same subject, the next virtual reality scene is experienced in turn, yielding the psychological state assessment vector for that scene, and thereby the same subject's psychological state assessment vectors under different virtual reality scenes;
the next subject is then processed, and so on, obtaining the psychological state assessment vectors of different subjects under different virtual reality scenario experiences;
finally, the obtained psychological state assessment vectors of the different subjects under the different virtual reality scenario experiences are used to label the physiological features, facial expression features, and voice features extracted under those experiences.
5. The method as claimed in claim 1, characterized in that the acquisition modes of the physiological signals comprise:
acquiring a blood volume pulse signal or heart rate signal through a photoelectric clip arranged on the subject's thumb;
acquiring an electrocardiogram signal through electrodes arranged on the subject's wrists and ankles;
acquiring a skin conductance signal through a conductivity sensor arranged on the fingers;
acquiring the subject's electromyogram through electrodes arranged on the forearm;
acquiring a respiration signal through a sensor arranged at the subject's thorax; or
acquiring electroencephalogram (EEG) signals through EEG electrodes.
6. The method as claimed in claim 1, characterized in that the physiological signal features refer to:
blood volume pulse signal features, comprising: the mean, variance, maximum, minimum, or median of the blood volume pulse signal amplitude;
heart rate signal features, comprising: the mean, variance, maximum, minimum, or median of the heart rate signal amplitude;
electrocardiogram signal features: the 0-10 Hz range of the ECG signal spectrum is divided into 8 non-overlapping sub-bands, and the mean Fourier transform value of each sub-band is taken as a feature; at the same time, the 8 sub-bands are merged into two bands, sub-bands 1-3 forming the low-frequency band and sub-bands 4-8 the high-frequency band, and the ratio of the mean Fourier transform values of the two bands is calculated as a feature;
skin conductance signal features, comprising: the mean, the variance, the first-order-difference mean, the root mean square, or the mean absolute value of adjacent differences of the skin conductance signal amplitude;
electromyogram signal features, comprising: the electromyogram signal power spectral density;
respiration signal features, comprising: the average power spectral density in the four bands 0-0.1 Hz, 0.1-0.2 Hz, 0.2-0.3 Hz, and 0.3-0.4 Hz of the respiration signal power spectrum;
EEG signal features, comprising: the EEG power spectral density, i.e. the signal power per unit frequency band;
or, the acquisition mode of the facial expression features is:
acquiring facial expression images of the inmate after the virtual reality scenario experience through a camera; applying image transformations to the facial expression images to extend the data set, then performing feature extraction to obtain the texture features of the images;
or, the acquisition mode of the voice signal features is:
acquiring the voice signal of the inmate after the virtual reality scenario experience through a microphone; dividing the voice signal into several frames, applying a fast Fourier transform (FFT) to each frame to obtain frequency-domain features, and performing feature extraction on the voice signal to extract tone features or speech rate features;
or, the specific steps of training the neural network model using the extracted physiological features, facial expression features, and voice features and the labeled psychological state assessment vectors, to obtain the pre-trained neural network model, are:
performing feature fusion on the physiological features, facial expression features, and voice features; inputting the fused features into the neural network model; outputting a predicted value of the inmate's quantified psychological state assessment vector; calculating the difference between the predicted value and the inmate's labeled psychological state assessment vector; and stopping training when the difference is minimal, obtaining the trained prediction model.
7. A multi-modal-information-based psychological health state assessment system for inmates, characterized by comprising:
a reformed-inmate data acquisition module, which acquires the physiological signals, facial expression images, and voice signals of a reformed inmate after a virtual reality scenario experience;
a reformed-inmate feature extraction module, which extracts physiological signal features, facial expression image features, and voice signal features from the acquired signals;
a data acquisition module for the inmate under assessment, which acquires the physiological signals, facial expression images, and voice signals of the inmate under assessment after the virtual reality scenario experience;
a feature extraction module for the inmate under assessment, which extracts physiological signal features, facial expression features, and voice signal features from the acquired signals;
a first psychological state assessment vector output module, which inputs the physiological signal features, facial expression image features, and voice signal features of the reformed inmate into a pre-trained neural network model and outputs the psychological state assessment vector of the reformed inmate;
a second psychological state assessment vector output module, which inputs the physiological signal features, facial expression features, and voice signal features of the inmate under assessment into the pre-trained neural network model and outputs the psychological state assessment vector of the inmate under assessment;
a psychological health state assessment module, which calculates the Euclidean distance between the psychological state assessment vectors of the inmate under assessment and of the reformed inmate, and assesses the psychological health state of the inmate under assessment according to the distance.
8. An electronic device, characterized by comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the method of any one of claims 1-6 are completed.
9. A computer-readable storage medium, characterized by being used to store computer instructions; when the computer instructions are executed by a processor, the steps of the method of any one of claims 1-6 are completed.
10. A multi-modal-information-based psychological health state assessment system for inmates, characterized by comprising:
a physiological parameter acquisition device, an image acquisition device, a voice acquisition device, and the electronic device of claim 8;
the physiological parameter acquisition device, image acquisition device, and voice acquisition device transmit the acquired data to the electronic device;
the electronic device assesses the psychological health state of the inmate according to the collected data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910784156.6A CN110507335B (en) | 2019-08-23 | 2019-08-23 | Multi-mode information based criminal psychological health state assessment method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110507335A true CN110507335A (en) | 2019-11-29 |
CN110507335B CN110507335B (en) | 2021-01-01 |
Family
ID=68627532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910784156.6A Active CN110507335B (en) | 2019-08-23 | 2019-08-23 | Multi-mode information based criminal psychological health state assessment method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110507335B (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028919A (en) * | 2019-12-03 | 2020-04-17 | 北方工业大学 | Phobia self-diagnosis and treatment system based on artificial intelligence algorithm |
CN111125525A (en) * | 2019-12-24 | 2020-05-08 | 山东大学 | Individual modification and correction strategy recommendation system for prisoners and operation method thereof |
CN111145851A (en) * | 2019-12-27 | 2020-05-12 | 山东华尚电气有限公司 | Mental state monitoring and evaluating system based on intelligent bracelet |
CN111222464A (en) * | 2020-01-07 | 2020-06-02 | 中国医学科学院生物医学工程研究所 | Emotion analysis method and system |
CN111449684A (en) * | 2020-04-09 | 2020-07-28 | 济南康硕生物技术有限公司 | Method and system for quickly acquiring cardiac ultrasound standard scanning section |
CN111507592A (en) * | 2020-04-08 | 2020-08-07 | 山东大学 | Evaluation method for active modification behaviors of prisoners |
CN111513732A (en) * | 2020-04-29 | 2020-08-11 | 山东大学 | Intelligent psychological stress assessment early warning system for various groups of people under epidemic disease condition |
CN111723869A (en) * | 2020-06-22 | 2020-09-29 | 山东大学 | Special personnel-oriented intelligent behavior risk early warning method and system |
CN111967355A (en) * | 2020-07-31 | 2020-11-20 | 华南理工大学 | Prison-crossing intention evaluation method for prisoners based on body language |
CN112155577A (en) * | 2020-10-15 | 2021-01-01 | 深圳大学 | Social pressure detection method and device, computer equipment and storage medium |
CN112185493A (en) * | 2020-08-26 | 2021-01-05 | 山东大学 | Personality preference diagnosis device and project recommendation system based on same |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5601090A (en) * | 1994-07-12 | 1997-02-11 | Brain Functions Laboratory, Inc. | Method and apparatus for automatically determining somatic state |
KR20080009458A (en) * | 2006-07-24 | 2008-01-29 | 중앙대학교 산학협력단 | System for recognizing emotion using neural network |
CN102564424A (en) * | 2011-12-29 | 2012-07-11 | 上海电机学院 | Multi-sensor data fusion method |
US20140330089A1 (en) * | 2013-05-03 | 2014-11-06 | The Charles Stark Draper Laboratory, Inc. | Physiological feature extraction and fusion to assist in the diagnosis of post-traumatic stress disorder |
CN105224961A (en) * | 2015-11-04 | 2016-01-06 | 中国电子科技集团公司第四十一研究所 | High-resolution diffuse reflectance infrared spectrum extraction and matching method |
CN105683724A (en) * | 2013-09-19 | 2016-06-15 | 欧莱雅公司 | Systems and methods for measuring and categorizing colors and spectra of surfaces |
CN106446550A (en) * | 2016-09-28 | 2017-02-22 | 湖南老码信息科技有限责任公司 | Cold prediction method and system based on an incremental neural network model |
CN106861012A (en) * | 2017-02-22 | 2017-06-20 | 南京邮电大学 | User emotion adjustment method based on a smart bracelet in VR experience scenes |
CN107007291A (en) * | 2017-04-05 | 2017-08-04 | 天津大学 | Stress intensity recognition system and information processing method based on multiple physiological parameters |
CN107220591A (en) * | 2017-04-28 | 2017-09-29 | 哈尔滨工业大学深圳研究生院 | Multi-modal intelligent mood sensing system |
CN107242876A (en) * | 2017-04-20 | 2017-10-13 | 合肥工业大学 | Computer vision method for auxiliary diagnosis of mental state |
CN107437090A (en) * | 2016-05-28 | 2017-12-05 | 郭帅杰 | Tri-modal continuous emotion prediction method based on voice, facial expression and electrocardiogram signals |
CN108806722A (en) * | 2017-04-21 | 2018-11-13 | 艾于德埃林公司 | Method for automatic affective state inference and automated affective state inference system |
CN108888281A (en) * | 2018-08-16 | 2018-11-27 | 华南理工大学 | Mental state assessment method, device and system |
CN109124655A (en) * | 2018-07-04 | 2019-01-04 | 中国电子科技集团公司电子科学研究院 | Mental state analysis method, apparatus, device, computer medium and multifunctional chair |
CN109157231A (en) * | 2018-10-24 | 2019-01-08 | 阿呆科技(北京)有限公司 | Portable multi-channel depression tendency assessment system based on emotion-eliciting tasks |
CN109998570A (en) * | 2019-03-11 | 2019-07-12 | 山东大学 | Inmate psychological state assessment method, terminal, device and system |
- 2019-08-23 CN CN201910784156.6A patent/CN110507335B/en active Active
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028919A (en) * | 2019-12-03 | 2020-04-17 | 北方工业大学 | Phobia self-diagnosis and treatment system based on artificial intelligence algorithm |
CN111125525A (en) * | 2019-12-24 | 2020-05-08 | 山东大学 | Individual modification and correction strategy recommendation system for prisoners and operation method thereof |
CN111125525B (en) * | 2019-12-24 | 2023-09-15 | 山东大学 | Personalized transformation correction strategy recommendation system for prisoner and operation method thereof |
CN111145851A (en) * | 2019-12-27 | 2020-05-12 | 山东华尚电气有限公司 | Mental state monitoring and evaluating system based on intelligent bracelet |
CN111222464B (en) * | 2020-01-07 | 2023-11-07 | 中国医学科学院生物医学工程研究所 | Emotion analysis method and system |
CN111222464A (en) * | 2020-01-07 | 2020-06-02 | 中国医学科学院生物医学工程研究所 | Emotion analysis method and system |
CN113140312A (en) * | 2020-01-19 | 2021-07-20 | Oppo广东移动通信有限公司 | User data processing method and device, session data processing method and device, and electronic equipment |
CN111507592A (en) * | 2020-04-08 | 2020-08-07 | 山东大学 | Evaluation method for active modification behaviors of prisoners |
CN111507592B (en) * | 2020-04-08 | 2022-03-15 | 山东大学 | Evaluation method for active modification behaviors of prisoners |
CN111449684A (en) * | 2020-04-09 | 2020-07-28 | 济南康硕生物技术有限公司 | Method and system for quickly acquiring cardiac ultrasound standard scanning section |
CN111449684B (en) * | 2020-04-09 | 2023-05-05 | 济南康硕生物技术有限公司 | Method and system for rapidly acquiring standard scanning section of heart ultrasound |
CN111513732A (en) * | 2020-04-29 | 2020-08-11 | 山东大学 | Intelligent psychological stress assessment early warning system for various groups of people under epidemic disease condition |
CN111723869A (en) * | 2020-06-22 | 2020-09-29 | 山东大学 | Special personnel-oriented intelligent behavior risk early warning method and system |
CN111967355A (en) * | 2020-07-31 | 2020-11-20 | 华南理工大学 | Jail-breaking intention assessment method for prisoners based on body language |
CN111967355B (en) * | 2020-07-31 | 2023-09-01 | 华南理工大学 | Prisoner jail-breaking intention assessment method based on limb language |
CN112185493A (en) * | 2020-08-26 | 2021-01-05 | 山东大学 | Personality preference diagnosis device and project recommendation system based on same |
CN112185558A (en) * | 2020-09-22 | 2021-01-05 | 珠海中科先进技术研究院有限公司 | Mental health and rehabilitation evaluation method, device and medium based on deep learning |
CN112190264A (en) * | 2020-10-09 | 2021-01-08 | 安徽美心信息科技有限公司 | Intelligent psychological body and mind feedback analysis system |
CN112155577A (en) * | 2020-10-15 | 2021-01-01 | 深圳大学 | Social pressure detection method and device, computer equipment and storage medium |
CN112155577B (en) * | 2020-10-15 | 2023-05-05 | 深圳大学 | Social pressure detection method and device, computer equipment and storage medium |
CN112842337A (en) * | 2020-11-11 | 2021-05-28 | 郑州大学第一附属医院 | Emotion dispersion system and method for mobile ward-round scene |
WO2022100187A1 (en) * | 2020-11-11 | 2022-05-19 | 西北工业大学 | Mobile terminal-based method for identifying and monitoring emotions of user |
CN112287873A (en) * | 2020-11-12 | 2021-01-29 | 广东恒电信息科技股份有限公司 | Judicial service early warning system |
CN112450932A (en) * | 2020-12-14 | 2021-03-09 | 深圳市艾利特医疗科技有限公司 | Psychological disorder detection system and method |
CN112597967A (en) * | 2021-01-05 | 2021-04-02 | 沈阳工业大学 | Emotion recognition method and device for immersive virtual environment and multi-modal physiological signals |
CN113035232A (en) * | 2021-03-23 | 2021-06-25 | 北京智能工场科技有限公司 | Psychological state prediction system, method and device based on voice recognition |
CN112735585B (en) * | 2021-04-02 | 2021-08-03 | 刘思佳 | Arthritis rehabilitation diagnosis and treatment method and system based on neural network and machine learning |
CN112735585A (en) * | 2021-04-02 | 2021-04-30 | 四川京炜数字科技有限公司 | Arthritis rehabilitation diagnosis and treatment method and system based on neural network and machine learning |
CN113284618B (en) * | 2021-04-14 | 2022-07-22 | 北京育学园健康管理中心有限公司 | Infant health assessment method |
CN113284618A (en) * | 2021-04-14 | 2021-08-20 | 北京育学园健康管理中心有限公司 | Infant health assessment method |
CN113255635A (en) * | 2021-07-19 | 2021-08-13 | 中国科学院自动化研究所 | Multi-mode fused psychological stress analysis method |
CN114091844A (en) * | 2021-11-01 | 2022-02-25 | 山东心法科技有限公司 | Early warning method, device and storage medium for crime reoccurrence of violent personnel |
CN114550860A (en) * | 2022-01-28 | 2022-05-27 | 中国人民解放军总医院第一医学中心 | Hospitalizing satisfaction evaluation method based on process data and intelligent network model |
CN115601819A (en) * | 2022-11-29 | 2023-01-13 | 四川大学华西医院 (CN) | Multimodal violence tendency recognition method, device, equipment and medium |
CN116548971A (en) * | 2023-05-17 | 2023-08-08 | 郑州师范学院 | Psychological crisis auxiliary monitoring system based on physiological parameters of object |
CN116548971B (en) * | 2023-05-17 | 2023-10-13 | 郑州师范学院 | Psychological crisis auxiliary monitoring system based on physiological parameters of object |
CN117352002A (en) * | 2023-10-08 | 2024-01-05 | 广州点子信息科技有限公司 | Remote intelligent voice analysis supervision method |
CN117292466A (en) * | 2023-10-17 | 2023-12-26 | 江苏新巢天诚智能技术有限公司 | Multi-mode computer vision and biological recognition based Internet of things unlocking method |
Also Published As
Publication number | Publication date |
---|---|
CN110507335B (en) | 2021-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110507335A (en) | Inmate's psychological health states appraisal procedure and system based on multi-modal information | |
CN106886792B (en) | Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism | |
CN112120716A (en) | Wearable multi-mode emotional state monitoring device | |
US10806390B1 (en) | System and method for detecting physiological state | |
CN110464314A (en) | Method and system are estimated using mankind's emotion of deep physiological mood network | |
RU2708807C2 (en) | Algorithm of integrated remote contactless multichannel analysis of psychoemotional and physiological state of object based on audio and video content | |
CA2962083A1 (en) | System and method for detecting invisible human emotion | |
CN113729707A (en) | FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG | |
CN111920420B (en) | Patient behavior multi-modal analysis and prediction system based on statistical learning | |
CN113197579A (en) | Intelligent psychological assessment method and system based on multi-mode information fusion | |
Wang et al. | Maximum weight multi-modal information fusion algorithm of electroencephalographs and face images for emotion recognition | |
CN113397546A (en) | Method and system for constructing emotion recognition model based on machine learning and physiological signals | |
CN112185493A (en) | Personality preference diagnosis device and project recommendation system based on same | |
Villegas et al. | A novel stuttering disfluency classification system based on respiratory biosignals | |
CN115299947A (en) | Psychological scale confidence evaluation method and system based on multi-modal physiological data | |
CN113974627B (en) | Emotion recognition method based on brain-computer generated confrontation | |
Dar et al. | YAAD: young adult’s affective data using wearable ECG and GSR sensors | |
Vijayakumar et al. | ECG noise classification using deep learning with feature extraction | |
Li et al. | Multi-modal emotion recognition based on deep learning of EEG and audio signals | |
Bakkialakshmi et al. | AMIGOS: a robust emotion detection framework through Gaussian ResiNet | |
CN110135357A (en) | Real-time happiness detection method based on long-range remote sensing | |
CN114983434A (en) | System and method based on multi-mode brain function signal recognition | |
CN113040773A (en) | Data acquisition and processing method | |
Hasan et al. | Emotion prediction through EEG recordings using computational intelligence | |
Wan et al. | Learning immersion assessment model based on multi-dimensional physiological characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||