CN108630299A - Personality analysis method, system and storage medium based on skin resistance features - Google Patents

Personality analysis method, system and storage medium based on skin resistance features

Info

Publication number
CN108630299A
CN108630299A
Authority
CN
China
Prior art keywords
user
skin resistance
personality
resistance signal
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810395841.5A
Other languages
Chinese (zh)
Inventor
孙晓
聂挺
丁帅
杨善林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN201810395841.5A
Publication of CN108630299A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 - ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14 - Vascular patterns

Abstract

The present invention provides a personality analysis method, system and storage medium based on skin resistance features. The method includes: obtaining a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source; determining time-domain statistical features and frequency-domain statistical features of the skin resistance signal; inputting the time-domain statistical features and the frequency-domain statistical features into a pre-trained emotion prediction model to obtain the emotional features of the user under test; and inputting the emotional features into a pre-trained personality prediction model to obtain the personality features of the user under test. Because the personality analysis is based on skin resistance features, the accuracy of the analysis can be improved.

Description

Personality analysis method, system and storage medium based on skin resistance features
Technical field
The present invention relates to the technical field of personality analysis, and in particular to a personality analysis method, system and storage medium based on skin resistance features.
Background art
At present, personality analysis is widely used in fields such as talent selection, psychological counseling and career counseling. The prior art includes personality analysis methods based on social media; however, because the credibility of the information displayed on social media is not high, the accuracy of the resulting personality analysis is also low.
Summary of the invention
(1) Technical problem to be solved
In view of the deficiencies of the prior art, the present invention provides a personality analysis method, system and storage medium based on skin resistance features, which can improve the accuracy of personality analysis.
(2) Technical solution
To achieve the above object, the present invention is implemented by the following technical solutions:
The present invention provides a personality analysis method based on skin resistance features, including:
obtaining a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source;
determining time-domain statistical features and frequency-domain statistical features of the skin resistance signal;
inputting the time-domain statistical features and the frequency-domain statistical features into a pre-trained emotion prediction model to obtain the emotional features of the user under test;
inputting the emotional features into a pre-trained personality prediction model to obtain the personality features of the user under test.
The present invention also provides a personality analysis system based on skin resistance features, including:
a signal acquisition module, configured to obtain a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source;
a feature determination module, configured to determine time-domain statistical features and frequency-domain statistical features of the skin resistance signal;
an emotion analysis module, configured to input the time-domain statistical features and the frequency-domain statistical features into a pre-trained emotion prediction model to obtain the emotional features of the user under test;
a personality analysis module, configured to input the emotional features into a pre-trained personality prediction model to obtain the personality features of the user under test.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when a processor executes the computer program, the above method can be implemented.
(3) Advantageous effects
In the personality analysis method, system and storage medium based on skin resistance features provided by the embodiments of the present invention, the user under test generates different emotions under the stimulation of the emotion stimulus source, which in turn affects the user's skin resistance signal. Based on the skin resistance signal, the emotional features of the user under test are predicted, and the personality features of the user under test are then analyzed from the emotional features. Because the skin resistance signal is more truthful than social media information, the present invention can improve the accuracy of personality analysis. Moreover, the personality features obtained by the present invention are not merely a distinction of strong and weak, so the real situation of the user under test can be analyzed more scientifically.
Description of the drawings
In order to more clearly explain the embodiment of the invention or the technical proposal in the existing technology, to embodiment or will show below There is attached drawing needed in technology description to be briefly described, it should be apparent that, the accompanying drawings in the following description is only this Some embodiments of invention for those of ordinary skill in the art without creative efforts, can be with Obtain other attached drawings according to these attached drawings.
Fig. 1 is a schematic flowchart of the personality analysis method based on skin resistance features in one embodiment of the present invention;
Fig. 2 is a schematic diagram of the emotion stimulus source in one embodiment of the present invention;
Figs. 3 to 8 are schematic diagrams of the filtered skin resistance signals for the six emotions in one embodiment of the present invention;
Fig. 9 is a structural block diagram of the personality analysis system based on skin resistance features in one embodiment of the present invention.
Specific implementation mode
In order to make the object, technical scheme and advantages of the embodiment of the invention clearer, below in conjunction with the embodiment of the present invention In attached drawing, technical scheme in the embodiment of the invention is clearly and completely described, it is clear that described embodiment is A part of the embodiment of the present invention, instead of all the embodiments.Based on the embodiments of the present invention, those of ordinary skill in the art The every other embodiment obtained without creative efforts, shall fall within the protection scope of the present invention.
In a first aspect, the present invention provides a personality analysis method based on skin resistance features. As shown in Fig. 1, the method includes:
S101: obtaining a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source.
The emotion stimulus source may include a video stimulus source. The video stimulus source is composed of multiple video clips and a calm segment arranged before each video clip, and the multiple video clips can elicit a variety of emotions in the user.
For example, as shown in Fig. 2, 19 movie clips suitable for eliciting the emotions of young people are carefully edited from a larger pool of source material; these 19 clips make up one experiment session. One session can elicit six emotions, in order: happiness, surprise, disgust, sadness, anger and fear. Each clip lasts about 3 minutes. The calm segment lets the user under test reach a calm state before each emotion is elicited, and the user's mood during it is recorded as a baseline. In addition, one round of emotion elicitation generally becomes effective only toward the end of a clip, so the skin resistance signal corresponding to the last 50 seconds of each video clip is used, rather than the signal corresponding to the clip's entire playback.
Of course, the emotion stimulus source may also include other stimulus sources, for example an audio stimulus source.
Before the user under test watches the video clips, a professional skin resistance acquisition device can be worn on his or her fingers to acquire and record the skin resistance signal. When the skin resistance signal is acquired, 120 samples can be taken per second, i.e. 120 data points are obtained per second. Therefore, after the user under test watches one video clip, 50 × 120 data points can be obtained.
In practical applications, preprocessing operations such as normalization and filtering can also be performed on the acquired skin resistance signal before subsequent processing.
The reason for normalization is that the individual differences in the baseline level of the electrodermal response are very large: the electrodermal level differs between people, and even the same person's level differs at different times and in different environments. In order to study the relationship between different people's electrodermal levels and their emotions, the baseline-level differences of each user's electrodermal signal, i.e. the individual differences, must be removed; only then can the way certain internal characteristics of the electrodermal response vary with emotion be worked out.
The specific normalization process includes: subtracting the corresponding reference signal from the skin resistance signal of the user under test during stimulation by each video clip, to obtain a normalized skin resistance signal; wherein the reference signal is the mean of the skin resistance signal of the user under test during the calm segment corresponding to that video clip.
For example, from each of the 6000 data points of the last 50 s of the first video clip watched by the user under test, the mean of all skin resistance samples recorded during the calm segment before that clip is subtracted, yielding 6000 normalized skin resistance samples; this eliminates the individual differences. A sketch of this step is shown below.
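The following is a minimal Python sketch of this normalization step, not part of the original disclosure; the array names, the 30 s length of the calm segment and the synthetic values are assumptions used only for illustration.

```python
# Minimal sketch of the baseline normalization described above. Assumed data layout:
# `clip_signal` holds the 6000 samples from the last 50 s of one clip, and
# `calm_signal` holds the samples recorded during the calm segment shown before it.
import numpy as np

def normalize_clip(clip_signal: np.ndarray, calm_signal: np.ndarray) -> np.ndarray:
    """Subtract the user's resting baseline (mean of the calm segment)
    from every sample recorded during the video clip."""
    baseline = calm_signal.mean()
    return clip_signal - baseline

# Example with synthetic numbers: 50 s at 120 samples/s gives 6000 points per clip.
rng = np.random.default_rng(0)
calm = 10.0 + 0.1 * rng.standard_normal(30 * 120)   # hypothetical 30 s calm segment
clip = 10.5 + 0.3 * rng.standard_normal(50 * 120)
print(normalize_clip(clip, calm).shape)  # (6000,)
```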
The reason for filtering is that very faint external interference produces very large disturbances after amplification, and can even drown out the useful signal. The Butterworth filter is characterized by a maximally flat frequency response in the passband, without ripple, which then falls gradually to zero in the stopband, making it suitable for many applications. The useful band of the skin resistance signal, i.e. the electrodermal response signal, lies mainly below 0.2 Hz, and its interference does not overlap with the band of the electrodermal response, so a Butterworth filter can be used to filter the skin resistance signal; the order of the Butterworth filter is 2 and the cutoff frequency is 0.3 Hz. High-frequency interference can thus be filtered out effectively; removing the high-frequency interference is exactly the smoothing process. For example, Figs. 3 to 8 are schematic diagrams of the skin resistance signals of the six emotions after filtering with the Butterworth filter.
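A possible implementation of this filtering step is sketched below using SciPy; the zero-phase filtfilt call and the 120 Hz sampling rate taken from the acquisition description are choices made here for illustration, not requirements of the disclosure.

```python
# Sketch of the low-pass filtering step: 2nd-order Butterworth, 0.3 Hz cutoff.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 120.0  # samples per second, as stated for the acquisition device

def lowpass_gsr(signal: np.ndarray, cutoff_hz: float = 0.3, order: int = 2) -> np.ndarray:
    """Remove high-frequency interference from a skin resistance signal."""
    b, a = butter(order, cutoff_hz, btype="low", fs=FS)
    return filtfilt(b, a, signal)  # zero-phase filtering (an implementation choice)
```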
Of course, when a signal segment is completely drowned out by noise, that worthless data is cut out and discarded.
S102: determining the time-domain statistical features and frequency-domain statistical features of the skin resistance signal.
In this step the features of the skin resistance signal can be determined in two respects, the time domain and the frequency domain:
Time domain:
The time-domain statistical features include at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the skin resistance signal; at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the first-order difference of the skin resistance signal; and/or at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the second-order difference of the skin resistance signal. In other words, the time-domain statistical features include statistics of the skin resistance signal itself, statistics of its first-order difference, and/or statistics of its second-order difference. The first-order difference reflects the trend and the speed of change of the signal; if the absolute value of the first-order difference is taken, the trend is ignored and only the speed of change is considered. The first-order difference can be used to detect local extreme points of the signal, and the second-order difference can be used to detect local inflection points, which facilitates extraction of the features of the skin resistance signal.
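The eight statistics per series can be computed as in the sketch below; the exact definitions of the "minimum ratio" and "maximum ratio" are not spelled out in the text, so the ones used here are assumptions.

```python
# Sketch of the time-domain statistics: eight summary values computed on the signal
# itself, on its first-order difference and on its second-order difference.
import numpy as np

def basic_stats(x):
    value_range = x.max() - x.min()
    return [
        x.mean(),
        np.median(x),
        x.std(),
        x.min(),
        x.max(),
        value_range,
        x.min() / (value_range + 1e-8),  # "minimum ratio" (assumed definition)
        x.max() / (value_range + 1e-8),  # "maximum ratio" (assumed definition)
    ]

def time_domain_features(signal: np.ndarray) -> np.ndarray:
    first_diff = np.diff(signal, n=1)
    second_diff = np.diff(signal, n=2)
    return np.array(basic_stats(signal) + basic_stats(first_diff) + basic_stats(second_diff))
```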
Frequency domain:
Information that cannot be seen in the time domain, such as changes in the signal's frequency content, can be seen clearly in the frequency domain after a Fourier transform. The Fourier transform comes in continuous and discrete forms; to process discrete samples the discrete Fourier transform (DFT) must be used, but its computational cost is too large for real-time processing, so the fast Fourier transform (FFT) is used instead, which greatly improves efficiency. Specifically, a fast Fourier transform is applied to the skin resistance signal, and at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the transform result is used as the frequency-domain statistical features.
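A corresponding sketch of the frequency-domain features is given below; taking the magnitude of the one-sided spectrum is an assumption, since the text only says that the statistics are computed on the transform result. It reuses basic_stats() from the time-domain sketch above.

```python
# Sketch of the frequency-domain statistics: the same eight summary values,
# computed on the FFT of the (preprocessed) skin resistance signal.
import numpy as np

def frequency_domain_features(signal: np.ndarray) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(signal))  # one-sided amplitude spectrum (assumed)
    return np.array(basic_stats(spectrum))  # basic_stats() from the sketch above
```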
S103: inputting the time-domain statistical features and the frequency-domain statistical features into the pre-trained emotion prediction model to obtain the emotional features of the user under test.
The emotion prediction model is trained in advance; the training process may include the following steps S201 to S204:
S201: acquiring the skin resistance signals of multiple users under stimulation by the emotion stimulus source.
Since a larger amount of training data gives the trained model more accurate predictive ability, the more users the better, for example 100 users.
The process of acquiring each user's skin resistance signal under stimulation by the emotion stimulus source can refer to step S101 above and is not repeated here.
S202: determining, for each of the multiple users, the time-domain statistical features and frequency-domain statistical features of the skin resistance signal during stimulation by each video clip in the emotion stimulus source, and obtaining the emotional features of that user after stimulation by that video clip.
The process of determining the time-domain and frequency-domain statistical features of each user's skin resistance signal during stimulation by each video clip can refer to step S102 above and is not repeated here.
For example, an emotion self-assessment form can be provided for each user; as shown in Fig. 2, after watching a video clip the user fills in the form with the emotional information for that clip. The form contains six emotions: happiness, surprise, disgust, sadness, anger and fear, so the user's emotional features after stimulation by the clip can be obtained from the emotion self-assessment form. The emotional features can be expressed as a vector, for example [0, 0, 0, 1, 1, 0], which means that the user felt sadness and anger after watching a video clip.
S203: taking the time-domain statistical features and frequency-domain statistical features of each user's skin resistance signal during stimulation by each video clip as one first training sample, taking that user's emotional features under that video clip's stimulation as the emotion label of that first training sample, and building an emotion training database from the first training samples and emotion labels corresponding to the multiple users.
This step builds the emotion training database: one first training sample contains the time-domain and frequency-domain statistical features of one user's skin resistance signal during stimulation by one video clip, and each first training sample has one emotion label.
For example, if the video stimulus source includes 19 movie clips, then after one user has watched the 19 clips, 19 first training samples and 19 emotional feature vectors are produced.
The large number of first training samples produced by a large number of users, together with the corresponding emotion labels, form the emotion training database.
S204: based on the emotion training database, performing model training with a support vector machine to obtain the emotion prediction model.
The specific process may include:
The first training samples and their emotion labels in the emotion training database are arranged in the format required by libsvm. The first training samples and emotion labels are scaled with svmscale so that the data fall in the range [-1, 1]. The optimal parameters c and g are selected by cross-validation with grid.py, and svmtrain then uses the obtained optimal parameters c and g with an RBF kernel to train a classifier on the first training samples in the whole emotion training database, obtaining the support vector machine model parameters. After training, the results are saved as .model files, which yields six classifiers, i.e. prediction models for the six emotions.
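A rough scikit-learn equivalent of this libsvm workflow is sketched below (scale features to [-1, 1], grid-search C and gamma by cross-validation, train one RBF-kernel SVM per emotion); the grid ranges, the five-fold split and all function names are illustrative choices, not those of the original tools svmscale, grid.py and svmtrain.

```python
# Sketch of per-emotion SVM training, roughly mirroring the libsvm workflow above.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_emotion_models(X: np.ndarray, Y: np.ndarray):
    """X: (n_samples, n_features) time- and frequency-domain statistics.
    Y: (n_samples, 6) binary labels for happiness, surprise, disgust,
    sadness, anger and fear. Returns the scaler and one classifier per emotion."""
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X)
    Xs = scaler.transform(X)
    param_grid = {"C": [2.0 ** k for k in range(-5, 16, 2)],       # grid similar to grid.py defaults
                  "gamma": [2.0 ** k for k in range(-15, 4, 2)]}
    models = []
    for k in range(Y.shape[1]):
        search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
        search.fit(Xs, Y[:, k])
        models.append(search.best_estimator_)
    return scaler, models
```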
Based on the above steps S201 to S204, the emotion prediction model can be obtained; the time-domain and frequency-domain statistical features of the user under test are then input into the emotion prediction model to obtain the emotional features of the user under test.
S104: inputting the emotional features into the pre-trained personality prediction model to obtain the personality features of the user under test.
The personality prediction model is trained in advance; the training process may include the following steps S301 to S303:
S301: obtaining the personality features of each of the multiple users.
For example, after the skin resistance experiment has been carried out on the multiple users, each user can be given a Big Five personality assessment questionnaire. The user fills in the personality-related information in the questionnaire, and the user's five personality traits are then analyzed from what the user filled in: neuroticism, extraversion, openness, agreeableness and conscientiousness. The personality features of a user can therefore be determined from the personality assessment questionnaire filled in by that user. Like the emotional features, the personality features can also be expressed in vector form.
S302: taking the multiple emotional features of each of the multiple users under stimulation by the multiple video clips as one second training sample, taking that user's personality features as the personality label of that second training sample, and building a personality training database from the second training samples and personality labels corresponding to the multiple users.
For example, if the video stimulus source includes 19 movie clips, then after one user has watched the 19 clips, 19 emotional feature vectors are produced, and these 19 emotional features form one second training sample. Each second training sample corresponds to one personality label.
S303: based on the personality training database, performing model training by means of deep learning to obtain the personality prediction model.
The specific process may include:
The personality training database is divided into five sub-databases, one sub-database per personality trait. The second training samples in each sub-database are split into a training set and a validation set in an 8:2 ratio, and a multilayer perceptron model consisting of four fully connected layers and one sigmoid layer is trained with the keras library on top of theano. The training samples are used as input data, and the training process includes forward training and backward training. Forward training is bottom-up unsupervised learning: starting from the bottom layer, the layers are trained one by one towards the top; after the parameters of layer n-1 have been learned, the output of layer n-1 is used as the input of layer n and layer n is trained, so that the parameters of every layer are obtained in turn. Backward training is top-down supervised learning: the training error is propagated from the top down and the parameters are fine-tuned. After training is complete, five model.pkl model files are obtained, corresponding to the five personality traits.
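A minimal per-trait sketch using modern Keras is given below; the layer widths, hidden activations, optimizer and epoch count are assumptions not stated in the text, and the layer-wise unsupervised pre-training plus supervised fine-tuning described above is replaced here by plain supervised training for brevity.

```python
# Sketch of one per-trait multilayer perceptron: four fully connected layers
# followed by a sigmoid output, with an 8:2 training/validation split.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

def train_trait_model(X: np.ndarray, y: np.ndarray) -> keras.Model:
    """X: (n_users, 19 * 6) concatenated per-clip emotion vectors.
    y: (n_users,) binary label or score in [0, 1] for one Big Five trait."""
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
    model = keras.Sequential([
        keras.layers.Input(shape=(X.shape[1],)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, validation_data=(X_val, y_val),
              epochs=50, batch_size=8, verbose=0)
    return model
```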
Based on steps S301 to S303, the personality prediction model can be obtained. The emotional features of the user under test are input into the personality prediction model, and the values of the five personality traits of the user under test can be obtained; these five values form a vector that constitutes the personality features of the user under test.
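Tying the sketches above together, an illustrative end-to-end inference path could look as follows; it reuses the helper functions and trained models from the earlier sketches (assumed to be in scope) and is not part of the original disclosure.

```python
# Illustrative end-to-end inference for one user under test.
# `clips` is a list of (clip_signal, calm_signal) pairs for the 19 video clips;
# `scaler` and `emotion_models` come from train_emotion_models(),
# `trait_models` is a list of five models from train_trait_model().
import numpy as np

def analyze_personality(clips, scaler, emotion_models, trait_models):
    emotion_vectors = []
    for clip_signal, calm_signal in clips:
        x = lowpass_gsr(normalize_clip(clip_signal, calm_signal))
        feats = np.concatenate([time_domain_features(x), frequency_domain_features(x)])
        feats = scaler.transform(feats.reshape(1, -1))
        emotion_vectors.append([int(m.predict(feats)[0]) for m in emotion_models])
    emotion_input = np.array(emotion_vectors).reshape(1, -1)
    # One value per Big Five trait; together they form the personality feature vector.
    return np.array([float(m.predict(emotion_input, verbose=0)[0, 0]) for m in trait_models])
```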
In the personality analysis method provided by the present invention, the user under test generates different emotions under the stimulation of the emotion stimulus source, which in turn affects the user's skin resistance signal; based on the skin resistance signal, the emotional features of the user under test are predicted, and the personality features of the user under test are then analyzed from the emotional features. Because the skin resistance signal is more truthful than social media information, the present invention can improve the accuracy of personality analysis. Moreover, the personality features obtained by the present invention are not merely a distinction of strong and weak, so the real situation of the user under test can be analyzed more scientifically.
The present invention also provides a personality analysis system based on skin resistance features. As shown in Fig. 9, the system includes:
a signal acquisition module, configured to obtain a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source;
a feature determination module, configured to determine time-domain statistical features and frequency-domain statistical features of the skin resistance signal;
an emotion analysis module, configured to input the time-domain statistical features and the frequency-domain statistical features into a pre-trained emotion prediction model to obtain the emotional features of the user under test;
a personality analysis module, configured to input the emotional features into a pre-trained personality prediction model to obtain the personality features of the user under test.
In some embodiments, the emotion stimulus source includes a video stimulus source composed of multiple video clips and a calm segment arranged before each video clip, and the multiple video clips can elicit a variety of emotions in the user.
In some embodiments, the system further includes:
a normalization module, configured to, before the feature determination module determines the time-domain statistical features and frequency-domain statistical features of the skin resistance signal, subtract the corresponding reference signal from the skin resistance signal of the user under test during stimulation by each video clip, to obtain a normalized skin resistance signal; wherein the reference signal is the mean of the skin resistance signal of the user under test during the calm segment corresponding to that video clip.
In some embodiments, the system further includes:
a first model construction module, configured to build the emotion prediction model, specifically by: acquiring the skin resistance signals of multiple users under stimulation by the emotion stimulus source; determining, for each of the multiple users, the time-domain statistical features and frequency-domain statistical features of the skin resistance signal during stimulation by each video clip in the emotion stimulus source, and obtaining the emotional features of that user after stimulation by that video clip; taking the time-domain statistical features and frequency-domain statistical features of each user's skin resistance signal during stimulation by each video clip as one first training sample, taking that user's emotional features under that video clip's stimulation as the emotion label of that first training sample, and building an emotion training database from the first training samples and emotion labels corresponding to the multiple users; and, based on the emotion training database, performing model training with a support vector machine to obtain the emotion prediction model.
In some embodiments, the system further includes:
a second model construction module, configured to build the personality prediction model, specifically by: obtaining the personality features of each of the multiple users; taking the multiple emotional features of each of the multiple users under stimulation by the multiple video clips as one second training sample, taking that user's personality features as the personality label of that second training sample, and building a personality training database from the second training samples and personality labels corresponding to the multiple users; and, based on the personality training database, performing model training by means of deep learning to obtain the personality prediction model.
In some embodiments, the system further includes:
a filtering module, configured to filter the skin resistance signal with a Butterworth filter before the feature determination module determines the time-domain statistical features and frequency-domain statistical features of the skin resistance signal; wherein the order of the Butterworth filter is 2 and the cutoff frequency is 0.3 Hz.
In some embodiments, the time-domain statistical features include at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the skin resistance signal; at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the first-order difference of the skin resistance signal; and/or at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the second-order difference of the skin resistance signal.
In some embodiments, the feature determination module is specifically configured to: apply a fast Fourier transform to the skin resistance signal, and use at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the transform result as the frequency-domain statistical features.
It should be understood that the personality analysis system provided by the present invention corresponds to the personality analysis method provided by the present invention; for the explanation, examples and advantageous effects of its relevant content, reference can be made to the corresponding parts of the personality analysis method, which are not repeated here.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when a processor executes the computer program, the above personality analysis method can be implemented.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced with equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A personality analysis method based on skin resistance features, characterized by including:
obtaining a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source;
determining time-domain statistical features and frequency-domain statistical features of the skin resistance signal;
inputting the time-domain statistical features and the frequency-domain statistical features into a pre-trained emotion prediction model to obtain the emotional features of the user under test;
inputting the emotional features into a pre-trained personality prediction model to obtain the personality features of the user under test.
2. The method according to claim 1, characterized in that the emotion stimulus source includes a video stimulus source, the video stimulus source is composed of multiple video clips and a calm segment arranged before each video clip, and the multiple video clips can elicit a variety of emotions in the user.
3. The method according to claim 2, characterized in that, before the determining of the time-domain statistical features and frequency-domain statistical features of the skin resistance signal, the method further includes:
subtracting the corresponding reference signal from the skin resistance signal of the user under test during stimulation by each video clip, to obtain a normalized skin resistance signal; wherein the reference signal is the mean of the skin resistance signal of the user under test during the calm segment corresponding to that video clip.
4. The method according to claim 2, characterized in that the process of building the emotion prediction model includes:
acquiring the skin resistance signals of multiple users under stimulation by the emotion stimulus source;
determining, for each of the multiple users, the time-domain statistical features and frequency-domain statistical features of the skin resistance signal during stimulation by each video clip in the emotion stimulus source, and obtaining the emotional features of that user after stimulation by that video clip;
taking the time-domain statistical features and frequency-domain statistical features of each user's skin resistance signal during stimulation by each video clip as one first training sample, taking that user's emotional features under that video clip's stimulation as the emotion label of that first training sample, and building an emotion training database from the first training samples and emotion labels corresponding to the multiple users;
based on the emotion training database, performing model training with a support vector machine to obtain the emotion prediction model.
5. The method according to claim 4, characterized in that the process of building the personality prediction model includes:
obtaining the personality features of each of the multiple users;
taking the multiple emotional features of each of the multiple users under stimulation by the multiple video clips as one second training sample, taking that user's personality features as the personality label of that second training sample, and building a personality training database from the second training samples and personality labels corresponding to the multiple users;
based on the personality training database, performing model training by means of deep learning to obtain the personality prediction model.
6. The method according to any one of claims 1 to 5, characterized in that, before the determining of the time-domain statistical features and frequency-domain statistical features of the skin resistance signal, the method further includes: filtering the skin resistance signal with a Butterworth filter; wherein the order of the Butterworth filter is 2 and the cutoff frequency is 0.3 Hz.
7. The method according to any one of claims 1 to 5, characterized in that the time-domain statistical features include at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the skin resistance signal; at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the first-order difference of the skin resistance signal; and/or at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the second-order difference of the skin resistance signal.
8. The method according to any one of claims 1 to 5, characterized in that determining the frequency-domain statistical features of the skin resistance signal includes:
applying a fast Fourier transform to the skin resistance signal, and using at least one of the mean, median, standard deviation, minimum, maximum, difference between maximum and minimum, minimum ratio and maximum ratio of the transform result as the frequency-domain statistical features.
9. A personality analysis system based on skin resistance features, characterized by including:
a signal acquisition module, configured to obtain a skin resistance signal of a user under test during stimulation by a preset emotion stimulus source;
a feature determination module, configured to determine time-domain statistical features and frequency-domain statistical features of the skin resistance signal;
an emotion analysis module, configured to input the time-domain statistical features and the frequency-domain statistical features into a pre-trained emotion prediction model to obtain the emotional features of the user under test;
a personality analysis module, configured to input the emotional features into a pre-trained personality prediction model to obtain the personality features of the user under test.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when a processor executes the computer program, the method according to any one of claims 1 to 7 can be implemented.
CN201810395841.5A 2018-04-27 2018-04-27 Personality analysis method and system, storage medium based on skin resistance feature Pending CN108630299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810395841.5A CN108630299A (en) 2018-04-27 2018-04-27 Personality analysis method and system, storage medium based on skin resistance feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810395841.5A CN108630299A (en) 2018-04-27 2018-04-27 Personality analysis method and system, storage medium based on skin resistance feature

Publications (1)

Publication Number Publication Date
CN108630299A true CN108630299A (en) 2018-10-09

Family

ID=63694986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810395841.5A Pending CN108630299A (en) 2018-04-27 2018-04-27 Personality analysis method and system, storage medium based on skin resistance feature

Country Status (1)

Country Link
CN (1) CN108630299A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012144967A1 (en) * 2011-04-19 2012-10-26 Gryn Vladyslav Kostyantynovych Method for individual rehabilitation through the creation of a positive emotional background
CN104055529A (en) * 2014-06-19 2014-09-24 西南大学 Method for calculating emotional electrocardiosignal scaling exponent
US20180014739A1 (en) * 2016-07-13 2018-01-18 Sentio Solutions, Inc. Unobtrusive emotion recognition system
CN106650621A (en) * 2016-11-18 2017-05-10 广东技术师范学院 Deep learning-based emotion recognition method and system
CN107348962A (en) * 2017-06-01 2017-11-17 清华大学 A kind of personal traits measuring method and equipment based on brain-computer interface technology
CN107918487A (en) * 2017-10-20 2018-04-17 南京邮电大学 A kind of method that Chinese emotion word is identified based on skin electrical signal
CN107944473A (en) * 2017-11-06 2018-04-20 南京邮电大学 A kind of physiological signal emotion identification method based on the subjective and objective fusion of multi-categorizer

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Furkan Gürpinar et al., "Multimodal fusion of audio, scene, and face features for first impression estimation", 2016 23rd International Conference on Pattern Recognition (ICPR) *
Ramanathan Subramanian et al., "ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors", IEEE Transactions on Affective Computing *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697413A (en) * 2018-12-13 2019-04-30 合肥工业大学 Personality analysis method, system and storage medium based on head pose
CN109711291A (en) * 2018-12-13 2019-05-03 合肥工业大学 Personality prediction technique based on eye gaze thermodynamic chart
CN109498039A (en) * 2018-12-25 2019-03-22 北京心法科技有限公司 Personality assessment's method and device
CN109672930A (en) * 2018-12-25 2019-04-23 北京心法科技有限公司 Personality association type emotional arousal method and apparatus
CN109683840A (en) * 2018-12-25 2019-04-26 北京心法科技有限公司 The exciting method and device of plyability cognitive information processing mechanism
CN110638472A (en) * 2019-09-27 2020-01-03 新华网股份有限公司 Emotion recognition method and device, electronic equipment and computer readable storage medium
CN110638472B (en) * 2019-09-27 2022-07-05 新华网股份有限公司 Emotion recognition method and device, electronic equipment and computer readable storage medium
CN114650857A (en) * 2019-11-13 2022-06-21 日本烟草国际股份有限公司 Inhaler with mental stress monitoring
CN111537056A (en) * 2020-07-08 2020-08-14 浙江浙能天然气运行有限公司 Pipeline along-line third-party construction dynamic early warning method based on SVM and time-frequency domain characteristics

Similar Documents

Publication Publication Date Title
CN108630299A (en) Personality analysis method and system, storage medium based on skin resistance feature
Norman-Haignere et al. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex
US20180276540A1 (en) Modeling of the latent embedding of music using deep neural network
Chen et al. The AMG1608 dataset for music emotion recognition
CN108523906A (en) Personality analysis method and system, storage medium based on pulse characteristics
Khare et al. Classification of emotions from EEG signals using time‐order representation based on the S‐transform and convolutional neural network
Bailes et al. Comparative time series analysis of perceptual responses to electroacoustic music
CN103489445B (en) A kind of method and device identifying voice in audio frequency
Khan et al. A novel audio forensic data-set for digital multimedia forensics
KR20060110988A (en) Method for classifying a music genre and recognizing a musical instrument signal using bayes decision rule
Wang et al. Shallow and deep feature fusion for digital audio tampering detection
Cai Fault diagnosis of rolling bearing based on empirical mode decomposition and higher order statistics
Yang et al. A fault diagnosis approach for roller bearing based on improved intrinsic timescale decomposition de-noising and kriging-variable predictive model-based class discriminate
Mahajan Emotion recognition via EEG using neural network classifier
Rowe et al. Acoustic auto-encoders for biodiversity assessment
To et al. A general rule for sensory cue summation: evidence from photographic, musical, phonetic and cross-modal stimuli
Long et al. Denoising of seismic signals based on empirical mode decomposition-wavelet thresholding
Wataraka Gamage et al. Speech-based continuous emotion prediction by learning perception responses related to salient events: A study based on vocal affect bursts and cross-cultural affect in AVEC 2018
Verma et al. Classification and mapping of sound sources in local urban streets through AudioSet data and Bayesian optimized Neural Networks
Fan et al. Compound fault diagnosis of rolling element bearings using multipoint sparsity–multipoint optimal minimum entropy deconvolution adjustment and adaptive resonance-based signal sparse decomposition
CN115273904A (en) Angry emotion recognition method and device based on multi-feature fusion
Lei et al. Epileptic seizure detection in EEG signals using discriminative Stein kernel-based sparse representation
Huang et al. Order-statistic filtering Fourier decomposition and its application to rolling bearing fault diagnosis
Heravi et al. Structural health monitoring by probability density function of autoregressive-based damage features and fast distance correlation method
Lee et al. A single microphone noise reduction algorithm based on the detection and reconstruction of spectro-temporal features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181009