CN108735233A - Personality recognition method and device - Google Patents
Personality recognition method and device
- Publication number
- CN108735233A CN108735233A CN201710272170.9A CN201710272170A CN108735233A CN 108735233 A CN108735233 A CN 108735233A CN 201710272170 A CN201710272170 A CN 201710272170A CN 108735233 A CN108735233 A CN 108735233A
- Authority
- CN
- China
- Prior art keywords
- acoustic feature
- feature information
- personality
- measurand
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
Abstract
The invention discloses a personality recognition method and device. The method includes: obtaining a speech segment of a subject; extracting acoustic feature information from the speech segment; and processing the acoustic feature information with a preset algorithm to determine the personality recognition result of the subject. By processing the extracted acoustic feature information, the method provided by the invention accurately analyzes the subject's personality and obtains the subject's personality recognition result in a timely manner.
Description
Technical field
The present invention relates to the technical field of personality analysis, and in particular to a personality recognition method and device.
Background art
In recent years, the psychological community has formed a fairly consistent consensus on descriptive models of personality and proposed the Big Five model. The five traits of the Big Five model are: neuroticism, extraversion, openness, agreeableness, and conscientiousness, and each trait contains six sub-dimensions (facets). Anger/hostility is a facet of neuroticism; it reflects a person's tendency to experience anger and related states such as frustration and pain, and measures how easily a person becomes angry. People who score high on this facet are easily provoked; in particular, they are filled with hostility when they feel unfairly treated, and they tend to become irascible, angry, and easily frustrated. People who score low can easily control their emotions and are not easily provoked or angered; in daily life they appear friendly and mild-tempered, and are slow to flare up. In short, this facet of neuroticism reflects a person's irritability: people with high anger/hostility scores present as irritable, while people with low scores do not.
In the prior art, the five traits of the Big Five model are measured with rating scales. Taking the anger/hostility facet as an example, the subject completes the questions corresponding to that facet, the measuring personnel then tally the facet score to obtain its characteristic value, and this value indicates whether the subject has an irritable personality. In addition, to accurately assess the personality traits a subject exhibits, each question is answered on a rating scale whose range is generally divided into five grades, from "strongly disagree" to "strongly agree". A common scale used in the prior art is the NEO-PI-R (Revised NEO Personality Inventory).
However, measuring a subject's five traits with rating scales in the prior art generally suffers from the following problems: (1) in person-to-person interaction, obtaining the subject's irritability by having them fill in a scale has poor timeliness, so whether the subject has an irritable personality cannot be fed back promptly; (2) only the answers of people in long-term contact with the subject can accurately reflect whether the subject has an irritable personality.
In summary, how to accurately and promptly determine whether a subject has an irritable personality is one of the technical problems that urgently needs to be solved.
Summary of the invention
Embodiments of the present invention provide a personality recognition method and device, so as to accurately analyze a subject's personality while improving the timeliness of the personality analysis.
An embodiment of the present invention provides a personality recognition method, including:
obtaining a speech segment of a subject;
extracting acoustic feature information from the speech segment;
processing the acoustic feature information with a preset algorithm to determine the personality recognition result of the subject.
Preferably, processing the acoustic feature information with a preset algorithm to determine the personality recognition result of the subject specifically includes:
processing the acoustic feature information with the preset algorithm to determine an emotional measurement value corresponding to the acoustic feature information;
comparing the emotional measurement value with a preset emotion threshold to obtain a comparison result;
determining the personality recognition result of the subject according to the comparison result.
Further, determining the personality recognition result of the subject according to the comparison result specifically includes:
if the comparison result is that the emotional measurement value is greater than or equal to the preset emotion threshold, determining that the personality recognition result of the subject is an irritable personality; or
if the comparison result is that the emotional measurement value is less than the preset emotion threshold, determining that the personality recognition result of the subject is not an irritable personality.
Processing the acoustic feature information with the preset algorithm to determine the emotional measurement value corresponding to the acoustic feature information specifically includes:
processing the acoustic feature information according to the following formula to determine the emotional measurement value corresponding to the acoustic feature information:
f(x) = Σ_{i=1..N} a_i* · y_i · K(x, z) + b*
where x denotes the acoustic feature information;
f(x) denotes the emotional measurement value corresponding to the acoustic feature information;
a_i* denotes the optimal Lagrange multiplier vector;
b* denotes the optimal hyperplane intercept;
y_i is a predetermined value taken from {-1, +1};
N denotes the number of samples in the training set;
K(x, z) is a Gaussian kernel function, K(x, z) = exp(-‖x - z‖² / (2σ²)), where z denotes the mean of the acoustic feature information and σ denotes the standard deviation of the acoustic features.
Preferably, the acoustic feature information includes at least one of: zero-crossing rate, root-mean-square energy, fundamental frequency, harmonics-to-noise ratio, and 12 Mel-frequency cepstral coefficients.
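The claimed method reduces to a three-step pipeline: obtain a speech segment, extract acoustic features, and classify. A minimal Python sketch of that flow, using toy stand-ins for the feature extractor and the trained decision function (all names and parameters here are hypothetical illustrations, not the patented implementation):

```python
import numpy as np

def extract_features(signal, frame_len=256):
    """Toy frame-level features: zero-crossing rate and RMS energy per frame,
    summarized by mean and standard deviation (a tiny stand-in for the
    384-dimensional openSMILE feature set described in the embodiments)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    zcr = [np.mean(np.abs(np.diff(np.sign(f))) > 0) for f in frames]
    rms = [np.sqrt(np.mean(f ** 2)) for f in frames]
    return np.array([np.mean(zcr), np.std(zcr), np.mean(rms), np.std(rms)])

def recognize(features, w, b, threshold=0.0):
    """Compare an emotional measurement value against a preset threshold.
    A linear score stands in for the trained SVM decision function."""
    score = float(w @ features + b)   # emotional measurement value
    return "irritable" if score >= threshold else "not irritable"

segment = np.sin(np.linspace(0, 100, 2048))      # placeholder speech segment
label = recognize(extract_features(segment), w=np.ones(4), b=0.0)
```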
An embodiment of the present invention provides a personality recognition device, including:
an acquiring unit, configured to obtain a speech segment of a subject;
an extraction unit, configured to extract acoustic feature information from the speech segment;
a determination unit, configured to process the acoustic feature information with a preset algorithm to determine the personality recognition result of the subject.
Preferably, the determination unit specifically includes a first determination subunit, a comparing subunit, and a second determination subunit, wherein:
the first determination subunit is configured to process the acoustic feature information with the preset algorithm to determine the emotional measurement value corresponding to the acoustic feature information;
the comparing subunit is configured to compare the emotional measurement value with the preset emotion threshold to obtain a comparison result;
the second determination subunit is configured to determine the personality recognition result of the subject according to the comparison result obtained by the comparing subunit.
Further, the second determination subunit is specifically configured to: if the comparison result is that the emotional measurement value is greater than or equal to the preset emotion threshold, determine that the personality recognition result of the subject is an irritable personality; or, if the comparison result is that the emotional measurement value is less than the preset emotion threshold, determine that the personality recognition result of the subject is not an irritable personality.
The first determination subunit is specifically configured to process the acoustic feature information according to the following formula to determine the emotional measurement value corresponding to the acoustic feature information:
f(x) = Σ_{i=1..N} a_i* · y_i · K(x, z) + b*
where x denotes the acoustic feature information;
f(x) denotes the emotional measurement value corresponding to the acoustic feature information;
a_i* denotes the optimal Lagrange multiplier vector;
b* denotes the optimal hyperplane intercept;
y_i is a predetermined value taken from {-1, +1};
N denotes the number of samples in the training set;
K(x, z) is a Gaussian kernel function, K(x, z) = exp(-‖x - z‖² / (2σ²)), where z denotes the mean of the acoustic feature information and σ denotes the standard deviation of the acoustic features.
Preferably, the acoustic feature information includes at least one of: zero-crossing rate, root-mean-square energy, fundamental frequency, harmonics-to-noise ratio, and 12 Mel-frequency cepstral coefficients.
The personality recognition method and device provided by the embodiments of the present invention extract acoustic feature information from an obtained speech segment of a subject, and process the acoustic feature information with a preset algorithm to determine the subject's personality recognition result. They can not only analyze the subject's personality accurately, but also obtain the subject's personality recognition result promptly, further improving the user experience.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the present invention. The objectives and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the written description, claims, and accompanying drawings.
Description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of the present invention. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1a is a schematic flowchart of the personality recognition method provided by Embodiment 1 of the present invention;
Fig. 1b is a schematic flowchart of determining the personality recognition result of a subject in the personality recognition method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic structural diagram of the personality recognition device provided by Embodiment 2 of the present invention.
Detailed description
Embodiments of the present invention provide a personality recognition method and device, so as to accurately analyze a subject's personality while improving the timeliness of the personality analysis.
Preferred embodiments of the present invention are described below in conjunction with the accompanying drawings. It should be understood that the preferred embodiments described here are only used to illustrate and explain the present invention, not to limit it, and that, in the absence of conflict, the embodiments of the present invention and the features therein may be combined with each other.
Embodiment 1
As shown in Fig. 1a, the schematic flowchart of the personality recognition method provided by Embodiment 1 of the present invention may include the following steps:
S11: obtain a speech segment of the subject.
S12: extract acoustic feature information from the speech segment.
In a specific implementation, the open-source toolkit openSMILE may be used to extract the acoustic feature information of the subject's speech segment. Specifically, the acoustic feature information may be 384-dimensional. First, 16 basic low-level features are extracted with openSMILE, specifically including zero-crossing rate, root-mean-square energy, fundamental frequency, harmonics-to-noise ratio, and 12 Mel-frequency cepstral coefficients; see Table 1 for details. The basic features are extracted at the frame level, and the basic features together with their first-order differences are then summarized by 12 statistical functionals, finally yielding 16 × 2 × 12 = 384 dimensional features.
Table 1
Preferably, the acoustic feature information includes at least one of: zero-crossing rate, root-mean-square energy, fundamental frequency, harmonics-to-noise ratio, and 12 Mel-frequency cepstral coefficients.
Specifically, the bilingual terms in Table 1 are as follows. LLD (low-level descriptors): the basic frame-level features. Functionals: the statistical functions applied to each descriptor, namely mean; standard deviation; kurtosis; skewness; extremes, relative position, and range (maximum value, minimum value, positions of the maximum and minimum frames, and the maximum-minimum range); and linear regression: offset, slope, MSE (the offset, slope, and mean squared error of a linear regression).
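The 16 × 2 × 12 = 384 expansion described above can be sketched directly: the frame-level LLD contours are stacked with their first-order differences, and each resulting contour is summarized by the 12 functionals listed in Table 1. The function and variable names below are illustrative, not openSMILE's actual configuration:

```python
import numpy as np

def functionals(x):
    """The 12 functionals from Table 1 applied to one frame-level contour."""
    n = len(x)
    t = np.arange(n)
    slope, offset = np.polyfit(t, x, 1)              # linear regression
    mse = np.mean((offset + slope * t - x) ** 2)
    m, s = x.mean(), x.std()
    z = (x - m) / (s + 1e-12)                        # standardized contour
    return np.array([
        m, s,                                        # mean, standard deviation
        np.mean(z ** 4) - 3.0,                       # kurtosis (excess)
        np.mean(z ** 3),                             # skewness
        x.max(), x.min(),                            # extremes
        np.argmax(x) / n, np.argmin(x) / n,          # relative positions
        x.max() - x.min(),                           # range
        offset, slope, mse,                          # regression offset/slope/MSE
    ])

def summarize(lld):
    """lld: (n_frames, n_lld) matrix of low-level descriptors.
    Returns an n_lld * 2 * 12 dimensional feature vector."""
    delta = np.diff(lld, axis=0, prepend=lld[:1])    # first-order differences
    cols = np.hstack([lld, delta])                   # n_lld * 2 contours
    return np.concatenate([functionals(cols[:, j]) for j in range(cols.shape[1])])
```

With 16 LLD columns this yields exactly 16 × 2 × 12 = 384 dimensions, matching the count stated above.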
S13: process the acoustic feature information with a preset algorithm to determine the personality recognition result of the subject.
In a specific implementation, the acoustic feature information may be processed with the preset algorithm according to the method shown in Fig. 1b to determine the personality recognition result of the subject, which may include the following steps:
S131: process the acoustic feature information with the preset algorithm to determine the emotional measurement value corresponding to the acoustic feature information.
The emotional measurement value in the present invention is an evaluation parameter reflecting whether the subject has an irritable personality.
In a specific implementation, the embodiment of the present invention uses an SVM (Support Vector Machine) algorithm with a Gaussian kernel function to process the acoustic feature information and determine the emotional measurement value corresponding to the acoustic feature information.
It should be noted that the basic principle of the SVM algorithm is to classify data based on statistical learning theory. It seeks structural risk minimization to improve the generalization ability of the learning machine, minimizing both the empirical risk and the confidence interval, so that good learning performance can be obtained even with a small number of statistical samples. The SVM algorithm used in the embodiment of the present invention is a nonlinear support vector machine: a kernel function is applied in the SVM algorithm, first using a transformation to map linearly inseparable data from the original space into a new higher-dimensional space in which the data become linearly separable, and then learning from the training data in the new space with linear classification methods. Applying a kernel function in a support vector machine (SVM) means mapping the original input space, a Euclidean space R^n, through a nonlinear transformation to a new feature space, a Hilbert space H, so that a hypersurface in the original space becomes a hyperplane in the new Hilbert space.
Specifically, the Gaussian kernel function provided in the embodiment of the present invention is one kind of kernel function. A polynomial kernel function may, of course, also be applied in the SVM algorithm to perform the personality recognition method provided by the present invention.
S132: compare the emotional measurement value with the preset emotion threshold to obtain a comparison result.
S133: determine the personality recognition result of the subject according to the comparison result.
Preferably, if the comparison result is that the emotional measurement value is greater than or equal to the preset emotion threshold, the personality recognition result of the subject is determined to be an irritable personality; or,
if the comparison result is that the emotional measurement value is less than the preset emotion threshold, the personality recognition result of the subject is determined to be not an irritable personality.
In a specific implementation, when the emotional measurement value is determined to be greater than or equal to the preset emotion threshold, the subject is regarded as having an irritable, excitable personality, characterized by readily showing anger and having a rather irascible temper; when the emotional measurement value is determined to be less than the preset emotion threshold, the subject is regarded as having a non-irritable, emotionally stable personality, characterized by not being easily provoked or angered and having a rather mild temper.
Preferably, the following method may also be used when executing step S133: if the comparison result is that the emotional measurement value is equal to the preset emotion threshold, the personality recognition result of the subject is determined to be an irritable personality; or, if the comparison result is that the emotional measurement value is not equal to the preset emotion threshold, the personality recognition result of the subject is determined to be not an irritable personality. Specifically, the preset emotion threshold may be -1.
For a better understanding of the present invention, Embodiment 1 is illustrated with the preset algorithm being the support vector machine (SVM) algorithm with a Gaussian kernel function. When executing step S131, the emotional measurement value corresponding to the acoustic feature information may be determined according to formula (1):
f(x) = Σ_{i=1..N} a_i* · y_i · K(x, z) + b*    (1)
where N denotes the number of samples in the training set; a_i* denotes the optimal Lagrange multiplier vector; b* denotes the optimal hyperplane intercept; and y_i is a set value taken from {-1, +1}.
In a specific implementation, the value of y_i may be determined empirically.
K(x, z) is the Gaussian kernel function, whose expression is given by formula (2):
K(x, z) = exp(-‖x - z‖² / (2σ²))    (2)
In formula (1), x denotes the acoustic feature information; f(x) denotes the emotional measurement value corresponding to the acoustic feature information; z denotes the mean of the acoustic feature information; and σ denotes the standard deviation of the acoustic feature information.
Based on the above, inputting the acoustic feature information into formula (1) determines the emotional measurement value corresponding to the acoustic feature information.
In a specific implementation, before executing the personality recognition method provided by the present invention, the relevant parameters involved in the preset algorithm also need to be determined. The general process is: first obtain speech segments (training-set data) from a corpus; after obtaining the speech segments, determine the personality recognition result corresponding to each segment. That is, speech segments with known personality recognition results are input into the preset algorithm in advance, and the relevant parameters of the preset algorithm are determined on that basis.
For a better understanding of the present invention, the SVM algorithm with a Gaussian kernel function is taken as an example. Before executing the present invention, the specific values of a_i*, z, σ, and b* in formula (1) need to be determined. Based on the speech segments in the corpus, the personality recognition result corresponding to each segment can be determined in advance. The acoustic feature information in the speech segments is then extracted, giving the x in formula (1). Since formula (1) contains four unknown parameters, at least four speech segments and their corresponding personality recognition results are needed to determine them; for example, if the personality recognition result is an irritable personality, f(x) in formula (1) is +1, and if the personality recognition result is not an irritable personality, f(x) is -1. The relevant parameters of the SVM algorithm with a Gaussian kernel function can thus be determined.
Further, when determining the relevant parameters of the SVM algorithm from speech segments with known personality recognition results, the input speech segments are annotated in advance; specifically, the NEO-PI-R method may be used to annotate them. To ensure the accuracy of the personality recognition results, the personality recognition results corresponding to the selected speech segments should not all be of a single class.
Preferably, the relevant parameters in formula (1) may also be determined by repeated cross-validation; for example, the number of cross-validation folds may be, but is not limited to, 5. This avoids randomness in the determined parameters.
In a specific implementation, taking 5-fold cross-validation as an example, 5 speech segments are first obtained from the corpus and the personality recognition result corresponding to each segment is determined. The acoustic feature information of each segment is then extracted to compose the training set, which contains N training samples; each training sample consists of the acoustic feature information of a segment and its personality recognition result. Following the basic principle of 5-fold cross-validation, any 4 groups of training samples are selected from the training set as training data and the remaining group is used as test data; one round of cross-validation yields one set of SVM parameter values. This is repeated a further 4 times to obtain 4 more sets, and from the 5 resulting sets of SVM parameter values the optimal set is determined as the final parameter values of the SVM algorithm.
Preferably, in order to improve the accuracy of the personality recognition result, a penalty coefficient C may also be introduced into the SVM algorithm provided by the embodiment of the present invention, and the parameter values of the SVM algorithm are then determined according to formula (3):
min_a (1/2) Σ_{i=1..N} Σ_{j=1..N} a_i · a_j · h_i · h_j · K(x_i, x_j) - Σ_{i=1..N} a_i    (3)
where a_i denotes the Lagrange multiplier vector and h_i, h_j are the values corresponding to the personality recognition results of the samples in the training set, taken from {-1, +1}. The constraints of formula (3) are given by formula (4):
Σ_{i=1..N} a_i · h_i = 0, with 0 ≤ a_i ≤ C for i = 1, ..., N    (4)
In a specific implementation, if the personality recognition result corresponding to a sample in the training set is an irritable personality, its corresponding value is -1; otherwise its corresponding value is +1. When determining the penalty coefficient C, a grid search method may be used to select the best penalty coefficient, and the optimal Lagrange multiplier vector a_i* of the SVM algorithm is then determined according to formulas (3) and (4). An a_j* lying in the open interval (0, C) is then chosen to calculate the optimal hyperplane intercept b* = h_j - Σ_{i=1..N} a_i* · h_i · K(x_i, x_j). On this basis, by introducing the penalty coefficient, the SVM algorithm determined from it can obtain a more accurate personality recognition result when the personality recognition method provided by the embodiment of the present invention is executed, improving the user experience.
Preferably, in order to improve the accuracy of the personality recognition result, a kernel parameter γ may also be introduced into the SVM algorithm provided by the embodiment of the present invention; that is, the Gaussian kernel function in the SVM algorithm with a Gaussian kernel function used in the embodiment may introduce a gamma parameter γ, giving the Gaussian kernel function expression:
K(x, z) = exp(-γ · ‖x - z‖²)    (5)
In a specific implementation, the gamma parameter γ may be determined using a grid search method. Formula (5) is then substituted into formula (1) as the new Gaussian kernel function, and the personality recognition result of the subject is determined, so that the SVM algorithm with the gamma-parameterized Gaussian kernel function determines a more accurate personality recognition result, improving the user experience.
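The grid search over the penalty coefficient C and the kernel parameter γ mentioned above amounts to scoring every candidate pair and keeping the best one. A minimal sketch, where `score_fn` is a placeholder for the actual SVM training-and-validation run (e.g. cross-validated accuracy):

```python
def grid_search(score_fn, C_grid, gamma_grid):
    """Evaluate every (C, gamma) pair on the grid and return the best one."""
    best_score, best_pair = float("-inf"), None
    for C in C_grid:
        for gamma in gamma_grid:
            s = score_fn(C, gamma)          # e.g. cross-validated accuracy
            if s > best_score:
                best_score, best_pair = s, (C, gamma)
    return best_pair

# Usage with a toy scoring function peaking at C = 1.0, gamma = 0.5:
score = lambda C, g: -(C - 1.0) ** 2 - (g - 0.5) ** 2
best_C, best_gamma = grid_search(score, [0.1, 1.0, 10.0], [0.1, 0.5, 1.0])
```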
Preferably, the emotional measurement value corresponding to the acoustic feature information may also be determined using one of the following preset algorithms: the ELM (Extreme Learning Machine) algorithm, a GMM (Gaussian Mixture Model), an ANN (Artificial Neural Network) algorithm, or an LR (Logistic Regression) algorithm.
In the personality recognition method provided in the embodiment of the present invention, a speech segment of the measurand is obtained, acoustic feature information is extracted from the obtained speech segment, and the acoustic feature information is then processed using a preset algorithm: the emotional measurement value corresponding to the acoustic feature information is first determined, and the emotional measurement value is then compared with a preset mood threshold value, thereby obtaining the personality recognition result of the measurand. Using the method provided by the present invention, the personality of the measurand can not only be analyzed accurately, but the personality recognition result of the measurand can also be obtained in time, further improving user experience.
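The complete flow just summarized (obtain the speech segment, extract acoustic features, score them with the preset algorithm, compare with the preset mood threshold) can be sketched end to end. The feature set, scoring function and threshold value below are illustrative placeholders, not the patent's trained model.

```python
import numpy as np

def extract_features(signal):
    # Two toy acoustic features: zero-crossing rate and energy root mean
    # square, computed over the whole clip.
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([zcr, rms])

def recognize_personality(signal, score_fn, mood_threshold=0.0):
    # score_fn stands in for the preset algorithm (e.g. the SVM decision
    # value). Per the method: a value >= the preset mood threshold means
    # an irritable personality, otherwise not.
    value = score_fn(extract_features(signal))
    return "irritable" if value >= mood_threshold else "not irritable"
```

A real deployment would replace `score_fn` with the trained model of formula (1) and `extract_features` with the full feature set of the embodiments.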
Embodiment two
Based on the same inventive concept, an embodiment of the present invention further provides a personality identification device. Since the principle by which the device solves the problem is similar to that of the personality recognition method, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
As shown in Fig. 2, which is a structural schematic diagram of the personality identification device provided by Embodiment 2 of the present invention, the device includes an acquiring unit 21, an extraction unit 22 and a determination unit 23, wherein:
the acquiring unit 21 is configured to obtain a speech segment of the measurand;
the extraction unit 22 is configured to extract, according to the speech segment, the acoustic feature information in the speech segment;
the determination unit 23 is configured to process the acoustic feature information using a preset algorithm to determine the personality recognition result of the measurand.
In specific implementation, the determination unit 23 specifically includes a first determination subunit, a comparing subunit and a second determination subunit, wherein:
the first determination subunit is configured to process the acoustic feature information using the preset algorithm to determine the emotional measurement value corresponding to the acoustic feature information;
the comparing subunit is configured to compare the emotional measurement value with the preset mood threshold value to obtain a comparison result;
the second determination subunit is configured to determine the personality recognition result of the measurand according to the comparison result obtained by the comparing subunit.
Further, the second determination subunit is specifically configured to: if the comparison result is that the emotional measurement value is greater than or equal to the preset mood threshold value, determine that the personality recognition result of the measurand is an irritable personality; or, if the comparison result is that the emotional measurement value is less than the preset mood threshold value, determine that the personality recognition result of the measurand is not an irritable personality.
Preferably, the first determination subunit is specifically configured to process the acoustic feature information according to the following formula to determine the emotional measurement value corresponding to the acoustic feature information:

f(x) = Σ_{i=1}^{N} a_i* · y_i · K(x, z) + b*

wherein x indicates the acoustic feature information;
f(x) indicates the emotional measurement value corresponding to the acoustic feature information;
a_i* indicates the optimal Lagrange multiplier vector;
b* indicates the optimal hyperplane intercept;
y_i is a predetermined value, whose value is in {−1, +1};
N indicates the number of samples in the training set;
K(x, z) is the Gaussian kernel function, K(x, z) = exp(−‖x − z‖² / (2σ²)), wherein z indicates the mean value of the acoustic feature information and σ indicates the standard deviation of the acoustic feature information.
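Under the symbol definitions above, the decision value can be computed directly. The NumPy sketch below uses toy support samples, multipliers and intercept rather than a trained model, and uses the conventional per-sample SVM kernel form K(x, x_i) in place of the patent's fixed-mean kernel argument z; that substitution is an assumption for illustration.

```python
import numpy as np

def gaussian_kernel(x, z, sigma):
    # K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

def emotional_measurement(x, samples, alphas, ys, b, sigma=1.0):
    # f(x) = sum_i a_i* y_i K(x, x_i) + b*; comparing f(x) with the
    # preset mood threshold yields the personality recognition result.
    return sum(a * y * gaussian_kernel(x, xi, sigma)
               for a, y, xi in zip(alphas, ys, samples)) + b
```

With multipliers and intercept obtained from SVM training, f(x) ≥ threshold would map to the irritable-personality result described above.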
Preferably, the acoustic feature information includes at least one of the following: zero-crossing rate, energy root mean square, fundamental frequency, harmonic-to-noise ratio and 12-dimensional Mel-frequency cepstral coefficients.
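The first three of the features listed can be estimated from a raw waveform with short NumPy routines. The autocorrelation-based F0 estimate and the 50-500 Hz search band are simplifying assumptions, and MFCC and harmonic-to-noise-ratio extraction (which need framing and spectral processing) are omitted.

```python
import numpy as np

def acoustic_features(signal, sr):
    # Zero-crossing rate: sign changes per adjacent sample pair.
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0
    # Energy root mean square.
    rms = np.sqrt(np.mean(signal ** 2))
    # Fundamental frequency: autocorrelation peak, searched only over
    # lags corresponding to 50-500 Hz to avoid the trivial lag-0 peak.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = sr // 500, sr // 50
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sr / lag
    return zcr, rms, f0
```

For a pure 200 Hz sine sampled at 8 kHz this recovers zcr ≈ 0.05, rms ≈ 0.707 and f0 ≈ 200 Hz.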
For convenience of description, each part described above is divided by function into modules (or units) and described respectively. Of course, when implementing the present invention, the functions of the modules (or units) may be realized in one or more pieces of software or hardware.
The personality identification device provided by the embodiments of the present application may be realized by a computer program. Those skilled in the art should appreciate that the above module dividing manner is only one of numerous possible module dividing manners; division into other modules, or no division into modules at all, should fall within the protection scope of the present application as long as the personality identification device has the above functions.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device generate a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory generate a manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to generate computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, once persons skilled in the art know the basic creative concept, additional changes and modifications can be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. In this way, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these modifications and variations.
Claims (10)
1. A personality recognition method, characterized by comprising:
obtaining a speech segment of a measurand;
extracting, according to the speech segment, acoustic feature information in the speech segment;
processing the acoustic feature information using a preset algorithm to determine a personality recognition result of the measurand.
2. The method according to claim 1, characterized in that processing the acoustic feature information using a preset algorithm to determine the personality recognition result of the measurand specifically comprises:
processing the acoustic feature information using the preset algorithm to determine an emotional measurement value corresponding to the acoustic feature information;
comparing the emotional measurement value with a preset mood threshold value to obtain a comparison result;
determining, according to the comparison result, the personality recognition result of the measurand.
3. The method according to claim 2, characterized in that determining, according to the comparison result, the personality recognition result of the measurand specifically comprises:
if the comparison result is that the emotional measurement value is greater than or equal to the preset mood threshold value, determining that the personality recognition result of the measurand is an irritable personality; or
if the comparison result is that the emotional measurement value is less than the preset mood threshold value, determining that the personality recognition result of the measurand is not an irritable personality.
4. The method according to claim 2, characterized in that processing the acoustic feature information using a preset algorithm to determine the emotional measurement value corresponding to the acoustic feature information specifically comprises:
processing the acoustic feature information according to the following formula to determine the emotional measurement value corresponding to the acoustic feature information:

f(x) = Σ_{i=1}^{N} a_i* · y_i · K(x, z) + b*

wherein x indicates the acoustic feature information;
f(x) indicates the emotional measurement value corresponding to the acoustic feature information;
a_i* indicates the optimal Lagrange multiplier vector;
b* indicates the optimal hyperplane intercept;
y_i is a predetermined value, whose value is in {−1, +1};
N indicates the number of samples in the training set;
K(x, z) is the Gaussian kernel function, K(x, z) = exp(−‖x − z‖² / (2σ²)), wherein z indicates the mean value of the acoustic feature information;
σ indicates the standard deviation of the acoustic feature information.
5. The method according to any one of claims 1 to 4, characterized in that the acoustic feature information includes at least one of the following: zero-crossing rate, energy root mean square, fundamental frequency, harmonic-to-noise ratio and 12-dimensional Mel-frequency cepstral coefficients.
6. A personality identification device, characterized by comprising:
an acquiring unit, configured to obtain a speech segment of a measurand;
an extraction unit, configured to extract, according to the speech segment, acoustic feature information in the speech segment;
a determination unit, configured to process the acoustic feature information using a preset algorithm to determine a personality recognition result of the measurand.
7. The device according to claim 6, characterized in that the determination unit specifically comprises a first determination subunit, a comparing subunit and a second determination subunit, wherein:
the first determination subunit is configured to process the acoustic feature information using the preset algorithm to determine an emotional measurement value corresponding to the acoustic feature information;
the comparing subunit is configured to compare the emotional measurement value with a preset mood threshold value to obtain a comparison result;
the second determination subunit is configured to determine, according to the comparison result obtained by the comparing subunit, the personality recognition result of the measurand.
8. The device according to claim 7, characterized in that the second determination subunit is specifically configured to: if the comparison result is that the emotional measurement value is greater than or equal to the preset mood threshold value, determine that the personality recognition result of the measurand is an irritable personality; or, if the comparison result is that the emotional measurement value is less than the preset mood threshold value, determine that the personality recognition result of the measurand is not an irritable personality.
9. The device according to claim 7, characterized in that the first determination subunit is specifically configured to process the acoustic feature information according to the following formula to determine the emotional measurement value corresponding to the acoustic feature information:

f(x) = Σ_{i=1}^{N} a_i* · y_i · K(x, z) + b*

wherein x indicates the acoustic feature information;
f(x) indicates the emotional measurement value corresponding to the acoustic feature information;
a_i* indicates the optimal Lagrange multiplier vector;
b* indicates the optimal hyperplane intercept;
y_i is a predetermined value, whose value is in {−1, +1};
N indicates the number of samples in the training set;
K(x, z) is the Gaussian kernel function, K(x, z) = exp(−‖x − z‖² / (2σ²)), wherein z indicates the mean value of the acoustic feature information;
σ indicates the standard deviation of the acoustic feature information.
10. The device according to any one of claims 6 to 9, characterized in that the acoustic feature information includes at least one of the following: zero-crossing rate, energy root mean square, fundamental frequency, harmonic-to-noise ratio and 12-dimensional Mel-frequency cepstral coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710272170.9A CN108735233A (en) | 2017-04-24 | 2017-04-24 | A kind of personality recognition methods and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108735233A true CN108735233A (en) | 2018-11-02 |
Family
ID=63934381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710272170.9A Pending CN108735233A (en) | 2017-04-24 | 2017-04-24 | A kind of personality recognition methods and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108735233A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1975856A (en) * | 2006-10-30 | 2007-06-06 | 邹采荣 | Speech emotion identifying method based on supporting vector machine |
CN102663432A (en) * | 2012-04-18 | 2012-09-12 | 电子科技大学 | Kernel fuzzy c-means speech emotion identification method combined with secondary identification of support vector machine |
CN103544963A (en) * | 2013-11-07 | 2014-01-29 | 东南大学 | Voice emotion recognition method based on core semi-supervised discrimination and analysis |
CN104200814A (en) * | 2014-08-15 | 2014-12-10 | 浙江大学 | Speech emotion recognition method based on semantic cells |
US20160019915A1 (en) * | 2014-07-21 | 2016-01-21 | Microsoft Corporation | Real-time emotion recognition from audio signals |
CN106250855A (en) * | 2016-08-02 | 2016-12-21 | 南京邮电大学 | A kind of multi-modal emotion identification method based on Multiple Kernel Learning |
CN106504772A (en) * | 2016-11-04 | 2017-03-15 | 东南大学 | Speech-emotion recognition method based on weights of importance support vector machine classifier |
CN106548788A (en) * | 2015-09-23 | 2017-03-29 | 中国移动通信集团山东有限公司 | A kind of intelligent emotion determines method and system |
CN106570496A (en) * | 2016-11-22 | 2017-04-19 | 上海智臻智能网络科技股份有限公司 | Emotion recognition method and device and intelligent interaction method and device |
Non-Patent Citations (2)
Title |
---|
XI, Ji et al.: "Speech Emotion Recognition Algorithm Based on Improved Multiple Kernel Learning", Journal of Data Acquisition and Processing * |
ZHAO, Li: "Speech Signal Processing", 30 June 2009, China Machine Press * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181102 |