CN103440864A - Personality characteristic forecasting method based on voices - Google Patents
- Publication number
- CN103440864A CN103440864A CN2013103292952A CN201310329295A CN103440864A CN 103440864 A CN103440864 A CN 103440864A CN 2013103292952 A CN2013103292952 A CN 2013103292952A CN 201310329295 A CN201310329295 A CN 201310329295A CN 103440864 A CN103440864 A CN 103440864A
- Authority
- CN
- China
- Prior art keywords
- personality
- multinomial
- voice
- acoustics
- prosodic features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a voice-based personality trait prediction method. The method is implemented as follows: personality assessments are administered to a plurality of reference subjects to obtain multiple personality trait factor scores for each; speech segments of the reference subjects are collected, multiple acoustic-prosodic features are extracted from them, and multiple statistical feature values are computed for each feature; a voice personality prediction machine learning model is built and trained by inputting each reference subject's trait factor scores together with the corresponding statistical feature values. At prediction time, speech segments of a subject are collected, their acoustic-prosodic features and statistical features are extracted and input into the model to obtain trait factor scores corresponding to each acoustic-prosodic feature, and the per-feature scores are weighted and summed to produce and output the subject's multiple personality trait factor scores. The method has the advantages of a fast prediction process, objective and accurate results, and prediction material that is simple and convenient to collect.
Description
Technical field
The present invention relates to the field of computer application technology, and in particular to a voice-based personality trait prediction method.
Background technology
At present, personality prediction on the internet generally takes the form of written questionnaires. Although questionnaire-based personality prediction is backed by a rich body of research, such as the Big Five personality test and the Cattell Sixteen Personality Factor Questionnaire (16PF), the user must spend a great deal of time answering questions: the time required depends on the number of questions and the test-taker's answering speed, the procedure is long and tedious, test-takers easily become bored and resistant, and the accuracy of the result depends on the test-taker's subjective cooperation. This approach is therefore poorly suited to the simple, convenient "fast-food" usage model favored on the internet.
The technical solution of patent application No. 201010606120.8 discloses a personality testing method and device based on interactive voice question-and-answer supporting multiple dialect backgrounds, which converts the written question-and-answer system of a personality test into a voice question-and-answer mode. It solves the adaptability and convenience problems of special populations to some extent, but does not fundamentally resolve the excessive length of the testing process. In addition, the technical solution of patent application No. 201310059465.X discloses analyzing and predicting personality traits from a user's handwriting images; although this eliminates the lengthy answering time, handwriting images are not widely used in today's mobile and online social networking, so the prediction data is difficult to collect. The voice-based personality prediction approach of the present invention involves few steps and is simple to operate, can be deployed in many applications under mobile-internet environments, and can thereby provide accurate and efficient social services for users. How to overcome the shortcomings of existing personality trait prediction on internet and mobile platforms — long prediction time, results affected by subjective factors, and measurement data that is difficult to obtain — and to provide users with a simple, easy-to-use "fast-food" personality prediction method has therefore become a technical problem to be solved urgently.
Summary of the invention
In view of the above problems of the prior art, the problem to be solved by the present invention is to provide a voice-based personality trait prediction method whose prediction is fast, whose results are objective and accurate, and whose prediction material is simple and convenient to collect.
To solve the above problems, the technical solution adopted by the present invention is as follows.
A voice-based personality trait prediction method, whose implementation steps are as follows:
1) Building a voice personality prediction machine learning model: administer personality assessments to a plurality of selected reference subjects to obtain multiple personality trait factor scores for each reference subject, which serve as ground-truth benchmark scores for the personality trait factors; collect a plurality of speech segments of the reference subjects' normal speech, preprocess the segments and extract multiple acoustic-prosodic features from each, then extract multiple statistical feature values of those acoustic-prosodic features; build a voice personality prediction machine learning model containing mappings from acoustic-prosodic features to personality trait factor scores, and train it by inputting each reference subject's trait factor scores together with the statistical feature values corresponding to each acoustic-prosodic feature of that subject's segments;
2) Personality trait prediction: collect the subject's normal speech to obtain speech segments to be predicted; preprocess the segments and extract multiple acoustic-prosodic features and the corresponding statistical features; input these into the voice personality prediction machine learning model to perform regression analysis of the trait factor scores, obtaining multiple personality trait factor scores corresponding to each acoustic-prosodic feature and its statistics; then compute a weighted sum of the corresponding trait factor scores across the acoustic-prosodic features, and finally output the subject's multiple personality trait factor scores.
As further improvements of the technical solution of the present invention:
The personality assessment administered to the plurality of selected reference subjects in step 1) is specifically one of the Big Five personality test, the Minnesota Multiphasic Personality Inventory, and the Cattell 16PF test.
The detailed steps of preprocessing the speech segments and extracting multiple acoustic-prosodic features in step 1) and step 2) are as follows: apply pre-emphasis, windowing, framing, and endpoint detection to each speech segment to obtain preprocessed segments; then, from each preprocessed segment, extract multiple acoustic-prosodic features including Mel-frequency cepstral coefficients, linear prediction cepstral coefficients, perceptual linear prediction coefficients, pitch, the first two formants, energy, voiced-segment length, unvoiced-segment length, short-time zero-crossing rate, harmonics-to-noise ratio, and the long-term average spectrum.
Extracting the multiple statistical feature values of the acoustic-prosodic features in step 1) specifically means extracting several of the following for each acoustic-prosodic feature: maximum, minimum, mean, variance, relative entropy, slope, and difference values.
The voice personality prediction machine learning model in step 1) is specifically one of a Gaussian-kernel support vector machine statistical model, a logistic regression model, a decision tree model, a least-squares fitting model, a perceptron model, a boosting model, a hidden Markov model, a Gaussian mixture model, a neural network model, and a deep learning model.
The present invention has the following technical effects. Through the voice personality prediction machine learning model built in advance, personality traits can be predicted from any speech segment the user provides: statistical learning establishes the mapping between speech features and personality trait factors, from which each personality factor index is predicted. This overcomes the shortcomings of traditional personality prediction — long prediction time, results affected by subjective factors, and measurement material that is difficult to obtain — and takes full advantage of how easily sound material is obtained in today's online and mobile social networking. The acoustic-prosodic features of any speech segment submitted by the user are extracted, statistical learning computes the multiple personality trait factor scores corresponding to the segment, and the scores are weighted and summed to obtain the subject's final comprehensive personality trait scores. On this basis, fast personality-based social services can be provided for the user, such as matchmaking, interpersonal-relationship prediction, and career planning. The method has the advantages of fast prediction, objective and accurate results, simple and convenient material collection, and a wide range of applications.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the method of an embodiment of the present invention.
Fig. 2 is a schematic diagram of the principle of personality trait prediction in an embodiment of the present invention.
Embodiment
As shown in Fig. 1, the implementation steps of the voice-based personality trait prediction method of this embodiment are as follows.
Step one: build the voice personality prediction machine learning model.
1.1) Administer personality assessments to the plurality of selected reference subjects to obtain multiple personality trait factor scores for each, serving as the ground-truth benchmark scores of the personality trait factors. In this embodiment, the assessment is specifically the Big Five personality test, which yields five personality trait factor scores for each reference subject: Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Alternatively, the Minnesota Multiphasic Personality Inventory or the Cattell 16PF test may be used; these likewise yield multiple personality trait factor scores, and the number of trait factor scores varies with the particular assessment method.
1.2) Collect a plurality of speech segments of the reference subjects' normal speech, preprocess the segments and extract multiple acoustic-prosodic features. In this embodiment, 400 reference subjects are selected, and each records 10 arbitrary segments of normal speech of about 15 seconds, giving 4000 speech segments in total; since an experimental data set of more than 300 samples generally meets the needs of psychological analysis, the speech segments used to build the model satisfy the relevant sampling standard. About two thirds of the segments are used as the training set and the remaining third as the test set. The detailed preprocessing and feature-extraction steps are as follows: each speech segment is preprocessed (pre-emphasis, windowing, framing, and endpoint detection are applied in turn), and from each preprocessed segment the following acoustic-prosodic features are extracted: Mel-frequency cepstral coefficients (MFCC), pitch (the number of vocal-fold vibrations per second, related to tone and intonation), the first two formants (F1 and F2), energy, voiced-segment length (L0), unvoiced-segment length (L1, which combined with L0 relates to speaking rate), perceptual linear prediction (PLP) coefficients, short-time zero-crossing rate, harmonics-to-noise ratio, and the long-term average spectrum (Long-Term Average Spectrum).
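As an illustration only — not the patented implementation — the preprocessing chain and two of the simpler features listed above (short-time energy and zero-crossing rate) can be sketched in pure Python; all function names and parameter values here are assumptions.

```python
# Sketch of pre-emphasis, framing with a Hamming window, and two of the
# listed acoustic-prosodic features (short-time energy, zero-crossing rate).
import math

def preemphasis(signal, alpha=0.97):
    # y[n] = x[n] - alpha * x[n-1]: boosts high frequencies before analysis.
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def frames(signal, frame_len=256, hop=128):
    # Split the signal into overlapping frames and apply a Hamming window.
    out = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        out.append([s * (0.54 - 0.46 * math.cos(2 * math.pi * i / (frame_len - 1)))
                    for i, s in enumerate(frame)])
    return out

def short_time_energy(frame):
    return sum(s * s for s in frame)

def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose signs differ.
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)
```

For example, a 200 Hz tone sampled at 11025 Hz (the rate used in this embodiment) yields per-frame zero-crossing rates around 0.036, while the higher-pitched a signal is, the higher its rate.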
1.3) Extract the multiple statistical feature values of the acoustic-prosodic features. In this embodiment, this specifically means computing several of the following for each acoustic-prosodic feature: maximum (Max), minimum (Min), mean (Mean), variance (Stdev), KL relative entropy, slope, and difference values.
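A minimal pure-Python sketch of these per-feature statistics follows; the function name, and the choice of a least-squares trend line for "slope" and the mean absolute first difference for the "difference value", are assumptions for illustration.

```python
# Compute summary statistics over a sequence of per-frame feature values
# (e.g. the pitch track of one speech segment). Requires len(values) >= 2.
def feature_statistics(values):
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    # Least-squares slope of the values against the frame index,
    # capturing the feature's trend over the segment.
    x_mean = (n - 1) / 2
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (v - mean)
                for x, v in enumerate(values)) / denom
    diffs = [b - a for a, b in zip(values, values[1:])]
    return {
        "max": max(values),
        "min": min(values),
        "mean": mean,
        "variance": variance,
        "slope": slope,
        "mean_abs_diff": sum(abs(d) for d in diffs) / len(diffs),
    }
```

The resulting fixed-length statistic vector is what gets paired with the subject's trait factor scores during training.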
1.4) Build a voice personality prediction machine learning model containing mappings from acoustic-prosodic features to personality trait factor scores, and train it by inputting each reference subject's multiple personality trait factor scores together with the statistical feature values corresponding to each acoustic-prosodic feature of that subject's speech segments.
In this embodiment, the model into which each reference subject's multiple personality trait factor scores and the corresponding statistical feature values of the multiple acoustic-prosodic features of their speech segments are input is specifically a Gaussian-kernel support vector machine statistical model; each acoustic-prosodic feature of each speech segment corresponds to the five personality trait factor scores of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Other voice personality prediction machine learning models may also be adopted as needed, including a logistic regression model, decision tree model, least-squares fitting model, perceptron model, boosting model, hidden Markov model, Gaussian mixture model, neural network model, or deep learning model; whichever model is used, its accuracy is related to the number of training samples, and the more training samples there are, the higher the accuracy. After the reference subjects' personality trait factor scores and the corresponding statistical feature values are input into the Gaussian-kernel support vector machine statistical model, training is completed and a model containing the mapping from acoustic-prosodic features to personality trait factor scores is obtained — the aforementioned voice personality prediction machine learning model. Because this model contains that mapping, any speech segment provided by the user can be used for personality prediction: each personality factor index is predicted through the mapping between the segment's acoustic-prosodic features and the trait factor scores, laying the foundation for personality trait prediction.
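The embodiment trains a Gaussian-kernel SVM, and least-squares fitting is one of the alternative models listed above. The simplest one-dimensional instance of that alternative — mapping a single statistical feature value to one trait factor score — can be sketched as follows; the function names and the toy training data are illustrative, not from the patent.

```python
# Closed-form simple linear regression: score ~ w * feature + b,
# fitted by least squares over the reference subjects.
def fit_least_squares(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    w = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
    b = y_mean - w * x_mean
    return w, b

def predict(model, x):
    w, b = model
    return w * x + b

if __name__ == "__main__":
    # Toy data: one statistical feature value per reference subject
    # (e.g. mean pitch in Hz) paired with one assessed trait factor score.
    feature_values = [120.0, 150.0, 180.0, 210.0]
    trait_scores = [2.0, 3.0, 4.0, 5.0]
    model = fit_least_squares(feature_values, trait_scores)
    print(predict(model, 165.0))  # close to 3.5 for this exactly linear toy data
```

In the full method there is one such regressor (or one multi-output SVM) per acoustic-prosodic feature and per trait factor, trained on all reference subjects' segments.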
Step two: personality trait prediction.
2.1) Collect the subject's normal speech to obtain the speech segments to be predicted, preprocess the segments and extract the multiple statistical features corresponding to the multiple acoustic-prosodic features. Speech segments can be collected in two ways: first, the user selects an already recorded speech segment file on a mobile phone, computer, tablet, or other electronic device and submits it over the network to the voice collection interface of an application of the method of this embodiment; second, the user uses the real-time recording function of a system applying the method of this embodiment to record a segment and submit it to the voice collection interface. In this embodiment, the voice collection interface receives speech segment audio files submitted by the user over the network; the sampling rate is 11025 Hz, and the audio files are all saved in wav format. The steps for preprocessing the segments and extracting the multiple acoustic-prosodic features are the same as in step 1.2) and are not repeated here.
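Validating an uploaded file against the embodiment's 11025 Hz wav format can be done with Python's standard-library wave module; this sketch generates a short tone standing in for an uploaded segment, and the file name, tone, and helper names are assumptions.

```python
# Write a demo 16-bit mono wav at 11025 Hz, then check its format as the
# voice collection interface might before accepting an uploaded segment.
import math
import struct
import wave

EXPECTED_RATE = 11025  # sampling rate used in this embodiment

def write_demo_wav(path, seconds=0.5, freq=440.0):
    n = int(EXPECTED_RATE * seconds)
    data = b"".join(
        struct.pack("<h", int(20000 * math.sin(2 * math.pi * freq * i / EXPECTED_RATE)))
        for i in range(n))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(EXPECTED_RATE)
        w.writeframes(data)

def check_segment(path):
    # Accept only segments whose sampling rate matches the expected 11025 Hz.
    with wave.open(path, "rb") as w:
        return w.getframerate() == EXPECTED_RATE, w.getnframes()
```

A real deployment would also resample mismatched files rather than merely rejecting them, but that is beyond this sketch.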
2.2) Input the multiple acoustic-prosodic features and corresponding statistical features into the voice personality prediction machine learning model to perform regression analysis of the personality trait factor scores, obtaining multiple trait factor scores for each acoustic-prosodic feature and its statistics. Because the model contains the mapping from acoustic-prosodic features to personality trait factor scores, performing this regression analysis yields scores for the five personality trait factors of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. In the end, each acoustic-prosodic feature corresponds to five personality trait factor scores.
2.3) Compute a weighted sum of the corresponding personality trait factor scores across the acoustic-prosodic features, and finally output the subject's multiple personality trait factor scores.
As shown in Fig. 2, this embodiment first collects speech segments and extracts multiple acoustic-prosodic features and statistical features through step 2.1): the extracted acoustic-prosodic features include pitch, formants (the first two, F1 and F2), and so on, and the computed statistical features include the maximum, minimum, mean, variance, relative entropy, and so on. After the features are input into the voice personality prediction machine learning model in step 2.2) for regression analysis, each acoustic-prosodic feature yields scores for the five personality trait factors of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Finally, in step 2.3), the five trait factor scores are each combined by weighted summation into the final score for the corresponding personality factor, and the five predicted personality factor index values are output as the subject's final personality trait prediction result.
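The weighted summation of step 2.3) can be sketched as follows; the patent does not publish its feature weights, so the weights, feature names, and scores here are illustrative placeholders.

```python
# Combine per-feature trait factor scores into final scores by weighted sum.
TRAITS = ["Neuroticism", "Extraversion", "Openness",
          "Agreeableness", "Conscientiousness"]

def combine_scores(per_feature_scores, weights):
    # per_feature_scores: {feature_name: {trait_name: score}}
    # weights: {feature_name: weight}, assumed to sum to 1.
    return {trait: sum(weights[f] * scores[trait]
                       for f, scores in per_feature_scores.items())
            for trait in TRAITS}

if __name__ == "__main__":
    scores = {
        "pitch":    dict(zip(TRAITS, [3.0, 4.0, 2.0, 5.0, 3.0])),
        "formants": dict(zip(TRAITS, [2.0, 4.0, 4.0, 3.0, 5.0])),
    }
    weights = {"pitch": 0.6, "formants": 0.4}
    print(combine_scores(scores, weights))
```

With these toy inputs the pitch-derived scores dominate each final score, reflecting the higher weight assigned to pitch.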
In summary, through the voice personality prediction machine learning model built in advance, this embodiment can predict personality from any speech segment the user provides: statistical learning establishes the mapping between speech features and personality trait factors, each personality factor index is predicted, and the shortcomings of traditional personality prediction — long prediction time, results affected by subjective factors, and measurement material that is difficult to obtain — are overcome, while full advantage is taken of how easily sound material is obtained in today's online and mobile social networking. The acoustic, prosodic, and related features of any speech segment submitted by the user are extracted, statistical learning computes the multiple personality trait factor scores corresponding to the segment, and the scores are weighted and summed to obtain the subject's final comprehensive personality trait scores, on the basis of which social services can be provided. In comparative prediction-accuracy experiments on a plurality of reference subjects, the experimental data show that the prediction accuracy of this embodiment reaches about 67%, close to the roughly 75% accuracy of the manual personality assessments of the prior art, which meets the demand for fast and accurate personality trait prediction. The method can provide users with fast personality-based services such as matchmaking, interpersonal-relationship prediction, and career planning, and has the advantages of fast prediction, objective and accurate results, simple and convenient material collection, and a wide range of applications.
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited to the above embodiment; all technical solutions falling under the concept of the present invention belong to its scope of protection. It should be pointed out that, for those skilled in the art, improvements and modifications that do not depart from the principles of the present invention shall also be regarded as falling within the scope of protection of the present invention.
Claims (5)
1. A voice-based personality trait prediction method, characterized in that its implementation steps are as follows:
1) Building a voice personality prediction machine learning model: administer personality assessments to a plurality of selected reference subjects to obtain multiple personality trait factor scores for each reference subject, which serve as ground-truth benchmark scores for the personality trait factors; collect a plurality of speech segments of the reference subjects' normal speech, preprocess the segments and extract multiple acoustic-prosodic features from each, then extract multiple statistical feature values of those acoustic-prosodic features; build a voice personality prediction machine learning model containing mappings from acoustic-prosodic features to personality trait factor scores, and train it by inputting each reference subject's trait factor scores together with the statistical feature values corresponding to each acoustic-prosodic feature of that subject's segments;
2) Personality trait prediction: collect the subject's normal speech to obtain speech segments to be predicted; preprocess the segments and extract multiple acoustic-prosodic features and the corresponding statistical features; input these into the voice personality prediction machine learning model to perform regression analysis of the trait factor scores, obtaining multiple personality trait factor scores corresponding to each acoustic-prosodic feature and its statistics; then compute a weighted sum of the corresponding trait factor scores across the features, and finally output the subject's multiple personality trait factor scores.
2. The voice-based personality trait prediction method according to claim 1, characterized in that: in step 1), the personality assessment administered to the plurality of selected reference subjects is specifically one of the Big Five personality test, the Minnesota Multiphasic Personality Inventory, and the Cattell 16PF test.
3. The voice-based personality trait prediction method according to claim 2, characterized in that the detailed steps of preprocessing the speech segments and extracting multiple acoustic-prosodic features in step 1) and step 2) are as follows: apply pre-emphasis, windowing, framing, and endpoint detection to each speech segment to obtain preprocessed segments; then, from each preprocessed segment, extract multiple acoustic-prosodic features including Mel-frequency cepstral coefficients, linear prediction cepstral coefficients, perceptual linear prediction coefficients, pitch, the first two formants, energy, voiced-segment length, unvoiced-segment length, short-time zero-crossing rate, harmonics-to-noise ratio, and the long-term average spectrum.
4. The voice-based personality trait prediction method according to claim 3, characterized in that: extracting the multiple statistical feature values of the acoustic-prosodic features in step 1) specifically means extracting several of the following for each acoustic-prosodic feature: maximum, minimum, mean, variance, relative entropy, slope, and difference values.
5. The voice-based personality trait prediction method according to any one of claims 1 to 4, characterized in that: the voice personality prediction machine learning model in step 1) is specifically one of a Gaussian-kernel support vector machine statistical model, a logistic regression model, a decision tree model, a least-squares fitting model, a perceptron model, a boosting model, a hidden Markov model, a Gaussian mixture model, a neural network model, and a deep learning model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013103292952A CN103440864A (en) | 2013-07-31 | 2013-07-31 | Personality characteristic forecasting method based on voices |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103440864A true CN103440864A (en) | 2013-12-11 |
Family
ID=49694555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013103292952A Pending CN103440864A (en) | 2013-07-31 | 2013-07-31 | Personality characteristic forecasting method based on voices |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103440864A (en) |
2013-07-31: CN application CN2013103292952A filed; published as CN103440864A (status: Pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1694162A (en) * | 2005-03-31 | 2005-11-09 | 金庆镐 | Phonetic recognition analysing system and service method |
CN101375304A (en) * | 2006-01-31 | 2009-02-25 | 松下电器产业株式会社 | Advice apparatus, advice method, advice program and recording medium storing the advice program |
EP2233077A1 (en) * | 2007-12-07 | 2010-09-29 | Zaidanhojin Shin-Iryozaidan | Personality testing apparatus |
CN101359995A (en) * | 2008-09-28 | 2009-02-04 | 腾讯科技(深圳)有限公司 | Method and apparatus providing on-line service |
CN101999903A (en) * | 2010-12-27 | 2011-04-06 | 中国人民解放军第四军医大学 | Voice type personality characteristic detection system based on multiple dialect backgrounds |
CN103106346A (en) * | 2013-02-25 | 2013-05-15 | 中山大学 | Character prediction system based on off-line writing picture division and identification |
Non-Patent Citations (2)
Title |
---|
GELAREH MOHAMMADI ET AL.: "Automatic Personality Perception: Prediction of Trait Attribution Based on Prosodic Features", IEEE TRANSACTIONS ON AFFECTIVE COMPUTING * |
赵力 (ZHAO Li): "语音信号处理 (Speech Signal Processing)", 31 March 2003 * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3186751B1 (en) * | 2014-08-26 | 2024-07-24 | Google LLC | Localized learning from a global model |
CN105022929A (en) * | 2015-08-07 | 2015-11-04 | 北京环度智慧智能技术研究所有限公司 | Cognition accuracy analysis method for personality trait value test |
CN105069294A (en) * | 2015-08-07 | 2015-11-18 | 北京环度智慧智能技术研究所有限公司 | Calculation and analysis method for testing cognitive competence values |
CN105147304A (en) * | 2015-08-07 | 2015-12-16 | 北京环度智慧智能技术研究所有限公司 | Stimulus information compiling method of personality trait value test |
CN105147304B (en) * | 2015-08-07 | 2018-01-09 | 北京环度智慧智能技术研究所有限公司 | A kind of stimulus information preparation method of personal traits value test |
CN108065942B (en) * | 2015-08-07 | 2021-02-05 | 北京智能阳光科技有限公司 | Method for compiling stimulation information aiming at oriental personality characteristics |
CN108042147A (en) * | 2015-08-07 | 2018-05-18 | 北京环度智慧智能技术研究所有限公司 | A kind of stimulus information provides device |
CN108065942A (en) * | 2015-08-07 | 2018-05-25 | 北京环度智慧智能技术研究所有限公司 | A kind of preparation method of stimulus information for east personality characteristics |
CN105069294B (en) * | 2015-08-07 | 2018-06-15 | 北京环度智慧智能技术研究所有限公司 | A kind of calculation and analysis method for cognition ability value test |
CN108175424A (en) * | 2015-08-07 | 2018-06-19 | 北京环度智慧智能技术研究所有限公司 | A kind of test system for cognition ability value test |
CN108175424B (en) * | 2015-08-07 | 2020-12-11 | 北京智能阳光科技有限公司 | Test system for cognitive ability value test |
CN107348962B (en) * | 2017-06-01 | 2019-10-18 | 清华大学 | A kind of personal traits measurement method and equipment based on brain-computer interface technology |
CN107348962A (en) * | 2017-06-01 | 2017-11-17 | 清华大学 | A kind of personal traits measuring method and equipment based on brain-computer interface technology |
CN107689012A (en) * | 2017-09-06 | 2018-02-13 | 王锦程 | A kind of marriage and making friend's matching process |
CN108829668B (en) * | 2018-05-30 | 2021-11-16 | 平安科技(深圳)有限公司 | Text information generation method and device, computer equipment and storage medium |
CN108829668A (en) * | 2018-05-30 | 2018-11-16 | 平安科技(深圳)有限公司 | Text information generation method and device, computer equipment and storage medium |
CN109192277B (en) * | 2018-08-29 | 2021-11-02 | 沈阳康泰电子科技股份有限公司 | Psychological characteristic measuring method based on universal effective question-answering ruler |
CN109192277A (en) * | 2018-08-29 | 2019-01-11 | 沈阳康泰电子科技股份有限公司 | A kind of psychological characteristics measure based on general effective question and answer scale |
CN109672930A (en) * | 2018-12-25 | 2019-04-23 | 北京心法科技有限公司 | Personality association type emotional arousal method and apparatus |
CN111460245A (en) * | 2019-01-22 | 2020-07-28 | 刘宏军 | Multi-dimensional crowd characteristic measuring method |
CN110111810B (en) * | 2019-04-29 | 2020-12-18 | 华院数据技术(上海)有限公司 | Voice personality prediction method based on convolutional neural network |
CN110111810A (en) * | 2019-04-29 | 2019-08-09 | 华院数据技术(上海)有限公司 | Voice personality prediction technique based on convolutional neural networks |
CN110652294B (en) * | 2019-09-16 | 2020-08-25 | 清华大学 | Creativity personality trait measuring method and device based on electroencephalogram signals |
CN110652294A (en) * | 2019-09-16 | 2020-01-07 | 清华大学 | Creativity personality trait measuring method and device based on electroencephalogram signals |
CN112561474A (en) * | 2020-12-14 | 2021-03-26 | 华南理工大学 | Intelligent personality characteristic evaluation method based on multi-source data fusion |
CN112561474B (en) * | 2020-12-14 | 2024-04-30 | 华南理工大学 | Intelligent personality characteristic evaluation method based on multi-source data fusion |
CN112786054A (en) * | 2021-02-25 | 2021-05-11 | 深圳壹账通智能科技有限公司 | Intelligent interview evaluation method, device and equipment based on voice and storage medium |
CN112786054B (en) * | 2021-02-25 | 2024-06-11 | 深圳壹账通智能科技有限公司 | Intelligent interview evaluation method, device, equipment and storage medium based on voice |
CN116631446A (en) * | 2023-07-26 | 2023-08-22 | 上海迎智正能文化发展有限公司 | Behavior mode analysis method and system based on speech analysis |
CN116631446B (en) * | 2023-07-26 | 2023-11-03 | 上海迎智正能文化发展有限公司 | Behavior mode analysis method and system based on speech analysis |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN103440864A (en) | Personality characteristic forecasting method based on voices | |
Solimine et al. | An experimental investigation into passive acoustic damage detection for structural health monitoring of wind turbine blades | |
CN104200804B (en) | Various-information coupling emotion recognition method for human-computer interaction | |
Le et al. | Investigation of spectral centroid features for cognitive load classification | |
CN103559892B (en) | Oral evaluation method and system | |
Huang et al. | Depression Detection from Short Utterances via Diverse Smartphones in Natural Environmental Conditions. | |
CN103559894B (en) | Oral evaluation method and system | |
Gillespie et al. | Cross-Database Models for the Classification of Dysarthria Presence. | |
CN103730130A (en) | Detection method and system for pathological voice | |
Narendra et al. | Automatic assessment of intelligibility in speakers with dysarthria from coded telephone speech using glottal features | |
Esmaili et al. | Automatic classification of speech dysfluencies in continuous speech based on similarity measures and morphological image processing tools | |
CN102623009A (en) | Abnormal emotion automatic detection and extraction method and system on basis of short-time analysis | |
Ryant et al. | Highly accurate mandarin tone classification in the absence of pitch information | |
CN107456208A (en) | The verbal language dysfunction assessment system and method for Multimodal interaction | |
CN103366759A (en) | Speech data evaluation method and speech data evaluation device | |
Drygajlo | Automatic speaker recognition for forensic case assessment and interpretation | |
CN102592593B (en) | Emotional-characteristic extraction method implemented through considering sparsity of multilinear group in speech | |
CN101996635A (en) | English pronunciation quality evaluation method based on accent highlight degree | |
Sun et al. | Investigating glottal parameters for differentiating emotional categories with similar prosodics | |
Sabir et al. | Improved algorithm for pathological and normal voices identification | |
Morrison et al. | Voting ensembles for spoken affect classification | |
CN202758611U (en) | Speech data evaluation device | |
CN111341346A (en) | Language expression capability evaluation method and system for fusion depth language generation model | |
Hansen et al. | TEO-based speaker stress assessment using hybrid classification and tracking schemes | |
Wang et al. | MFCC-based deep convolutional neural network for audio depression recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20131211 |