CN108877840A - Emotion identification method and system based on nonlinear characteristic - Google Patents
Emotion identification method and system based on nonlinear characteristic
- Publication number
- CN108877840A (application CN201810712624.4A)
- Authority
- CN
- China
- Prior art keywords
- characteristic
- matching
- mood
- model
- emotion identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
Abstract
The present invention relates to the technical field of emotion recognition, and specifically to an emotion recognition method and system based on nonlinear features. The method includes the following steps: a speech acquisition step, which collects the user's speech; a feature extraction step, which processes and analyzes the user's speech to extract matching features, the matching features including audio features, nonlinear features, and semantic features; and a model matching step, which matches the features extracted in the feature extraction step against preset emotion models and selects the emotion model with the highest matching degree as the emotion recognition result. The method and system provided by the invention can comprehensively and accurately analyze and identify the user's emotion from multiple angles and aspects based on the user's speech input.
Description
Technical field
The present invention relates to the technical field of emotion recognition, and specifically to an emotion recognition method and system based on nonlinear features.
Background technique
Emotion is a state combining a person's feelings, thoughts, and behavior. It includes a person's psychological reaction to external or internal stimuli, as well as the physiological reactions that accompany that psychological reaction. Human emotion is closely related to physical and mental health: a person who remains for a long time in states such as anxiety, worry, sadness, anger, or depression may develop a variety of illnesses such as nervous disorders, hypertension, heart disease, ulcers, stomach ailments, and cancer, commonly referred to as psychosomatic disorders. Grasping a person's emotional condition, especially an elderly person's, is therefore highly beneficial for monitoring physical and mental health.
Emotion recognition and analysis is of great value for the elderly, especially disabled and empty-nest elderly people. With the aging of Chinese society and the pull of big cities on young people for employment and education, empty-nest elderly people will inevitably become a widespread social phenomenon. Although this group is gradually receiving attention from society, there is still a lack of effective ways to provide empty-nest elderly people with timely health monitoring and psychological comfort. Emotion analysis can reflect the emotional state of the elderly in real time and promptly feed their mood back to associated medical personnel and their children, so that the children can better understand their parents, increasing the care given to the elderly and enabling timely treatment. At present, there are no correspondingly mature and complete products and services in this field on the Chinese market. Therefore, how to provide a more objective and accurate emotion recognition method and system for the elderly has become a problem urgently needing a solution in this field.
Summary of the invention
The invention is intended to provide an emotion recognition method and system based on nonlinear features, capable of comprehensively and accurately analyzing and identifying the user's emotion from multiple angles and aspects based on the user's speech input.
In order to solve the above technical problem, this patent provides the following technical solution:
An emotion recognition method based on nonlinear features, including the following steps:
A speech acquisition step: collecting the user's speech.
A feature extraction step: processing and analyzing the user's speech to extract matching features; the matching features include audio features, nonlinear features, and semantic features.
A model matching step: matching the features extracted in the feature extraction step against preset emotion models, and selecting the emotion model with the highest matching degree as the emotion recognition result.
In the technical solution of the present invention, the matching features include three kinds: audio features, nonlinear features, and semantic features, so that the audio data is analyzed comprehensively. Audio features are traditional linear features, through which relatively stationary speech signals can be analyzed. For speech signals that are non-stationary and change greatly, nonlinear features can address problems that traditional audio features cannot solve, such as speaker-independent continuous speech analysis and high-quality low-bit-rate speech coding, making up for the shortcomings of audio features. In addition, by analyzing semantic features of the specific content the user talks about, the user's emotional state is analyzed from a macroscopic angle, which helps to further accurately judge the user's emotional state. In the technical solution of the present application, combining these three kinds of features allows the user's emotion to be identified comprehensively and accurately.
Further, the feature extraction step includes:
Step 1: calculating nonlinear features from the collected speech;
Step 2: dividing the collected speech into multiple segments;
Step 3: calculating the audio features of each segment;
Step 4: performing semantic recognition on the collected speech;
Step 5: extracting semantic features from the recognized semantic content.
The extraction of audio features is based on linear system theory, which requires the speech signal to be divided into short segments before processing, so that each segment can be treated as a deterministic stationary signal; the audio features are then calculated on the processed segments.
Further, the semantic features include keyword features, and step 5 of the feature extraction step specifically includes:
A keyword extraction step: extracting, according to a preset keyword database, all keywords appearing in the semantic content together with their frequency of occurrence.
The user's emotion is analyzed by extracting keywords such as "angry", "happy", or "feeling bad"; counting the frequency with which a keyword occurs makes it possible to confirm whether the user merely mentioned the keyword incidentally, reducing the error introduced by accidental factors.
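A minimal sketch of keyword extraction with frequency counting. The patent presupposes a preset keyword database but does not list its contents, so the lexicon below is a hypothetical example.

```python
from collections import Counter
import re

# Hypothetical keyword lexicon; the patent's actual database is not disclosed.
KEYWORD_DB = {"angry", "happy", "sad", "feeling bad"}

def extract_keywords(text, keyword_db=KEYWORD_DB):
    """Return every lexicon keyword found in the recognized text
    together with its number of occurrences."""
    text = text.lower()
    counts = Counter()
    for kw in keyword_db:
        n = len(re.findall(re.escape(kw), text))
        if n:
            counts[kw] = n
    return dict(counts)

print(extract_keywords("I am happy, so happy, not sad at all"))
```

A keyword that appears only once may be incidental, while repeated occurrences give stronger evidence, which is exactly the frequency information the step above records.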
Further, each emotion model contains a weight for each matching feature, and the model matching step includes:
Step 1: for each preset emotion model, calculating the score of each matching feature according to the weight value of that feature;
Step 2: summing the scores of the matching features to obtain a matching degree score;
Step 3: comparing the emotion models according to their matching degree scores, and choosing the emotion corresponding to the emotion model with the highest matching degree score as the emotion recognition result.
Each emotion model has a different weight distribution. The matching degree score is calculated by weighted summation, producing a matching degree score of the current speech relative to each emotion model; the emotion recognition result is then obtained from these score values.
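The weighted matching described in these steps can be sketched as follows. The feature values, model names, and weights are hypothetical placeholders, not values from the patent.

```python
# Hypothetical extracted feature values for one utterance.
features = {"pitch": 0.8, "energy": 0.6, "hurst": 0.4, "kw_angry": 1.0}

# Hypothetical per-model weight distributions (each model weights
# the same features differently, as the text describes).
emotion_models = {
    "angry":   {"pitch": 0.5, "energy": 0.3, "hurst": 0.1, "kw_angry": 0.9},
    "happy":   {"pitch": 0.4, "energy": 0.4, "hurst": 0.2, "kw_angry": 0.0},
    "neutral": {"pitch": 0.1, "energy": 0.1, "hurst": 0.3, "kw_angry": 0.0},
}

def match_emotion(features, models):
    """Score each emotion model as the weighted sum of feature values
    and return (best_emotion, all_scores)."""
    scores = {
        name: sum(w * features.get(f, 0.0) for f, w in weights.items())
        for name, weights in models.items()
    }
    best = max(scores, key=scores.get)
    return best, scores

best, scores = match_emotion(features, emotion_models)
print(best)  # the model with the highest weighted sum
```

With the placeholder numbers above, the "angry" model scores 0.5*0.8 + 0.3*0.6 + 0.1*0.4 + 0.9*1.0 = 1.52 and wins.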
Further, the audio features include pitch, energy, formants, zero-crossing rate, Teager energy operator, and Mel-frequency cepstral coefficients. These audio features are among the most important features in audio analysis; through them, stationary speech can be identified, analyzed, and processed.
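A minimal sketch of three of the listed audio features (zero-crossing rate, short-time energy, and the Teager energy operator), each computed on one frame. Pitch, formants, and MFCCs require considerably more machinery and are omitted here.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign differs."""
    signs = np.sign(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

def short_time_energy(frame):
    """Sum of squared samples in the frame."""
    return float(np.sum(np.square(frame)))

def teager_energy(frame):
    """Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1],
    a sample-by-sample estimate of instantaneous signal energy."""
    x = np.asarray(frame, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

frame = np.array([1.0, -1.0, 1.0, -1.0])
print(zero_crossing_rate(frame))            # 1.0
print(short_time_energy(frame))             # 4.0
print(teager_energy([1.0, 2.0, 3.0, 4.0]))  # [1. 1.]
```

Applied per segment, these scalars (or their statistics) become entries in the matching-feature vector.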
Further, the nonlinear features include the Hurst exponent, curvature index, Shannon entropy, Lempel-Ziv complexity, mutual information, correlation dimension, and Lyapunov exponent. These parameters make it possible to handle the noise, fluctuation, and similar characteristics of the audio, improving the accuracy of emotion analysis.
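As one concrete example from the list, Lempel-Ziv complexity can be computed with the classic LZ76 parsing of a binarized signal. The patent does not specify a binarization scheme; thresholding at the signal median is a common choice and is assumed here.

```python
def lempel_ziv_complexity(bits):
    """Number of distinct phrases in the LZ76 parsing of a binary
    sequence; higher values indicate a less regular (more complex)
    signal. `bits` may be a string like "0101" or a sequence of 0/1.
    """
    s = "".join(str(int(b)) for b in bits)
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Extend the current phrase while it already occurs earlier
        # in the sequence (overlap allowed, per the LZ76 definition).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# Classic textbook example: parses as 0|001|10|100|1000|101 -> 6 phrases.
print(lempel_ziv_complexity("0001101001000101"))  # 6
```

A strictly periodic sequence such as "0101…" collapses to very few phrases, while chaotic or noisy speech segments yield higher counts, which is why the measure helps separate non-stationary emotional speech.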
Further, this application also discloses an emotion recognition system based on nonlinear features that uses the above method, the system including:
A speech acquisition module for collecting the user's speech;
A feature extraction module for extracting matching features from the collected speech;
A model matching module for matching the matching features against preset emotion models and selecting the emotion model with the highest matching degree as the emotion recognition result.
The feature extraction module includes:
A nonlinear feature extraction submodule for extracting nonlinear features from the collected speech;
An audio feature extraction submodule for extracting audio features from the collected speech;
A semantic feature extraction submodule for performing semantic recognition on the collected speech and extracting semantic features.
Further, the audio feature extraction submodule includes an audio cutting unit and an audio feature calculation unit; the audio cutting unit is used to cut the collected speech into multiple segments, and the audio feature calculation unit is used to calculate the audio features of each segment.
Further, the semantic feature extraction submodule includes a semantic recognition unit, a keyword extraction unit, and a frequency recording unit; the semantic recognition unit is used to perform semantic recognition on the collected speech, the keyword extraction unit is used to extract keywords from the semantic content according to a preset keyword database, and the frequency recording unit is used to record the number of times each keyword occurs.
Further, the model matching module includes a model storage submodule, a matching degree calculation submodule, and a matching degree comparison submodule; the model storage submodule is used to store the emotion models, the matching degree calculation submodule is used to calculate the score of each matching feature according to an emotion model and to calculate the matching degree score, and the matching degree comparison submodule is used to compare the matching degree scores of the emotion models, select the emotion model with the highest matching degree score, and output the emotion corresponding to that emotion model as the recognition result.
Detailed description of the invention
Fig. 1 is a logic diagram of an embodiment of the emotion recognition system based on nonlinear features according to the present invention.
Specific embodiment
The invention is further described below through a specific embodiment.
The emotion recognition method based on nonlinear features of this embodiment includes the following steps:
A speech acquisition step: collecting the user's speech. In this embodiment, an intelligent terminal accompanying the elderly person collects, with the elderly person's authorization, the speech of conversations between the elderly person and other elderly people as well as conversations between the intelligent terminal and the elderly person.
A feature extraction step: processing and analyzing the user's speech to extract matching features; the matching features include audio features, nonlinear features, and semantic features.
Specifically, the feature extraction step includes:
Step 1: calculating nonlinear features from the collected speech;
Step 2: dividing the collected speech into multiple segments;
Step 3: calculating the audio features of each segment;
Step 4: performing semantic recognition on the collected speech;
Step 5: extracting semantic features from the recognized semantic content.
The audio features include pitch, energy, formants, zero-crossing rate, Teager energy operator, and Mel-frequency cepstral coefficients. The nonlinear features include the Hurst exponent, curvature index, Shannon entropy, Lempel-Ziv complexity, mutual information, correlation dimension, and Lyapunov exponent; these parameters make it possible to handle the noise, fluctuation, and similar characteristics of the audio, improving the accuracy of emotion analysis. The semantic features include keyword features.
Step 5 of the feature extraction step includes: a keyword extraction step, which extracts, according to the preset keyword database, all keywords in the semantic content together with their frequency of occurrence.
The extraction and analysis of audio features is based on linear system theory: the speech signal is divided into short segments before processing, ensuring that each segment can be treated as a deterministic stationary signal, after which the audio features are calculated on the processed segments. Cutting one stretch of speech into multiple segments also allows more fine-grained analysis and processing, which can further improve processing accuracy. Audio features such as pitch, energy, formants, zero-crossing rate, Teager energy operator, and Mel-frequency cepstral coefficients are among the most important features in audio analysis; through them, stationary speech can be identified, analyzed, and processed.
A speech signal is a complicated nonlinear process. Analyzed from the standpoints of acoustics and aerodynamics, speech involves more than the nonlinear vibration of the glottis: as the tongue moves and the shape of the vocal tract changes, the speech signal (especially for fricatives, plosives, and the like) generates vortices at the vocal tract boundary layer that ultimately form turbulence, and even when other sounds are produced the airflow ejected from the glottis still contains turbulence, which is inherently a kind of chaos. The speech time-domain waveform is self-similar and exhibits both periodicity and randomness. In this embodiment, parameters such as the Hurst exponent, curvature index, Shannon entropy, Lempel-Ziv complexity, mutual information, correlation dimension, and Lyapunov exponent make it possible to handle the noise, fluctuation, and periodicity of the audio, improving the accuracy of emotion analysis.
The user's emotion is analyzed by extracting keywords such as "angry", "happy", or "feeling bad"; counting the frequency with which a keyword occurs makes it possible to confirm whether the user merely mentioned the keyword incidentally, reducing the error introduced by accidental factors.
A model matching step: matching the features extracted in the feature extraction step against the preset emotion models, and selecting the emotion model with the highest matching degree as the emotion recognition result. In this embodiment there are six emotion models: angry, happy, disgusted, fearful, neutral, and sad. Each emotion model contains a weight for every matching feature, and the weight distribution differs between emotion models. The model matching step includes:
Step 1: for each preset emotion model, calculating the score of each matching feature according to the weight value of that feature. Specifically, for a nonlinear feature the score is obtained directly by multiplying the feature value by its weight; for an audio feature, the average value of that feature over all segments is calculated first and then multiplied by the weight to obtain the score; for a keyword, the score is calculated by multiplying the keyword's weight by its frequency.
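The per-feature scoring rules just described (nonlinear value times weight, segment-averaged audio value times weight, keyword frequency times weight) can be sketched as follows. All feature names, values, and weights are hypothetical placeholders.

```python
def model_score(nonlinear, segment_audio, keyword_freq, weights):
    """Matching-degree score for one emotion model.

    nonlinear:     {name: value} computed on the whole utterance
    segment_audio: {name: [value per segment]}
    keyword_freq:  {keyword: occurrence count}
    weights:       {name: weight} for this emotion model
    """
    score = 0.0
    for name, value in nonlinear.items():        # weight * value
        score += weights.get(name, 0.0) * value
    for name, values in segment_audio.items():   # weight * segment mean
        score += weights.get(name, 0.0) * (sum(values) / len(values))
    for kw, freq in keyword_freq.items():        # weight * frequency
        score += weights.get(kw, 0.0) * freq
    return score

s = model_score(
    nonlinear={"hurst": 0.6},
    segment_audio={"pitch": [200.0, 220.0, 210.0]},
    keyword_freq={"angry": 2},
    weights={"hurst": 0.5, "pitch": 0.01, "angry": 0.8},
)
print(round(s, 2))  # 0.5*0.6 + 0.01*210 + 0.8*2 = 4.0
```

Summing such per-feature scores is exactly step 2, and comparing the totals across the six models is step 3.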
Step 2: summing the scores of the matching features to obtain a matching degree score.
Step 3: comparing the emotion models according to their matching degree scores, and choosing the emotion corresponding to the emotion model with the highest matching degree score as the emotion recognition result.
The method further includes a data recording step, which associates emotions with events according to the emotion recognition result and the semantic content, associates events with one another according to the big-data event association rules of a background server, and then stores these event-event and event-emotion relationships to build a user emotion-event base. For example, if the elderly person is happy when talking about their child's job, the event of the child working away is associated with happiness; if the elderly person is sad when mentioning the child coming home for the New Year, that event is associated with sadness; and according to the preset association rules of the background server, the two events (the child working away and the child coming home for the New Year) are associated with each other. These association rules are obtained by administrators through big-data analysis, or can be formulated manually by administrators.
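A minimal sketch of such a user emotion-event base, assuming a simple pairwise representation of the association rules; the event names and the rule below are invented examples, not data from the patent.

```python
from collections import defaultdict

class EmotionEventBase:
    """Stores which emotions were observed when each event was
    mentioned, plus rule-based links between events."""

    def __init__(self, association_rules=()):
        self.event_emotions = defaultdict(list)
        # Each rule links a pair of events, direction-independent.
        self.rules = {frozenset(pair) for pair in association_rules}

    def record(self, event, emotion):
        self.event_emotions[event].append(emotion)

    def related(self, event):
        """Events linked to `event` by a preset association rule."""
        return {e for pair in self.rules if event in pair
                for e in pair if e != event}

rules = [("child works away", "child comes home for New Year")]
base = EmotionEventBase(rules)
base.record("child works away", "happy")
base.record("child comes home for New Year", "sad")
print(base.related("child works away"))
```

Looking up a sad event's related events whose recorded emotion is happy is the retrieval the consolation step below relies on.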
The method further includes an emotion processing step, which includes:
Step 1: according to the current user's emotion, judging whether the current user's emotion is in a negative state; anger, disgust, fear, and sadness count as negative states. If so, step 2 is executed; if not, the operation ends.
Step 2: according to the speech content of the previously collected conversations, identifying the event associated with the negative state and sending it to the user's relatives or caregiver as the reason for the elderly person's mood swing.
Step 3: searching the user's emotion-event base for events whose associated emotion is happiness and which are associated with the event that currently caused the elderly person's negative emotion, and presenting them to the user in forms such as voice or video, thereby achieving a consoling effect. For example, if the elderly person is sad because the child is not coming home for the New Year, the system automatically associates this with the event of the child working away and plays to the elderly person content such as the child's career success, guiding the elderly person to consider the matter from a positive angle and achieving comfort. Through this step, when the user is in a bad mood, the user can be led to view the same or a related event from a positive angle, realizing an emotional comfort effect.
In the technical solution of this embodiment, the matching features include three kinds: audio features, nonlinear features, and semantic features, so that the audio data is analyzed comprehensively. Audio features are traditional linear features, through which relatively stationary speech signals can be analyzed. For speech signals that are non-stationary and change greatly, nonlinear features can address problems that traditional audio features cannot solve, such as speaker-independent continuous speech analysis and high-quality low-bit-rate speech coding, making up for the shortcomings of audio features. By analyzing semantic features of the specific content the user talks about, the user's emotional state is analyzed from a macroscopic angle, helping to further accurately judge the user's emotional state. In the technical solution of the present application, combining these three kinds of features allows the user's emotion to be identified comprehensively and accurately. Each emotion model has a different weight distribution; the matching degree score is calculated by weighted summation, producing a matching degree score of the current speech relative to each emotion model, from which the emotion recognition result is obtained.
As shown in Fig. 1, this embodiment also discloses an emotion recognition system based on nonlinear features that uses the above method, the system including:
A speech acquisition module for collecting the user's speech.
A feature extraction module for extracting matching features from the collected speech. The feature extraction module includes a nonlinear feature extraction submodule, an audio feature extraction submodule, and a semantic feature extraction submodule. The nonlinear feature extraction submodule is used to extract nonlinear features from the collected speech; the audio feature extraction submodule is used to extract audio features from the collected speech; and the semantic feature extraction submodule is used to perform semantic recognition on the collected speech and extract semantic features. The audio feature extraction submodule includes an audio cutting unit and an audio feature calculation unit: the audio cutting unit cuts the collected speech into multiple segments, and the audio feature calculation unit calculates the audio features of each segment. The semantic feature extraction submodule includes a semantic recognition unit, a keyword extraction unit, and a frequency recording unit: the semantic recognition unit performs semantic recognition on the collected speech, the keyword extraction unit extracts keywords from the semantic content according to the preset keyword database, and the frequency recording unit records the number of times each keyword occurs.
A model matching module for matching the matching features against the preset emotion models and selecting the emotion model with the highest matching degree as the emotion recognition result. The model matching module includes a model storage submodule, a matching degree calculation submodule, and a matching degree comparison submodule: the model storage submodule stores the emotion models, the matching degree calculation submodule calculates the score of each matching feature according to an emotion model and calculates the matching degree score, and the matching degree comparison submodule compares the matching degree scores of the emotion models, selects the emotion model with the highest matching degree score, and outputs the emotion corresponding to that emotion model as the recognition result.
A data recording module, used to associate emotions with events according to the emotion recognition result and the semantic content, associate events with one another according to the big-data event association rules of the background server, and then store these event-event and event-emotion relationships to build the user emotion-event base. For example, if the elderly person is happy when talking about their child's job, the event of the child working away is associated with happiness; if the elderly person is sad when mentioning the child coming home for the New Year, that event is associated with sadness; and according to the preset association rules of the background server, the two events are associated with each other. These association rules are obtained by administrators through big-data analysis, or can be formulated manually by administrators.
The system further includes an emotion processing module, used to judge, according to the current user's emotion, whether the current user's emotion is in a negative state; anger, disgust, fear, and sadness count as negative states. If so, the module identifies, from the speech content of the previously collected conversations, the event associated with the negative state and sends it to the user's relatives or caregiver as the reason for the elderly person's mood swing. At the same time, it searches the user's emotion-event base for events whose associated emotion is happiness and which are associated with the event that currently caused the elderly person's negative emotion, and presents them to the user in forms such as voice or video, thereby achieving a consoling effect. For example, if the elderly person is sad because the child is not coming home for the New Year, the system automatically associates this with the event of the child working away and plays to the elderly person content such as the child's career success, guiding the elderly person to consider the matter from a positive angle and achieving comfort. In this way, when the user is in a bad mood, the user can be led to view the same or a related event from a positive angle, realizing an emotional comfort effect.
The above is merely an embodiment of the present invention; common knowledge such as well-known specific structures and characteristics is not described at length here. A person skilled in the art knows all the common technical knowledge of the technical field to which the invention belongs as of the filing date or priority date, can access all the prior art in the field, and has the ability to apply routine experimental means as of that date; under the enlightenment provided by this application, a person skilled in the art could combine their own abilities to improve and implement this solution, and some typical known structures or known methods should not become an obstacle to a person skilled in the art implementing this application. It should be pointed out that, for those skilled in the art, several modifications and improvements can be made without departing from the structure of the invention; these should also be considered within the protection scope of the present invention and will not affect the effect of implementing the invention or the practicability of the patent. The scope of protection claimed by this application shall be based on the content of the claims, and records in the description such as the specific embodiments may be used to interpret the content of the claims.
Claims (10)
1. An emotion recognition method based on nonlinear features, characterized by including the following steps:
a speech acquisition step: collecting the user's speech;
a feature extraction step: processing and analyzing the user's speech to extract matching features, the matching features including audio features, nonlinear features, and semantic features;
a model matching step: matching the features extracted in the feature extraction step against preset emotion models, and selecting the emotion model with the highest matching degree as the emotion recognition result.
2. The emotion recognition method based on nonlinear features according to claim 1, characterized in that the feature extraction step includes:
step 1: calculating nonlinear features from the collected speech;
step 2: dividing the collected speech into multiple segments;
step 3: calculating the audio features of each segment;
step 4: performing semantic recognition on the collected speech;
step 5: extracting semantic features from the recognized semantic content.
3. The emotion recognition method based on nonlinear features according to claim 2, characterized in that the semantic features include keyword features, and step 5 of the feature extraction step specifically includes:
a keyword extraction step: extracting, according to a preset keyword database, all keywords in the semantic content together with their frequency of occurrence.
4. The emotion recognition method based on nonlinear features according to claim 3, characterized in that each emotion model contains a weight for each matching feature, and the model matching step includes:
step 1: for each preset emotion model, calculating the score of each matching feature according to the weight value of that feature;
step 2: summing the scores of the matching features to obtain a matching degree score;
step 3: comparing the emotion models according to their matching degree scores, and choosing the emotion corresponding to the emotion model with the highest matching degree score as the emotion recognition result.
5. The emotion recognition method based on nonlinear features according to claim 4, characterized in that the audio features include pitch, energy, formants, zero-crossing rate, Teager energy operator, and Mel-frequency cepstral coefficients.
6. The emotion recognition method based on nonlinear features according to claim 5, characterized in that the nonlinear features include the Hurst exponent, curvature index, Shannon entropy, Lempel-Ziv complexity, mutual information, correlation dimension, and Lyapunov exponent.
7. it is a kind of used the Emotion identification method described in claim 1 based on nonlinear characteristic based on nonlinear characteristic
Emotion identification system, the system include:
Voice acquisition module, the voice spoken for acquiring user;
Characteristic extracting module, for extracting matching characteristic from collected voice;
Model fitting module filters out matching degree highest for being matched according to matching characteristic with preset mood model
Mood model as Emotion identification result;
Wherein characteristic extracting module includes:
Nonlinear feature extraction submodule, for extracting nonlinear characteristic from collected voice;
Audio feature extraction submodule, for extracting audio frequency characteristics from collected voice;
Semantic feature extraction submodule, for carrying out semantics recognition to collected voice and extracting semantic feature.
8. the Emotion identification system according to claim 7 based on nonlinear characteristic, it is characterised in that:The audio frequency characteristics
Extracting sub-module includes audio cutter unit and audio frequency characteristics computing unit, and the audio cutter unit is used for collected language
Sound is cut into multiple segments, and the audio frequency characteristics computing unit is for calculating each clip audio feature.
9. the Emotion identification system according to claim 8 based on nonlinear characteristic, it is characterised in that:The semantic feature
Extracting sub-module includes semantics recognition unit, keyword extracting unit and frequency record unit, and the voice recognition unit is used for
Semantics recognition is carried out to collected voice, the keyword extracting unit is used for according to preset keywords database from semantic content
Middle extraction keyword, the frequency record unit are used to record the number that each keyword occurs.
10. The emotion identification system based on nonlinear features according to claim 9, characterized in that: the model matching module comprises a model storage submodule, a matching degree computation submodule and a matching degree comparison submodule, the model storage submodule being used to store the emotion models, the matching degree computation submodule being used to compute a score for each matching feature according to an emotion model and to combine those feature scores into a matching degree score, and the matching degree comparison submodule being used to compare the matching degree scores of the emotion models, select the emotion model with the highest matching degree score, and output the emotion corresponding to that model as the recognition result.
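Claim 10 leaves the scoring formula open, so the sketch below assumes a weighted per-feature similarity against a stored model profile; the profile/weight structure and the feature name `pitch` are illustrative, not from the patent:

```python
def matching_degree(features, model):
    """Matching degree computation submodule: score each matching feature
    against the model's profile, then combine the per-feature scores into
    one matching degree score via the model's weights (assumed formula)."""
    per_feature = {
        name: 1.0 - abs(features[name] - target)
        for name, target in model["profile"].items()
    }
    return sum(model["weights"][name] * s for name, s in per_feature.items())

def best_match(features, stored_models):
    """Matching degree comparison submodule: compare all matching degree
    scores and return the emotion of the highest-scoring model."""
    scores = {emotion: matching_degree(features, m)
              for emotion, m in stored_models.items()}
    return max(scores, key=scores.get)


# The stored_models dict plays the role of the model storage submodule.
stored_models = {
    "happy": {"profile": {"pitch": 0.9}, "weights": {"pitch": 1.0}},
    "sad":   {"profile": {"pitch": 0.2}, "weights": {"pitch": 1.0}},
}
# best_match({"pitch": 0.8}, stored_models) == "happy"
```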
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810712624.4A CN108877840A (en) | 2018-06-29 | 2018-06-29 | Emotion identification method and system based on nonlinear characteristic |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108877840A true CN108877840A (en) | 2018-11-23 |
Family
ID=64296632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810712624.4A Pending CN108877840A (en) | 2018-06-29 | 2018-06-29 | Emotion identification method and system based on nonlinear characteristic |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108877840A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106503805A (en) * | 2016-11-14 | 2017-03-15 | 合肥工业大学 | A bimodal human-human dialogue sentiment analysis system and method based on machine learning |
Non-Patent Citations (1)
Title |
---|
Yao Hui: "Research on Nonlinear Features of Emotional Speech", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112037820A (en) * | 2019-05-16 | 2020-12-04 | 杭州海康威视数字技术股份有限公司 | Security alarm method, device, system and equipment |
CN112037820B (en) * | 2019-05-16 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | Security alarm method, device, system and equipment |
CN110693508A (en) * | 2019-09-02 | 2020-01-17 | 中国航天员科研训练中心 | Multi-channel cooperative psychophysiological active sensing method and service robot |
CN110781719A (en) * | 2019-09-02 | 2020-02-11 | 中国航天员科研训练中心 | Non-contact and contact cooperative mental state intelligent monitoring system |
CN110480656A (en) * | 2019-09-09 | 2019-11-22 | 国家康复辅具研究中心 | One kind is accompanied and attended to robot, accompany and attend to robot control method and device |
CN110480656B (en) * | 2019-09-09 | 2021-09-28 | 国家康复辅具研究中心 | Accompanying robot, accompanying robot control method and accompanying robot control device |
CN110808041A (en) * | 2019-09-24 | 2020-02-18 | 深圳市火乐科技发展有限公司 | Voice recognition method, intelligent projector and related product |
CN110808041B (en) * | 2019-09-24 | 2021-01-12 | 深圳市火乐科技发展有限公司 | Voice recognition method, intelligent projector and related product |
CN110751950A (en) * | 2019-10-25 | 2020-02-04 | 武汉森哲地球空间信息技术有限公司 | Police conversation voice recognition method and system based on big data |
CN111816213A (en) * | 2020-07-10 | 2020-10-23 | 深圳小辣椒科技有限责任公司 | Emotion analysis method and system based on voice recognition |
CN111986702A (en) * | 2020-07-31 | 2020-11-24 | 中国地质大学(武汉) | Speaker mental impedance phenomenon recognition method based on voice signal processing |
CN111986702B (en) * | 2020-07-31 | 2022-11-04 | 中国地质大学(武汉) | Speaker psychological impedance phenomenon identification method based on voice signal processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108877840A (en) | Emotion identification method and system based on nonlinear characteristic | |
Schuller et al. | Cross-corpus acoustic emotion recognition: Variances and strategies | |
CN112750465B (en) | Cloud language ability evaluation system and wearable recording terminal | |
Roark et al. | Spoken language derived measures for detecting mild cognitive impairment | |
Iliev et al. | Spoken emotion recognition through optimum-path forest classification using glottal features | |
Pao et al. | Mandarin emotional speech recognition based on SVM and NN | |
Aloufi et al. | Emotionless: Privacy-preserving speech analysis for voice assistants | |
CN103811009A (en) | Smart phone customer service system based on speech analysis | |
Kim et al. | Emotion recognition using physiological and speech signal in short-term observation | |
Levitan et al. | Combining Acoustic-Prosodic, Lexical, and Phonotactic Features for Automatic Deception Detection. | |
Xiao et al. | Recognition of emotions in speech by a hierarchical approach | |
Wu et al. | Climate and weather: Inspecting depression detection via emotion recognition | |
Tirronen et al. | Utilizing wav2vec in database-independent voice disorder detection | |
Mittal et al. | Study of changes in glottal vibration characteristics during laughter | |
Kelley et al. | Using acoustic distance and acoustic absement to quantify lexical competition | |
Alhinti et al. | Recognising emotions in dysarthric speech using typical speech data | |
Waghmare et al. | A comparative study of recognition technique used for development of automatic stuttered speech dysfluency Recognition system | |
Verma et al. | An Acoustic Analysis of Speech for Emotion Recognition using Deep Learning | |
Pao et al. | Recognition and analysis of emotion transition in mandarin speech signal | |
Tahon et al. | Laughter detection for on-line human-robot interaction | |
Kexin et al. | Research on Emergency Parking Instruction Recognition Based on Speech Recognition and Speech Emotion Recognition | |
Dumpala et al. | Analysis of the Effect of Speech-Laugh on Speaker Recognition System. | |
Tulics et al. | Statistical analysis of acoustical parameters in the voice of children with juvenile dysphonia | |
Patil et al. | A review on emotional speech recognition: resources, features, and classifiers | |
Stadelmann et al. | Unfolding speaker clustering potential: a biomimetic approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181123 |