CN105244042B - A kind of speech emotional interactive device and method based on finite-state automata - Google Patents
A kind of speech emotional interactive device and method based on finite-state automata
- Publication number: CN105244042B (application CN201510535485.9A)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption, not a legal conclusion)
Abstract
The invention discloses a speech emotion interaction device and method based on a finite-state automaton. The device includes: a voice acquisition module, which sets the recording source, sample rate, audio channel, and audio data format, writes the raw data to a file once setup is complete, and generates file A; a speech emotion recognition module, which performs speech emotion recognition on file A; and a speech emotion interaction module, which carries out affective interaction through an emotion carrier. The speech emotion interaction module comprises: a finite state machine model construction module, which uses a finite state machine to describe speech emotion states and their transitions; an affective-interaction state transition table module, which defines and determines the finite-state-automaton transition function; and a transition matrix module, which describes the state transition function of the finite state machine model with a state transition matrix. The invention further discloses the speech emotion interaction method based on a finite-state automaton that uses the device.
Description
Technical field
The present invention relates to a speech emotion interaction method for an Android client, and in particular to a speech emotion interaction device based on a finite-state automaton and a speech emotion interaction method based on a finite-state automaton.
Background technology
With the rapid growth of the mobile Internet and the continued spread of smartphones, human-computer interaction on smartphones is receiving ever more attention. People spend their days with computers and phones: human-to-human interaction is gradually decreasing while human-machine interaction keeps increasing, and users increasingly expect emotion to be part of that interaction. Affective interaction is therefore attracting growing attention.
Voice is an important medium of human communication, and speech technology in particular has won the favor of world-renowned companies: for example, the voice assistant Siri on Apple mobile devices (iPhone, iPad, and iPod), Google Now on Google wearables (such as Google Glass) and on Android smart devices equipped with Google services, and the Cortana personal digital assistant on Microsoft Windows Phone devices. These products have greatly expanded the opportunities for spoken human-machine interaction.
Research on speech emotion interaction is significant for making computers more intelligent and personalized, for developing intelligent novel human-machine interaction environments, and for advancing the discipline of machine learning.
Speech emotion interaction technology is still evolving and maturing, and it already has a marked impact on people's lives, study, and work. In personal life, it can record an individual's mood curve and identify the times best suited to work and study, improving efficiency. In education, it can be applied to children's products to teach children how to talk, even how to speak naturally and expressively. In entertainment, it can build more personable styles and more lifelike game scenes, giving the user a fuller sensory experience. In industry, smart appliances, cars, and the like can understand our emotions and respond to them, serving our work and life well. In medicine, it can detect the emotional changes of patients with certain psychological disorders (such as depression and anxiety) and of elderly people living alone, and provide corresponding help. Speech emotion interaction is thus an important research direction within voice interaction, and with the tide of the mobile Internet the speech emotion interaction ecosystem is poised to enter a brand-new stage.
Within speech emotion interaction, speech emotion recognition is the foundation and affective interaction is the key. Research on speech emotion recognition has made real progress, but most researchers focus on topics such as speech feature extraction and recognition model construction, while speech emotion interaction methods are rarely addressed. Most current social and entertainment products interact through text; some add voice input, but only as recorded voice messages, and they cannot judge the emotion of the other party, let alone interact with it. How to build affective interaction models for different application scenarios and realize speech emotion interaction is therefore a pressing problem in this field.
Invention content
In view of the above problems, the present invention proposes a speech emotion interaction device based on a finite automaton model and a speech emotion interaction method based on the same model. The invention targets speech emotion interaction and can better reflect the interaction state of speech emotion.
The invention is achieved through the following technical solution. A speech emotion interaction device based on a finite-state automaton comprises: a voice acquisition module, which sets four basic parameters (recording source, sample rate, audio channel, and audio data format), writes the raw data to the original file once the parameters are set, and generates file A; a speech emotion recognition module, which performs speech emotion recognition on file A and obtains the emotion type; and a speech emotion interaction module, which carries out affective interaction through an emotion carrier.

The speech emotion interaction module comprises: a finite state machine model construction module, which uses a finite state machine to describe speech emotion states and their transitions; an affective-interaction state transition table module, which defines and determines the finite-state-automaton transition function; and a transition matrix module, which, within the finite state machine model, describes the state transition function with a state transition matrix.
As a further improvement, in the construction module a deterministic finite state machine M is a five-tuple, as shown in formula (1): M = (Q, Σ, δ, q0, F) (1);

where Q is the finite set of states, Q = {q1, q2, …, qn}; Σ is the set of all events the system can receive, Σ = {σ1, σ2, …, σn}; δ is the state transition function, δ: Q × Σ → Q; q0 is the initial state, q0 ∈ Q; and F is the set of final states, F ⊆ Q.
As a further improvement, in the table-establishing module the affective interaction proceeds as follows. Let the state of the machine at time t be qt and the fed-back condition state be σt; in discrete time the state of the affective interaction model at the next moment is qt+1, so formula (2) holds: qt+1 = δ(qt, σt) (2),

that is, the next state of the affective interaction model depends on its current state and the state it receives.
As a further improvement, in the transition matrix module the state transition function of the finite state machine model is described with a state transition matrix. The matrix describing the emotion state transitions has the form of formula (3): F = (fij)n×n (3),

where 0 ≤ fij ≤ 1 is the probability of moving from state qi to state qj; the values fij are estimated statistically from the samples of an emotion corpus, fij = p(qj | qi, σi), i = 1, 2, …, n; j = 1, 2, …, n.
The present invention also provides a speech emotion interaction method based on a finite-state automaton, comprising the following steps: (1) set four basic parameters (recording source, sample rate, audio channel, and audio data format), write the raw data to the original file after the parameters are set, and generate file A; (2) perform speech emotion recognition on file A and obtain the emotion type; (3) carry out affective interaction through an emotion carrier.

Step (3) comprises the following steps: (3.1) describe speech emotion states and their transitions using a finite state machine; (3.2) define and determine the finite-state-automaton transition function; (3.3) within the finite state machine model, describe the state transition function with a state transition matrix.
As a further improvement, in step (3.1) a deterministic finite state machine M is a five-tuple, as shown in formula (1): M = (Q, Σ, δ, q0, F) (1);

where Q is the finite set of states, Q = {q1, q2, …, qn}; Σ is the set of all events the system can receive, Σ = {σ1, σ2, …, σn}; δ is the state transition function, δ: Q × Σ → Q; q0 is the initial state, q0 ∈ Q; and F is the set of final states, F ⊆ Q.
As a further improvement, in step (3.2) the affective interaction proceeds as follows. Let the state of the machine at time t be qt and the fed-back condition state be σt; in discrete time the state of the affective interaction model at the next moment is qt+1, so formula (2) holds: qt+1 = δ(qt, σt) (2),

that is, the next state of the affective interaction model depends on its current state and the state it receives.
As a further improvement, in step (3.3) the state transition function of the finite state machine model is described with a state transition matrix. The matrix describing the emotion state transitions has the form of formula (3): F = (fij)n×n (3),

where 0 ≤ fij ≤ 1 is the probability of moving from state qi to state qj; the values fij are estimated statistically from the samples of an emotion corpus, fij = p(qj | qi, σi), i = 1, 2, …, n; j = 1, 2, …, n.
The present invention proposes a finite state machine model of affective interaction, establishes an affective interaction model, and applies it to human-machine speech emotion interaction. Compared with the prior art, the invention has the following beneficial effects: it proposes a finite emotion-state automaton model applied to human-machine speech emotion interaction; the model can be used in smart appliances, assisted medical treatment, and similar fields, and can provide people with humanized, emotion-aware products and services.
Description of the drawings
Fig. 1 is a structural block diagram of the speech emotion interaction device based on a finite-state automaton provided by a preferred embodiment of the present invention.
Fig. 2 is the flow chart of the voice acquisition module in Fig. 1.
Fig. 3 is the speech emotion recognition block diagram of the speech emotion recognition module in Fig. 1.
Fig. 4 is the affective interaction model diagram established by the affective interaction module in Fig. 1.
Fig. 5 is the state transition graph of affective interaction obtained by the affective interaction module in Fig. 1.
Specific implementation mode
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is further elaborated below in conjunction with embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it.
The invention mainly comprises three aspects: voice acquisition, speech emotion recognition, and speech emotion interaction. Speech emotion interaction is the key technology of the invention. As shown in Fig. 1, the speech emotion interaction device based on a finite-state automaton includes a voice acquisition module, a speech emotion recognition module, and a speech emotion interaction module. The specific implementation of each module is as follows.
(1) voice acquisition module
The flow chart of the voice acquisition module is shown in Fig. 2. The implementation is as follows. Four basic parameters must be set in the acquisition step. The first is the recording source: an Android phone mainly offers four sources (the microphone, the call, the call uplink channel, and the call downlink channel), while an Android tablet offers only the microphone source; this method defaults to the microphone. The second is the sample rate: 44100 Hz is currently the rate that works on all devices, while some other rates such as 22050 Hz, 16000 Hz, and 11025 Hz work only on certain devices; this method defaults to 44100 Hz. The third is the audio channel: there are mono and stereo, and since mono works on all Android devices this method defaults to mono. The fourth is the audio data format: the sampled data is PCM-encoded, PCM converting a continuously varying analog signal into a digital code through the three steps of sampling, quantization, and encoding, with a sample size of 16 bit or 8 bit; this method uses 16 bit. After the parameters are set, the raw data is written to the file, generating file A.
(2) speech emotion recognition module
As shown in Fig. 3, the present invention improves the speech emotion recognition result through multi-feature fusion. Fourier parameter features, wavelet-packet coefficient features, and mel-frequency cepstral features are extracted first, and different classifier models are built. For each extracted feature parameter, feature selection finds its optimal feature subset and an emotion recognition model is built; a corresponding strategy then constructs a fused multi-feature, multi-model recognizer. The speech signal acquired in (1) is passed through the speech emotion recognition module and the emotion type is obtained.
(3) affective interaction module
Emotion is the basis of cognition: it is the most basic, rapid intelligent response to environmental stimuli. Emotion interacts through emotion carriers (people, computers, and so on). From an engineering perspective, an affective interaction model is established; when an emotion carrier engages in affective interaction, the model can reproduce the dynamic change of emotion. This module comprises three parts: the finite state machine model construction module, the affective-interaction state transition table module, and the transition matrix module.
1. Finite state machine model construction module

This module constructs the finite state machine affective interaction model, using a finite state machine to describe speech emotion states and their transitions. A deterministic finite state machine M is a five-tuple, as shown in formula (1):

M = (Q, Σ, δ, q0, F) (1)

where Q is the finite set of states, Q = {q1, q2, …, qn}; Σ is the set of all events the system can receive, Σ = {σ1, σ2, …, σn}; δ is the state transition function, δ: Q × Σ → Q; q0 is the initial state, q0 ∈ Q; and F is the set of final states, F ⊆ Q.
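The five-tuple of formula (1) and the transition step of formula (2) can be sketched directly. This is a minimal illustration; the class name and the toy transition function (the individual simply adopts the fed-back emotion) are assumptions, not the patent's statistically determined δ.

```python
# A deterministic finite state machine M = (Q, Sigma, delta, q0, F),
# as in formula (1).
class FiniteStateMachine:
    def __init__(self, states, events, delta, q0, finals):
        assert q0 in states and finals <= states
        self.states, self.events = states, events
        self.delta = delta      # dict: (state, event) -> next state
        self.state = q0         # current state q_t
        self.finals = finals    # final states F, a subset of Q

    def step(self, event):
        """q_{t+1} = delta(q_t, sigma_t), formula (2)."""
        self.state = self.delta[(self.state, event)]
        return self.state

Q = {"H", "S", "A", "N"}        # happy, sad, angry, calm
Sigma = {"H", "S", "A", "N"}    # machine-feedback emotions
# Toy transition function: the individual adopts the fed-back emotion.
delta = {(q, s): s for q in Q for s in Sigma}
m = FiniteStateMachine(Q, Sigma, delta, q0="N", finals=Q)
```

A real δ would come from the emotion-corpus statistics described in the transition matrix module below; here any state is accepted as final for simplicity.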
2. Affective-interaction state transition table module

This module establishes the affective interaction model and covers two aspects: A. the definition of the finite-state-automaton transition function, and B. the determination of the finite-state-automaton transition function.

A. Definition of the finite-state-automaton transition function

Assume the emotion state set contains four emotion states. The affective interaction built according to the finite state machine model is shown in Fig. 4, where q0 is the initial emotion state of the individual and σi (σi ∈ Σ) is the input state, i.e. the event the individual receives, which in this system is the emotion state fed back by the machine. The meaning is that the individual's emotion state changes according to the current emotion state and the input state.
The affective interaction proceeds as follows. Let the state of the machine at time t be qt and the fed-back condition state be σt; in discrete time the state of the affective interaction model at the next moment is qt+1, so:

qt+1 = δ(qt, σt) (2)

that is, the next state of the affective interaction model depends on its current state and the state it receives.
B. Determination of the finite-state-automaton transition function

The speech emotion recognition system covers four emotion states, so taking the four emotions of the state set as an example, the emotion state set is defined as {H, S, A, N}, where H is happy, S is sad, A is angry, and N is calm. The state transition table is shown in Table 1: each row is a current emotion state, each column is the emotion fed back by the machine, and each table entry is the state of the machine at time t+1.

Table 1: State transition table

Because emotion carriers differ individually, the results of affective interaction also differ; in the present invention the transitions are obtained statistically from the samples of an emotion corpus, and the analysis result is shown in Table 1. For example, when the initial emotion state is angry and the machine feeds back happiness, the resulting emotion varies with the carrier's personality: it may become happy, angry, calm, or sad, and the other emotion states behave similarly. For the cases considered by this system, the state transition table is set as shown in Table 1.
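A state transition table of the kind Table 1 describes can be held as a nested mapping. Table 1's actual entries are not reproduced in the text, so every value below is a hypothetical placeholder; only the shape (current state × machine feedback → state at time t+1) comes from the description.

```python
# Hypothetical state transition table: rows are current emotion states,
# columns are the machine's fed-back emotion, entries are q_{t+1}.
# (Table 1's real entries are not given in the text.)
TRANSITION_TABLE = {
    "H": {"H": "H", "S": "N", "A": "N", "N": "H"},
    "S": {"H": "N", "S": "S", "A": "S", "N": "S"},
    "A": {"H": "N", "S": "A", "A": "A", "N": "N"},
    "N": {"H": "H", "S": "N", "A": "N", "N": "N"},
}

def next_state(current, feedback):
    """Look up q_{t+1} from the current state and the machine feedback."""
    return TRANSITION_TABLE[current][feedback]
```

This deterministic lookup realizes formula (2) for a concrete (if invented) table; the patent's actual entries would be estimated from the emotion corpus.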
3. Transition matrix module

The transition matrix module determines the state transition matrix. In the finite state machine model the state transition function can be described with a state transition matrix. The matrix F describing the emotion state transitions has the form of formula (3):

F = (fij)n×n (3)

where 0 ≤ fij ≤ 1 is the probability of moving from state qi to state qj,

fij = p(qj | qi, σi), i = 1, 2, …, n; j = 1, 2, …, n.

In this system the values fij are estimated statistically from the samples of the emotion corpus; the values are shown in Table 2.
Table 2: State transition matrix

| | H | S | A | N |
|---|---|---|---|---|
| H | 0.40816328 | 0.020408163 | 0.008746356 | 0.5626822 |
| S | 0.045698926 | 0.48924732 | 0.28225806 | 0.1827957 |
| A | 0.016587678 | 0.4028436 | 0.45734596 | 0.123222746 |
| N | 0.1590909 | 0.19444445 | 0.045454547 | 0.6010101 |
Once the state transition matrix is determined, the state transition graph of the emotion state machine, i.e. of the affective interaction, is obtained, as shown in Fig. 5. According to the affective interaction model, when a speech emotion state arrives as input, the machine first identifies the emotion information, then selects a different emotion feedback and makes a different affective interaction: for example, if a sad emotion is detected, it may randomly pick a joke to tell the user, play a funny animation, or use a similar interaction.
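The feedback-selection step can be sketched as a lookup of response pools per recognized emotion. The patent only names the sadness example (tell a joke, play a funny animation); the other pools and all identifiers below are hypothetical illustrations.

```python
import random

# Hypothetical response pools per detected emotion. Only the sad ("S")
# entries come from the description; the rest are invented examples.
RESPONSES = {
    "S": ["tell_joke", "play_funny_animation"],
    "A": ["play_calming_music"],
    "H": ["share_upbeat_reply"],
    "N": ["continue_dialogue"],
}

def choose_interaction(detected_emotion, rng=random):
    """Randomly pick a feedback action for the recognized emotion;
    fall back to continuing the dialogue for unknown labels."""
    pool = RESPONSES.get(detected_emotion, ["continue_dialogue"])
    return rng.choice(pool)
```

In the device described above, this selection would run after the recognition module returns the emotion type and before the UI presents the feedback.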
The present invention is a speech emotion interaction method based on an Android client. The method first checks the running environment and state parameters of the Android client, such as whether an SD card is present and whether a data network or WiFi is available; if the conditions are met, emotion recognition can proceed. The Android client then acquires the raw speech and, through sampling, quantization, and encoding, forms a standard audio file; it transmits the audio file over the network to a server for the heavier data analysis, receives the analysis result returned by the server, processes it accordingly, and finally obtains the sentiment analysis result, which is fed back to the user through the UI.
The above is a detailed description of the present invention with reference to the specific drawings, but the concrete implementation of the invention is not limited to these explanations. Those skilled in the art may make simple substitutions and changes without departing from the inventive concept, and all such variants shall be regarded as falling within the scope of protection determined by the submitted claims.
Claims (8)
1. A speech emotion interaction device based on a finite-state automaton, comprising:
a voice acquisition module, configured to set four basic parameters (recording source, sample rate, audio channel, and audio data format), write the raw data to the original file after the parameters are set, and generate file A;
a speech emotion recognition module, configured to perform speech emotion recognition on file A and obtain the emotion type;
a speech emotion interaction module, configured to carry out affective interaction through an emotion carrier;
characterized in that the speech emotion interaction module comprises:
a finite state machine model construction module, configured to describe speech emotion states and their transitions using a finite state machine;
an affective-interaction state transition table module, configured to define and determine the finite-state-automaton transition function;
a transition matrix module, configured to describe, within the finite state machine model, the state transition function with a state transition matrix.
2. The speech emotion interaction device based on a finite-state automaton according to claim 1, characterized in that, for the construction module, a deterministic finite state machine M is a five-tuple, as shown in formula (1):
M = (Q, Σ, δ, q0, F) (1);
where Q is the finite set of states, Q = {q1, q2, …, qn}; Σ is the set of all events the system can receive, Σ = {σ1, σ2, …, σn}; δ is the state transition function, δ: Q × Σ → Q; q0 is the initial state, q0 ∈ Q; and F is the set of final states, F ⊆ Q.
3. The speech emotion interaction device based on a finite-state automaton according to claim 2, characterized in that, for the table-establishing module, the affective interaction proceeds as follows: let the state of the machine at time t be qt and the fed-back condition state be σt; in discrete time the state of the affective interaction model at the next moment is qt+1, so formula (2) holds: qt+1 = δ(qt, σt) (2),
that is, the next state of the affective interaction model depends on its current state and the state it receives.
4. The speech emotion interaction device based on a finite-state automaton according to claim 3, characterized in that, for the transition matrix module, the state transition function of the finite state machine model is described with a state transition matrix; the matrix describing the emotion state transitions has the form of formula (3): F = (fij)n×n (3),
where 0 ≤ fij ≤ 1 is the probability of moving from state qi to state qj; the values fij are estimated statistically from the samples of an emotion corpus, fij = p(qj | qi, σi), i = 1, 2, …, n; j = 1, 2, …, n.
5. A speech emotion interaction method based on a finite-state automaton, comprising the following steps:
(1) setting four basic parameters (recording source, sample rate, audio channel, and audio data format), writing the raw data to the original file after the parameters are set, and generating file A;
(2) performing speech emotion recognition on file A and obtaining the emotion type;
(3) carrying out affective interaction through an emotion carrier;
characterized in that step (3) comprises the following steps:
(3.1) describing speech emotion states and their transitions using a finite state machine;
(3.2) defining and determining the finite-state-automaton transition function;
(3.3) describing, within the finite state machine model, the state transition function with a state transition matrix.
6. The speech emotion interaction method based on a finite-state automaton according to claim 5, characterized in that, in step (3.1), a deterministic finite state machine M is a five-tuple, as shown in formula (1): M = (Q, Σ, δ, q0, F) (1);
where Q is the finite set of states, Q = {q1, q2, …, qn}; Σ is the set of all events the system can receive, Σ = {σ1, σ2, …, σn}; δ is the state transition function, δ: Q × Σ → Q; q0 is the initial state, q0 ∈ Q; and F is the set of final states, F ⊆ Q.
7. The speech emotion interaction method based on a finite-state automaton according to claim 6, characterized in that, in step (3.2), the affective interaction proceeds as follows: let the state of the machine at time t be qt and the fed-back condition state be σt; in discrete time the state of the affective interaction model at the next moment is qt+1, so formula (2) holds: qt+1 = δ(qt, σt) (2),
that is, the next state of the affective interaction model depends on its current state and the state it receives.
8. The speech emotion interaction method based on a finite-state automaton according to claim 7, characterized in that, in step (3.3), the state transition function of the finite state machine model is described with a state transition matrix; the matrix describing the emotion state transitions has the form of formula (3): F = (fij)n×n (3),
where 0 ≤ fij ≤ 1 is the probability of moving from state qi to state qj; the values fij are estimated statistically from the samples of an emotion corpus, fij = p(qj | qi, σi), i = 1, 2, …, n; j = 1, 2, …, n.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510535485.9A CN105244042B (en) | 2015-08-26 | 2015-08-26 | A kind of speech emotional interactive device and method based on finite-state automata |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105244042A CN105244042A (en) | 2016-01-13 |
CN105244042B true CN105244042B (en) | 2018-11-13 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106991172B (en) * | 2017-04-05 | 2020-04-28 | 安徽建筑大学 | Method for establishing multi-mode emotion interaction database |
CN106992000B (en) * | 2017-04-07 | 2021-02-09 | 安徽建筑大学 | Prediction-based multi-feature fusion old people voice emotion recognition method |
CN107358967A (en) * | 2017-06-08 | 2017-11-17 | 广东科学技术职业学院 | A kind of the elderly's speech-emotion recognition method based on WFST |
CN108875025B (en) * | 2018-06-21 | 2021-12-03 | 江苏好集网络科技集团有限公司 | Smart home emotion interaction system |
CN108984522B (en) * | 2018-06-21 | 2022-12-23 | 北京亿家老小科技有限公司 | Intelligent nursing system |
CN109015666A (en) * | 2018-06-21 | 2018-12-18 | 肖鑫茹 | A kind of intelligent robot |
CN110147432B (en) * | 2019-05-07 | 2023-04-07 | 大连理工大学 | Decision search engine implementation method based on finite state automaton |
CN111026467B (en) * | 2019-12-06 | 2022-12-20 | 合肥科大智能机器人技术有限公司 | Control method of finite-state machine and finite-state machine |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101064104A (en) * | 2006-04-24 | 2007-10-31 | 中国科学院自动化研究所 | Emotion voice creating method based on voice conversion |
CN101618280A (en) * | 2009-06-30 | 2010-01-06 | 哈尔滨工业大学 | Humanoid-head robot device with human-computer interaction function and behavior control method thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8214214B2 (en) * | 2004-12-03 | 2012-07-03 | Phoenix Solutions, Inc. | Emotion detection device and method for use in distributed systems |
- 2015-08-26: CN application CN201510535485.9A filed, granted as CN105244042B, status not active (Expired - Fee Related)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101064104A (en) * | 2006-04-24 | 2007-10-31 | 中国科学院自动化研究所 | Emotion voice creating method based on voice conversion |
CN101618280A (en) * | 2009-06-30 | 2010-01-06 | 哈尔滨工业大学 | Humanoid-head robot device with human-computer interaction function and behavior control method thereof |
Non-Patent Citations (2)
Title |
---|
Emotional speech prosody conversion based on the PAD three-dimensional emotion model (基于PAD三维情绪模型的情感语音韵律转换); Lu Xiaoyong et al.; Computer Engineering and Applications (《计算机工程与应用》); May 2013; Vol. 49, No. 5; pp. 230-235 * |
Prosody conversion method for emotional speech conversion (面向情感语音转换的韵律转换方法); Li Xian et al.; Acta Acustica (《声学学报》); July 2014; Vol. 39, No. 4; pp. 509-516 * |
Also Published As
Publication number | Publication date |
---|---|
CN105244042A (en) | 2016-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105244042B (en) | A kind of speech emotional interactive device and method based on finite-state automata | |
US11475897B2 (en) | Method and apparatus for response using voice matching user category | |
CN102723078B (en) | Emotion speech recognition method based on natural language comprehension | |
CN103137129B (en) | Audio recognition method and electronic installation | |
CN102737629B (en) | Embedded type speech emotion recognition method and device | |
CN108000526A (en) | Dialogue exchange method and system for intelligent robot | |
CN107870977A (en) | Chat robots output is formed based on User Status | |
CN106294854A (en) | A kind of man-machine interaction method for intelligent robot and device | |
CN107480161A (en) | The intelligent automation assistant probed into for media | |
CN106910514A (en) | Method of speech processing and system | |
WO2020253128A1 (en) | Voice recognition-based communication service method, apparatus, computer device, and storage medium | |
WO2022178969A1 (en) | Voice conversation data processing method and apparatus, and computer device and storage medium | |
CN106847278A (en) | System of selection and its mobile terminal apparatus and information system based on speech recognition | |
CN102298694A (en) | Man-machine interaction identification system applied to remote information service | |
CN103729193A (en) | Method and device for man-machine interaction | |
CN107085717A (en) | A kind of family's monitoring method, service end and computer-readable recording medium | |
CN101115088A (en) | Mobile phone dedicated for deaf-mutes | |
CN109986569A (en) | Chat robots with roleization He characterization | |
CN107038241A (en) | Intelligent dialogue device and method with scenario analysis function | |
CN107808191A (en) | The output intent and system of the multi-modal interaction of visual human | |
CN109346083A (en) | A kind of intelligent sound exchange method and device, relevant device and storage medium | |
CN109215679A (en) | Dialogue method and device based on user emotion | |
CN108038243A (en) | Music recommends method, apparatus, storage medium and electronic equipment | |
CN109671435A (en) | Method and apparatus for waking up smart machine | |
CN104702759A (en) | Address list setting method and address list setting device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20181113 |