CN110085262A - Speech emotion interaction method, computer device, and computer-readable storage medium - Google Patents

Speech emotion interaction method, computer device, and computer-readable storage medium

Info

Publication number
CN110085262A
CN110085262A (application CN201810078883.6A)
Authority
CN
China
Prior art keywords
emotion
frame
speech
audio
emotion recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810078883.6A
Other languages
Chinese (zh)
Inventor
王慧
余世经
朱频频
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xiaoi Robot Technology Co Ltd
Shanghai Zhizhen Intelligent Network Technology Co Ltd
Original Assignee
Shanghai Zhizhen Intelligent Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhizhen Intelligent Network Technology Co Ltd filed Critical Shanghai Zhizhen Intelligent Network Technology Co Ltd
Priority to CN201810078883.6A priority Critical patent/CN110085262A/en
Publication of CN110085262A publication Critical patent/CN110085262A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of the invention provide a speech emotion interaction method, a computer device, and a computer-readable storage medium, addressing the problem that existing voice interaction approaches can neither analyze the deeper intent behind a user message nor provide a more humanized interactive experience. The method comprises: determining an emotion recognition result from the audio data and the text content of a user speech message; performing intent analysis on the text content of the user speech message to obtain corresponding basic intent information; and determining a corresponding interaction instruction according to the emotion recognition result and the basic intent information. Obtaining the emotion recognition result comprises: extracting an audio feature vector of the user speech message; matching the audio feature vector against a plurality of emotion feature models; and taking the emotion category corresponding to the matched emotion feature model as the emotion category of the user speech message.

Description

Speech emotion interaction method, computer device, and computer-readable storage medium
Technical field
The present invention relates to the technical field of intelligent interaction, and in particular to a speech emotion interaction method, a computer device, and a computer-readable storage medium.
Background art
With the continuous development of artificial intelligence technology and the rising expectations people place on interactive experiences, intelligent interaction is gradually replacing some traditional human-computer interaction modes and has become a research hotspot. However, existing intelligent interaction methods can only roughly analyze the semantic content of a user message by converting speech to text and performing semantic recognition; they cannot identify the user's current emotional state, and therefore can neither infer the deeper emotional needs actually expressed by the user message nor provide a more humanized interactive experience. For example, consider a user who is rushing to catch a flight and is anxious, versus a user who has just begun planning a trip and is calm: when both ask about flight times, the replies they expect are naturally different. Under existing semantics-based intelligent interaction, however, both users receive the same reply, for example a plain announcement of the flight time information.
Summary of the invention
In view of this, embodiments of the present invention provide a speech emotion interaction method, a computer device, and a computer-readable storage medium, which solve the problem that intelligent interaction methods in the prior art cannot analyze the deeper intent of a user message and cannot provide a more humanized interactive experience.
A speech emotion interaction method provided by one embodiment of the invention includes:
obtaining an audio emotion recognition result according to the audio data of the user speech message, and obtaining a text emotion recognition result according to the text content of the user speech message;
performing intent analysis according to the text content of the user speech message to obtain corresponding basic intent information; and
determining a corresponding interaction instruction according to the emotion recognition result and the basic intent information;
wherein obtaining the emotion recognition result according to the audio data of the user speech message includes:
extracting an audio feature vector of the user speech message, where the user speech message corresponds to one segment of speech in the audio stream to be recognized, and the audio feature vector includes one or more of the following audio features: an energy feature, a voiced-frame-count feature, a fundamental frequency feature, a formant feature, a harmonic-to-noise-ratio feature, and a Mel-frequency cepstral coefficient (MFCC) feature;
matching the audio feature vector of the user speech message against a plurality of emotion feature models, where each of the emotion feature models corresponds to one of a plurality of emotion categories; and
taking the emotion category corresponding to the matched emotion feature model as the emotion category of the user speech message.
A computer device provided by one embodiment of the invention includes: a memory, a processor, and a computer program stored on the memory and executed by the processor, where the processor, when executing the computer program, implements the steps of the method described above.
A computer-readable storage medium provided by one embodiment of the invention has a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the method described above.
The speech emotion interaction method, computer device, and computer-readable storage medium provided by embodiments of the present invention combine, on the basis of understanding the user's basic intent information, an emotion recognition result obtained from both the audio data and the text content of the user speech message, and further derive an emotion-aware interaction instruction from the basic intent information and the emotion recognition result. This solves the problem that intelligent interaction methods in the prior art can neither analyze the deeper intent of a user message nor provide a more humanized interactive experience.
Brief description of the drawings
Fig. 1 is a flow diagram of a speech emotion interaction method provided by one embodiment of the invention.
Fig. 2 is a flow diagram of obtaining the audio emotion recognition result from the audio data of the user speech message in the speech emotion interaction method provided by one embodiment of the invention.
Fig. 3 is a flow diagram of establishing emotion feature models in the speech emotion interaction method provided by one embodiment of the invention.
Fig. 4 is a flow diagram of extracting the user speech message in the speech emotion interaction method provided by one embodiment of the invention.
Fig. 5 is a flow diagram of determining the speech start frame and the speech end frame in the speech emotion interaction method provided by one embodiment of the invention.
Fig. 6 is a flow diagram of detecting voiced or non-voiced frames in the speech emotion interaction method provided by one embodiment of the invention.
Fig. 7 is a flow diagram of obtaining basic intent information from the user speech message in the speech emotion interaction method provided by one embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of, rather than all of, the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flow diagram of a speech emotion interaction method provided by one embodiment of the invention. As shown in Fig. 1, the speech emotion interaction method includes the following steps:
Step 101: obtain an audio emotion recognition result according to the audio data of the user speech message, obtain a text emotion recognition result according to the text content of the user speech message, and determine the emotion recognition result according to the audio emotion recognition result and the text emotion recognition result.
For example, in the customer service interaction scenario of a call center system, the user may be the customer or may be the service side; as another example, in an intelligent robot interaction scenario, the user speech message may include information entered by the user through the voice input module of the intelligent robot.
Because the audio data of user speech messages in different emotional states contains different audio features, an audio emotion recognition result can be obtained from the audio data of the user speech message, and the emotion recognition result can then be determined from that audio emotion recognition result.
The emotion recognition result obtained for the user message is subsequently combined with the basic intent information to infer the user's emotion intent, or to directly derive an emotion-aware interaction instruction from the basic intent information and the emotion recognition result.
In an embodiment of the present invention, the audio emotion recognition result and the text emotion recognition result can be represented in several ways. In one embodiment, emotion recognition results are represented by discrete emotion categories, in which case the audio emotion recognition result and the text emotion recognition result each include one or more of a plurality of emotion categories. For example, in a customer service interaction scenario, the plurality of emotion categories may include: a satisfied category, a calm category, and an irritated category, corresponding to the emotional states a user is likely to exhibit; alternatively, the plurality of emotion categories may include: a satisfied category, a calm category, an irritated category, and an angry category, corresponding to the emotional states a customer service agent is likely to exhibit. It should be understood that the type and number of emotion categories can be adjusted according to the actual application scenario, and the present invention places no strict limitation on them. In a further embodiment, each emotion category may also include multiple emotion intensity levels. Specifically, emotion category and emotion intensity level can be regarded as two parameters, which may be independent of each other (for example, every emotion category has N corresponding intensity levels, such as slight, moderate, and severe), or may have a preset correspondence (for example, the "irritated" category includes three intensity levels, slight, moderate, and severe, while the "satisfied" category includes only two, moderate and severe). In the latter case the emotion intensity level can be regarded as an attribute of the emotion category: once an emotion category is determined by the emotion recognition process, its emotion intensity level is determined as well.
In another embodiment of the present invention, emotion recognition results can instead be represented by a non-discrete dimensional emotion model. In that case the audio emotion recognition result and the text emotion recognition result each correspond to a coordinate point in a multidimensional emotion space, where each dimension corresponds to a psychologically defined emotional factor. For example, the PAD (Pleasure-Arousal-Dominance) three-dimensional emotion model can be used. This model holds that emotion has three dimensions, pleasure, arousal, and dominance, and every emotion can be characterized by the emotional factors corresponding to these three dimensions. P represents pleasure, indicating the positive or negative character of the individual's emotional state; A represents arousal, indicating the individual's level of neurophysiological activation; D represents dominance, indicating the individual's state of control over the situation and other people.
It should be understood that other representations can also be used for the audio emotion recognition result and the text emotion recognition result, and the present invention does not limit the specific representation.
In an embodiment of the present invention, where the audio emotion recognition result and the text emotion recognition result each correspond to a coordinate point in the multidimensional emotion space, the coordinate values of the two points can be combined by a weighted average, and the resulting coordinate point is taken as the emotion recognition result. For example, with the PAD three-dimensional emotion model, if the audio emotion recognition result is characterized as (p1, a1, d1) and the text emotion recognition result as (p2, a2, d2), the final emotion recognition result may be characterized as ((p1+p2)/2, (a1+1.3*a2)/2, (d1+0.8*d2)/2), where 1.3 and 0.8 are weight coefficients. A non-discrete dimensional emotion model makes it easier to compute the final emotion recognition result in a quantitative way. It should be understood, however, that the combination is not limited to the weighted average above; the present invention does not limit the specific way the emotion recognition result is determined when the audio and text emotion recognition results each correspond to a coordinate point in the multidimensional emotion space.
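A minimal sketch of the weighted-average fusion described above, assuming PAD coordinates are already available as 3-tuples; the per-dimension weights (1.3 and 0.8 on the text result) follow the example in the text, but the function and its names are illustrative, not part of the patent:

```python
from typing import Tuple

PAD = Tuple[float, float, float]  # (pleasure, arousal, dominance)

def fuse_pad(audio: PAD, text: PAD,
             text_weights: PAD = (1.0, 1.3, 0.8)) -> PAD:
    """Average audio and text PAD coordinates, weighting the text result per dimension."""
    return tuple((a + w * t) / 2.0 for a, t, w in zip(audio, text, text_weights))

# Example from the text: audio (p1, a1, d1), text (p2, a2, d2)
print(fuse_pad((0.4, 0.2, -0.1), (0.6, 0.5, 0.0)))
```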
The method for determining the emotion recognition result from the audio emotion recognition result and the text emotion recognition result may include the following steps:
Step 201: if the audio emotion recognition result and the text emotion recognition result include the same emotion category, take that shared emotion category as the emotion recognition result.
Step 202: if the audio emotion recognition result and the text emotion recognition result do not include the same emotion category, take the audio emotion recognition result and the text emotion recognition result together as the emotion recognition result.
It should be understood that although step 202 specifies that both results are taken together as the emotion recognition result when they share no emotion category, other embodiments of the invention may adopt a more conservative interaction strategy, for example directly generating an error message or outputting no emotion recognition result, so as not to mislead the interaction process. The present invention places no strict limitation on how the case where the audio and text emotion recognition results share no emotion category is handled.
Of course, the method for determining the emotion recognition result from the audio emotion recognition result and the text emotion recognition result is not limited to the above.
Step 102: perform intent analysis according to the text content of the user speech message to obtain corresponding basic intent information.
The basic intent information corresponds to the intent the user message reflects directly, but it cannot by itself reflect the user's true emotional need in the current state; the deeper intent and emotional need actually expressed by the user message must therefore be determined jointly with the emotion recognition result. For example, for the anxious user rushing to catch a flight and the calm user who has just begun planning a trip, when both user speech messages ask about flight information, the obtained basic intent information is the same, namely a flight information query, yet the emotional needs of the two are obviously different.
When the user message includes a user speech message, the basic intent information can be obtained by performing intent analysis on the text content of the user speech message; this basic intent information corresponds to the intent reflected at the semantic level by the text content of the user speech message and carries no emotional component.
In an embodiment of the present invention, in order to further improve the accuracy of the obtained basic intent information, the intent analysis may also be performed on the current user speech message in combination with previous user speech messages and/or subsequent user speech messages, so as to obtain the corresponding basic intent information. For example, the intent of the current user speech message may be missing certain keywords or slots, which can be filled from previous and/or subsequent user speech messages. For instance, if the current user speech message is "What specialties are there?", the subject slot is missing, but by combining it with the previous user speech message "How is the weather in Changzhou?", "Changzhou" can be extracted as the subject, so the basic intent information finally obtained for the current user speech message becomes "What specialties does Changzhou have?".
Step 103: determine the corresponding interaction instruction according to the emotion recognition result and the basic intent information.
The correspondence between, on one side, the emotion recognition result and the basic intent information and, on the other side, the interaction instruction can be established through a learning process. In an embodiment of the present invention, the content and form of the interaction instruction include one or more of the following emotion presentation modes: a text output emotion presentation mode, a music playback emotion presentation mode, a speech emotion presentation mode, an image emotion presentation mode, and a mechanical action emotion presentation mode. It should be understood, however, that the specific emotion presentation modes of the interaction instruction can be adjusted to the needs of the interaction scenario, and the present invention does not limit the specific content and form of the interaction instruction.
In an embodiment of the present invention, corresponding emotion intent information may first be determined according to the emotion recognition result and the basic intent information, and the corresponding interaction instruction is then determined from the emotion intent information, or from the emotion intent information together with the basic intent information. The emotion intent information here can have concrete content.
Specifically, the content of the emotion intent information is intent information that carries emotion; it can reflect the emotional need of the user message while also reflecting the basic intent. The correspondence between the emotion intent information on one hand and the emotion recognition result and basic intent information on the other can be pre-established through a prior learning process. In an embodiment of the present invention, the emotion intent information may include an emotional-need item corresponding to the emotion recognition result, or may include such an emotional-need item together with an association between the emotion recognition result and the basic intent information. The association between the emotion recognition result and the basic intent information can be preset (for example by rules or logical judgment). For example, when the emotion recognition result is "anxious" and the basic intent information is "report a lost credit card", the determined emotion intent information may include the association between them, "reporting a lost credit card; the user is very anxious; the card may be lost or stolen", and the determined emotional-need item may be "comfort". The association between the emotion recognition result and the basic intent information may also be produced by a model obtained through a dedicated training process (for example an end-to-end model that directly outputs the emotion intent from the emotion recognition result and the basic intent information as inputs). Such a model may be a fixed deep network model (for example one embedding preset rules), or it may be continuously updated through online learning (for example a reinforcement learning model with an objective function and a reward function, which keeps evolving as the number of human-computer interactions grows).
It should be understood, however, that the emotion intent information may also exist only as an identifier of a mapping relation. The correspondence between the emotion intent information and the interaction instruction, or between the emotion intent information plus the basic intent information and the interaction instruction, can likewise be pre-established through a prior learning process.
It should also be understood that in some application scenarios the feedback content of the emotion intent information needs to be displayed. For example, in some customer service interaction scenarios, the emotion intent information analyzed from the customer's speech needs to be presented to the customer service agent as a reminder; in that case the emotion intent information must be determined and its feedback content displayed. In other application scenarios, a corresponding interaction instruction is issued directly and the feedback content of the emotion intent information does not need to be displayed; in that case the interaction instruction can also be determined directly from the emotion recognition result and the basic intent information, without generating the emotion intent information.
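A small sketch of how the preset (rule-based) correspondence described above might be organized, assuming discrete emotion categories and intent labels as keys; the table entries, labels, and reply texts are illustrative only:

```python
# (emotion category, basic intent) -> (emotional need, interaction instruction template)
RESPONSE_TABLE = {
    ("anxious", "report_lost_credit_card"):
        ("comfort", "Please don't worry. I will freeze the card right away and guide you through reporting it lost."),
    ("calm", "query_flight_time"):
        ("neutral", "The flight information you asked about is as follows: ..."),
}

def pick_instruction(emotion: str, intent: str):
    """Fall back to an emotion-neutral reply when no emotion-specific entry exists."""
    return RESPONSE_TABLE.get((emotion, intent),
                              ("neutral", "Here is the information you requested: ..."))
```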
In an embodiment of the present invention, in order to further improve the accuracy of the obtained emotion intent information, the emotion intent information may be determined from the emotion recognition result and basic intent information of the current user speech message in combination with the emotion recognition results and basic intent information of previous and/or subsequent user speech messages. This requires recording the emotion recognition result and basic intent information of the current user speech message in real time so that they can serve as references when emotion intent information is determined for other user speech messages. For example, suppose the current user speech message is "How do I withdraw cash without a bank card?" and the obtained emotion recognition result is "anxious"; the cause of the anxiety cannot be accurately judged from the current message alone. By tracing back through previous user speech messages, a message "How do I report a lost bank card?" is found, from which the emotion intent information can be inferred as "the lost bank card causes the anxiety; the user wants to know how to report the loss and how to withdraw cash without the card". An interaction instruction can then be generated for this emotion intent information, for example playing a comforting voice prompt: "For cardless withdrawal, please follow these steps. Please don't worry; a lost bank card can also be handled as follows ...".
In an embodiment of the present invention, in order to further improve the accuracy of the corresponding interaction instruction, the interaction instruction may also be determined from the emotion intent information and basic intent information of the current user speech message in combination with the emotion intent information and basic intent information of previous and/or subsequent user speech messages. This likewise requires recording the emotion recognition result and basic intent information of the current user speech message in real time so that they can serve as references when interaction instructions are determined for other user speech messages.
Fig. 2 is a flow diagram of obtaining the audio emotion recognition result from the audio data of the user speech message in the speech emotion interaction method provided by one embodiment of the invention. As shown in Fig. 2, in step 101, obtaining the emotion recognition result from the audio data of the user speech message includes:
Step 111: extract the audio feature vector of the user speech message, where the user speech message corresponds to one segment of speech in the audio stream to be recognized, and the audio feature vector includes one or more of the following audio features: an energy feature, a voiced-frame-count feature, a fundamental frequency feature, a formant feature, a harmonic-to-noise-ratio feature, and an MFCC feature.
Step 112: match the audio feature vector of the user speech message against a plurality of emotion feature models, where each emotion feature model corresponds to one of a plurality of emotion categories.
Step 113: take the emotion category corresponding to the matched emotion feature model as the emotion category of the user speech message.
It can be seen that the speech emotion interaction method provided by the embodiments of the present invention, on the basis of understanding the user's basic intent information, combines the emotion recognition result obtained from the user message, and further infers the user's emotion intent or directly derives an emotion-aware interaction instruction from the basic intent information and the emotion recognition result, thereby solving the problem that prior-art intelligent interaction methods can neither analyze the deeper intent and emotional needs behind a user message nor provide a more humanized interactive experience.
Step 111: extract the audio feature vector of the user speech message in the audio stream to be recognized, where the user speech message corresponds to one segment of speech in the audio stream to be recognized.
The audio feature vector includes the values of at least one audio feature along at least one vector direction. In effect, a multidimensional vector space is used to characterize all the audio features: in this space, the direction and magnitude of the audio feature vector can be regarded as the vector sum of the values that the individual audio features take along their respective vector directions, and the value of one audio feature along one vector direction can be regarded as one component of the audio feature vector. User speech messages carrying different emotions necessarily exhibit different audio features, and the present invention exploits the correspondence between different emotions and different audio features to recognize the emotion of the user speech message. Specifically, the audio features may include one or more of the following: an energy feature, a voiced-frame-count feature, a fundamental frequency feature, a formant feature, a harmonic-to-noise-ratio feature, and an MFCC feature. In an embodiment of the present invention, the following vector directions (statistics) can be defined in the vector space: ratio, mean, maximum, median, and standard deviation.
The energy feature refers to the power-spectrum characteristics of the user speech message and can be obtained by summing the power spectrum. It can be calculated as E(k) = Σ_{j=0}^{N-1} P(k, j), where E denotes the value of the energy feature, k is the frame index, j is the frequency-bin index, N is the frame length, and P denotes the power spectrum. In an embodiment of the present invention, the energy feature may include the first-order difference of the short-time energy and/or the proportion of energy below a preset frequency. The first-order difference of the short-time energy can be calculated as:
VE(k) = (-2*E(k-2) - E(k-1) + E(k+1) + 2*E(k+2)) / 3;
The energy below a preset frequency can be measured by a ratio; for example, the proportion of total energy contributed by the band below 500 Hz can be calculated as:
p1 = Σ_{k=k1}^{k2} Σ_{j=0}^{j500} P(k, j) / Σ_{k=k1}^{k2} Σ_{j=0}^{N-1} P(k, j);
where j500 is the frequency-bin index corresponding to 500 Hz, k1 is the index of the speech start frame of the user speech message to be recognized, and k2 is the index of its speech end frame.
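A short sketch of these energy features under the definitions above, assuming P is a (frames x bins) power-spectrum array already computed from the framed signal; the NumPy usage, function name, and the crude boundary handling of VE are illustrative:

```python
import numpy as np

def energy_features(P: np.ndarray, fs: float, n_fft: int, k1: int, k2: int):
    """P: power spectrum, shape (num_frames, n_fft). Returns short-time energy E,
    its first-order difference VE, and the sub-500 Hz energy ratio p1 over frames k1..k2."""
    E = P.sum(axis=1)                                   # E(k) = sum_j P(k, j)
    VE = np.zeros_like(E)
    VE[2:-2] = (-2*E[:-4] - E[1:-3] + E[3:-1] + 2*E[4:]) / 3.0
    j500 = int(500 * n_fft / fs)                        # bin index for 500 Hz
    seg = P[k1:k2 + 1]
    p1 = seg[:, :j500 + 1].sum() / seg.sum()
    return E, VE, p1
```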
The voiced-frame-count feature refers to the relative number of voiced frames in the user speech message, and this quantity can be measured by ratios. For example, if the numbers of voiced frames and silent frames in the user speech message are n1 and n2 respectively, the ratio of voiced frames to silent frames is p2 = n1/n2, and the ratio of voiced frames to the total number of frames is p3 = n1/(n1+n2).
The fundamental frequency feature can be extracted with an algorithm based on the autocorrelation function of the linear prediction (LPC) error signal, and may include the fundamental frequency and/or its first-order difference. The fundamental frequency algorithm can proceed as follows. First, compute the linear prediction coefficients of the voiced frame x(k) and the linear prediction estimate x̂(k), and form the error signal e(n) = x(n) - x̂(n). Second, compute the autocorrelation function c1 of the error signal, c1(h) = Σ_n e(n) e(n+h). Then, within the offset range corresponding to fundamental frequencies of 80-500 Hz, find the maximum of the autocorrelation function and record the corresponding offset Δh. The fundamental frequency is then F0 = Fs/Δh, where Fs is the sampling frequency.
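A sketch of this pitch estimate under those steps, assuming the LPC coefficients come from the standard autocorrelation method (solved here with SciPy's Toeplitz solver); the LPC order and helper names are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def estimate_f0(frame: np.ndarray, fs: float, order: int = 12) -> float:
    """F0 from the autocorrelation of the LPC residual, searched over 80-500 Hz."""
    # LPC coefficients via the autocorrelation method (Toeplitz normal equations).
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    # Prediction residual e(n) = x(n) - sum_i a_i * x(n - i).
    pred = np.convolve(frame, np.concatenate(([0.0], a)))[:len(frame)]
    e = frame - pred
    # Autocorrelation of the residual, searched in the 80-500 Hz lag range.
    c1 = np.correlate(e, e, mode="full")[len(e) - 1:]
    lo, hi = int(fs / 500), int(fs / 80)
    dh = lo + int(np.argmax(c1[lo:hi]))
    return fs / dh
```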
The formant feature can be extracted with an algorithm based on root-finding of the linear prediction polynomial, and may include the first, second, and third formants and the first-order differences of the three formants. The harmonic-to-noise-ratio (HNR) feature can be extracted with an algorithm based on independent component analysis (ICA). The Mel-frequency cepstral coefficient (MFCC) feature may include the 1st through 12th MFCCs, obtained with the usual MFCC computation procedure, which is not repeated here.
It should be understood that which audio features are actually extracted for the audio feature vector depends on the needs of the actual scenario, and the present invention does not limit the type, number, or vector directions of the audio features corresponding to the extracted audio feature vector. In an embodiment of the present invention, however, the six audio features above, namely the energy feature, voiced-frame-count feature, fundamental frequency feature, formant feature, harmonic-to-noise-ratio feature, and MFCC feature, can all be extracted simultaneously to obtain the best emotion recognition performance. For example, when all six audio features are extracted, the resulting audio feature vector may include the 173 components shown in Table 1 below; using this audio feature vector with Gaussian mixture models (GMM) as the emotion feature models, the accuracy of speech emotion recognition on the CASIA Chinese emotion corpus can reach 74% to 80%.
Table 1
In an embodiment of the present invention, the audio stream to be recognized can be a customer service interaction audio stream, and a user speech message corresponds to one user input speech segment or one agent input speech segment in that stream. Because a customer service interaction is usually a question-and-answer exchange, one user input speech segment can correspond to one question or answer from the user in an interaction, and one agent input speech segment can correspond to one question or answer from the customer service agent. Since a single question or answer from the user or the agent is generally assumed to express an emotion completely, taking one user input speech segment or one agent input speech segment as the unit of emotion recognition both preserves the integrity of emotion recognition and keeps emotion recognition in the customer service interaction real-time.
Step 112: match the audio feature vector of the user speech message against a plurality of emotion feature models, where each emotion feature model corresponds to one of a plurality of emotion categories.
These emotion feature models can be established by prior learning on the audio feature vectors of a plurality of preset user speech messages carrying emotion classification labels for the respective emotion categories; this establishes the correspondence between emotion feature models and emotion categories, with each emotion feature model corresponding to one emotion category. As shown in Fig. 3, the prior learning process for establishing the emotion feature models may include: first, clustering the audio feature vectors of the plurality of preset user speech messages carrying emotion classification labels, to obtain clustering results for the preset emotion categories (S31); then, according to the clustering results, training the audio feature vectors of the preset user speech messages in each cluster into one emotion feature model (S32). Based on these emotion feature models, the emotion feature model corresponding to the current user speech message can be obtained through a matching process over audio feature vectors, and the corresponding emotion category is obtained in turn.
In an embodiment of the present invention, these emotion feature models can be Gaussian mixture models (GMM), with a mixture order of, for example, 5. The emotion feature vectors of the speech samples of the same emotion category can first be clustered with the K-means algorithm, and the clustering results used to compute initial values of the mixture model parameters (with, for example, 50 iterations). The E-M algorithm is then used to train the Gaussian mixture model corresponding to each emotion category (with, for example, 200 iterations). When these Gaussian mixture models are used for emotion category matching, the likelihood of the audio feature vector of the current user speech message can be computed under each emotion feature model, and the matched emotion feature model is determined by comparing these likelihoods, for example taking the emotion feature model whose likelihood exceeds a preset threshold and is the largest as the matched model.
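A condensed sketch of this train-and-match flow using scikit-learn's GaussianMixture (whose k-means initialization mirrors the initialization step above), shown only to illustrate the idea; the mixture order 5 follows the text, while the API choice, threshold value, and names are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_emotion_gmms(features_by_emotion: dict, n_mix: int = 5) -> dict:
    """features_by_emotion: emotion category -> (num_samples, 173) feature matrix."""
    models = {}
    for emotion, feats in features_by_emotion.items():
        gmm = GaussianMixture(n_components=n_mix, init_params="kmeans", max_iter=200)
        models[emotion] = gmm.fit(feats)
    return models

def match_emotion(models: dict, feat: np.ndarray, threshold: float = -1e6):
    """Pick the category whose model gives the highest log-likelihood above a preset threshold."""
    scores = {emo: m.score_samples(feat.reshape(1, -1))[0] for emo, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None
```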
It should be understood that although the description above uses Gaussian mixture models as the emotion feature models, the emotion feature models can also be realized in other forms, for example support vector machine (SVM) models, K-nearest-neighbor (KNN) classifiers, hidden Markov models (HMM), and artificial neural network (ANN) models.
In an embodiment of the present invention, the plurality of emotion categories may include: a satisfied category, a calm category, and an irritated category, corresponding to the emotional states a user is likely to exhibit in a customer service interaction scenario. In another embodiment, the plurality of emotion categories may include: a satisfied category, a calm category, an irritated category, and an angry category, corresponding to the emotional states a customer service agent is likely to exhibit in a customer service interaction scenario. That is, when the audio stream to be recognized is the user-agent interaction audio stream of a customer service scenario, if the current user speech message corresponds to an agent input speech segment, the plurality of emotion categories may include: a satisfied category, a calm category, and an irritated category; if the current user speech message corresponds to a user input speech segment, the plurality of emotion categories may include: a satisfied category, a calm category, an irritated category, and an angry category. Classifying the emotions of users and agents in this way is lean enough for a call center system, reducing the amount of computation while still meeting the emotion recognition needs of the call center system. It should be understood, however, that the type and number of these emotion categories can be adjusted according to the actual application scenario.
Step 113: take the emotion category corresponding to the matched emotion feature model as the emotion category of the user speech message.
As described above, since there is a correspondence between emotion feature models and emotion categories, once the matched emotion feature model has been determined by the matching process of step 112, the emotion category corresponding to that model is the recognized emotion category. For example, when the emotion feature models are Gaussian mixture models, the matching process can be implemented by computing the likelihood of the audio feature vector of the current user speech message under each emotion feature model, and the emotion category of the model whose likelihood exceeds the preset threshold and is the largest is taken as the emotion category of the user speech message.
It can be seen that the speech emotion interaction method provided by the embodiments of the present invention achieves real-time emotion recognition of user speech messages by extracting the audio feature vector of the user speech message in the audio stream to be recognized and matching it against pre-established emotion feature models.
It should also be understood that the emotion categories recognized by the speech emotion interaction method of the embodiments of the present invention can be further combined with specific scenario requirements to enable more flexible secondary applications. In an embodiment of the present invention, the emotion category of the currently recognized user speech message can be displayed in real time, with the display mode adjusted to the actual scenario. For example, different emotion categories can be indicated by different colors of a signal lamp, so that changes in the lamp color remind the agent and quality inspectors in real time of the emotional state of the current call. In another embodiment, the emotion categories of the user speech messages recognized within a preset time period can also be aggregated: for example, the audio of a call recording is numbered, the timestamps of the start and end points of each user speech message and the emotion recognition results are recorded, an emotion recognition database is built, and the number of occurrences and probability of each emotion over a period of time are counted and presented as charts or tables, which the enterprise can use as a reference for judging agent service quality over that period. In another embodiment, an emotion response message corresponding to the recognized emotion category of the user speech message can be sent in real time, which is suitable for unattended machine customer service scenarios. For example, when the user in the current call is recognized in real time as being in an "angry" state, a soothing reply corresponding to the "angry" state is automatically returned to calm the user down so that the conversation can continue. The correspondence between emotion categories and emotion response messages can be pre-established through a prior learning process.
In an embodiment of the present invention, before the audio feature vector of the user speech message in the audio stream to be recognized is extracted, the user speech message must first be extracted from the audio stream to be recognized so that emotion recognition can subsequently be performed with the user speech message as the unit; this extraction process can be performed in real time.
Fig. 4 is a flow diagram of extracting the user speech message in the speech emotion interaction method provided by one embodiment of the invention. As shown in Fig. 4, the method for extracting the user speech message includes:
Step 401: determine the speech start frame and the speech end frame in the audio stream to be recognized.
The speech start frame is the first frame of a user speech message, and the speech end frame is the last frame of a user speech message. Once the speech start frame and the speech end frame have been determined, the portion between them is the user speech message to be extracted.
Step 402: extract the portion of the audio stream between the speech start frame and the speech end frame as the user speech message.
In an embodiment of the present invention, as shown in Fig. 5, the speech start frame and the speech end frame in the audio stream to be recognized can be determined through the following steps:
Step 501: judge whether a speech frame in the audio stream to be recognized is a voiced frame or a non-voiced frame.
In an embodiment of the present invention, the decision between voiced and non-voiced frames can be based on a voice activity detection (VAD) decision parameter and the power-spectrum mean, as shown in Fig. 6 and detailed below:
Step 5011: preprocess the audio stream to be recognized, including framing, windowing, and pre-emphasis. A Hamming window can be used, with a pre-emphasis coefficient of 0.97. Denote the k-th preprocessed frame as x(k) = [x(k*N), x(k*N+1), ..., x(k*N+N-1)], where N is the frame length, for example 256. It should be understood, however, that whether preprocessing is needed at all, and which preprocessing steps are applied, depend on the actual scenario, and the present invention places no limitation on this.
Step 5012: apply a discrete Fourier transform (DFT) to the preprocessed k-th frame x(k) and compute its power spectrum, with the DFT length equal to the frame length:
P(k, j) = |FFT(x(k))|^2, j = 0, 1, ..., N-1;
where j is the frequency-bin index.
Step 5013: compute the posterior signal-to-noise ratio γ and the prior signal-to-noise ratio ξ:
γ(k, j) = P(k, j) / λ(k, j);
ξ(k, j) = α ξ(k-1, j) + (1-α) max(γ(k, j) - 1, 0);
where the coefficient α = 0.98; λ is the background-noise power spectrum, whose initial value can be taken as the arithmetic mean of the power spectra of the first 5 to 10 frames; min() and max() denote the minimum and maximum functions, respectively; and the prior SNR ξ(k, j) can be initialized to 0.98.
Step 5014: compute the likelihood ratio parameter η:
η(k, j) = exp(γ(k, j) ξ(k, j) / (1 + ξ(k, j))) / (1 + ξ(k, j));
Step 5015: compute the VAD decision parameter Γ and the power-spectrum mean ρ:
Γ(k) = (1/N) Σ_{j=0}^{N-1} η(k, j); ρ(k) = (1/N) Σ_{j=0}^{N-1} P(k, j);
the VAD decision parameter can be initialized to 1.
Step 5016: judge whether the VAD decision parameter Γ(k) of the k-th frame is greater than or equal to a first preset VAD threshold, and whether ρ(k) is greater than or equal to a preset power-mean threshold. In an embodiment of the present invention, the first preset VAD threshold can be 5 and the preset power-mean threshold can be 0.01.
Step 5017: if both judgments in step 5016 are affirmative, classify the k-th audio frame as a voiced frame.
Step 5018: if at least one of the two judgments in step 5016 is negative, classify the k-th audio frame as a silent frame and execute step 5019.
Step 5019: update the noise power spectrum λ by the following formula:
λ(k+1, j) = β * λ(k, j) + (1 - β) * P(k, j);
where the coefficient β is a smoothing factor and can take the value 0.98.
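A compact sketch of the per-frame decision loop in steps 5011-5019, written as a stateful class under the formulas above; the exact likelihood-ratio form and the initial noise estimate are reconstructions, so treat this as an illustration of the scheme rather than the patent's reference implementation:

```python
import numpy as np

class FrameVAD:
    def __init__(self, n_fft=256, alpha=0.98, beta=0.98, vad_thresh=5.0, power_thresh=0.01):
        self.alpha, self.beta = alpha, beta
        self.vad_thresh, self.power_thresh = vad_thresh, power_thresh
        self.noise = None                      # background-noise power spectrum lambda
        self.xi = np.full(n_fft, 0.98)         # prior SNR, initialized to 0.98

    def is_voiced(self, frame: np.ndarray) -> bool:
        P = np.abs(np.fft.fft(frame)) ** 2                 # step 5012: power spectrum
        if self.noise is None:
            self.noise = P.copy()                          # crude init from the first frame
        gamma = P / np.maximum(self.noise, 1e-12)          # step 5013: posterior SNR
        self.xi = self.alpha * self.xi + (1 - self.alpha) * np.maximum(gamma - 1, 0)
        eta = np.exp(gamma * self.xi / (1 + self.xi)) / (1 + self.xi)   # step 5014
        Gamma, rho = eta.mean(), P.mean()                  # step 5015
        voiced = Gamma >= self.vad_thresh and rho >= self.power_thresh  # steps 5016-5018
        if not voiced:                                     # step 5019: update noise spectrum
            self.noise = self.beta * self.noise + (1 - self.beta) * P
        return voiced
```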
It can be seen that by continuously repeating the method steps shown in Fig. 6, the voiced frames and non-voiced frames in the audio stream to be recognized can be detected in real time. These voiced/non-voiced decisions are the basis for subsequently identifying the speech start frame and the speech end frame.
Step 502: after the speech end frame of the previous user speech message has been determined, or when the current user speech message is the first user speech message of the audio stream to be recognized, if a first preset number of consecutive speech frames are all judged to be voiced frames, take the first speech frame of that group as the speech start frame of the current user speech message.
In an embodiment of the present invention, two endpoint flags, flag_start and flag_end, can first be defined as the detection state variables for the speech start frame and the speech end frame respectively, with true and false representing detected and not detected. When flag_end = true, the end frame of a user speech message has been determined, and detection of the start frame of the next user speech message begins. When the VAD decision parameters of 30 consecutive frames are all greater than or equal to a second preset threshold, the stream has entered a user speech message, and the first speech frame of those 30 frames is taken as the speech start frame, with flag_start = true; otherwise flag_start = false.
Step 503: after the speech start frame of the current user speech message has been determined, if a second preset number of consecutive speech frames are all judged to be non-voiced frames, those frames no longer belong to the user speech message, and the first speech frame of that group is taken as the speech end frame of the current user speech message.
Specifically, continuing the example above, when flag_start = true, the stream has entered a user speech message whose speech start frame has been determined, and checking for the end frame of the current user speech message begins. When the VAD decision parameters of 30 consecutive frames are all below a third preset threshold, the end of the current user speech message is declared, flag_end = true, and the first of those 30 frames is the speech end frame; otherwise flag_end = false.
In an embodiment of the present invention, in order to further improve the accuracy of the speech start frame and speech end frame decisions and avoid false alarms, the second preset threshold and the third preset threshold can both be made larger than the first preset threshold used in the voiced/non-voiced frame detection above; for example, the second preset threshold can be 40 and the third preset threshold can be 20.
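A sketch of the start/end state machine in steps 502-503, assuming a per-frame VAD decision parameter is already available (for example from the FrameVAD sketch above); the run length of 30 frames and the thresholds of 40 and 20 follow the text, everything else is illustrative:

```python
class EndpointDetector:
    def __init__(self, run_len=30, start_thresh=40.0, end_thresh=20.0):
        self.run_len, self.start_thresh, self.end_thresh = run_len, start_thresh, end_thresh
        self.in_speech = False          # flag_start / flag_end rolled into one state
        self.run = 0                    # length of the current qualifying run of frames
        self.segments, self.seg_start = [], None

    def push(self, frame_idx: int, vad_param: float):
        if not self.in_speech:
            self.run = self.run + 1 if vad_param >= self.start_thresh else 0
            if self.run >= self.run_len:                       # step 502: start frame found
                self.seg_start = frame_idx - self.run_len + 1
                self.in_speech, self.run = True, 0
        else:
            self.run = self.run + 1 if vad_param < self.end_thresh else 0
            if self.run >= self.run_len:                       # step 503: end frame found
                self.segments.append((self.seg_start, frame_idx - self.run_len + 1))
                self.in_speech, self.run = False, 0
        return self.segments
```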
It can be seen that through the method steps shown in Fig. 5, the speech start frame and the speech end frame in the audio stream to be recognized can be determined, and the user speech message between them can be extracted for emotion recognition.
It should be understood that although the descriptions of the embodiments of Fig. 5 and Fig. 6 above introduce some computation coefficients, parameter initial values, and decision thresholds, the initial values of these coefficients and parameters and the decision thresholds can be adjusted according to the actual application scenario, and the present invention places no limitation on their values.
Fig. 7 is a flow diagram of obtaining basic intent information from the user speech message in the speech emotion interaction method provided by one embodiment of the invention. As shown in Fig. 7, the process of obtaining the basic intent information may include the following steps:
Step 701: match the text content of the user speech message against a plurality of preset semantic templates in a semantic knowledge base to determine the matched semantic template, where the correspondence between semantic templates and basic intent information is pre-established in the semantic knowledge base, and the same intent information corresponds to one or more semantic templates.
It should be understood that matching semantics through semantic templates (for example, standard questions and extended-question semantic templates) is one implementation; the speech text input by the user can also be matched or classified directly through a network that extracts character, word, and sentence vector features (possibly with an attention mechanism).
Step 702: obtain the basic intent information corresponding to the matched semantic template.
In an embodiment of the present invention, the text content of the user speech message can correspond to a "standard question" in the semantic knowledge base. A "standard question" is the text used to represent a knowledge point, with the main goals of clear expression and easy maintenance. Here, "question" should not be interpreted narrowly as "inquiry" but broadly as an "input" that has a corresponding "output". Ideally, the user would phrase the input to the intelligent interaction machine as a standard question, and the machine's semantic recognition system would immediately understand the user's meaning.
In practice, however, users often do not use standard questions but rather deformations of them, that is, extended questions. For intelligent semantic recognition, the knowledge base therefore needs extended questions of the standard questions, which differ slightly in expression but express the same meaning. Thus, in a further embodiment of the invention, a semantic template is a set of one or more semantic expressions representing a certain semantic content, generated by developers combining that semantic content according to predetermined rules; that is, one semantic template can describe sentences with many different ways of expressing the corresponding semantic content, covering the possible deformations of the text content of the user speech message. Matching the text content of the user message against preset semantic templates avoids the limitation of recognizing user messages only with a "standard question" that can describe just one way of expressing the content.
For example, abstract semantics are used to further abstract the generic attributes of an ontology. The abstract semantics of a category describe the different expressions of a kind of abstract semantics through a set of abstract semantic expressions; to express more abstract semantics, these abstract semantic expressions are expanded on their components.
It should be appreciated that the specific content and part of speech of the semantic component words, the specific content and part of speech of the semantic rule words, and the definition and collocation of the semantic symbols can all be preset by the developer according to the specific interactive business scenario to which the voice emotion interaction method is applied; the present invention places no limitation on this.
In an embodiment of the present invention, the process of determining the matched semantic template according to the text content of the user voice message can be implemented by a similarity calculation process. Specifically, multiple text similarities between the text content of the user voice message and the multiple preset semantic templates are calculated, and the semantic template with the highest text similarity is then taken as the matched semantic template. One or more of the following similarity calculation methods can be used: the edit distance method, the n-gram method, the Jaro-Winkler method, and the Soundex method. In a further embodiment, when the semantic component words and semantic rule words in the text content of the user voice message have been recognized, the semantic component words and semantic rule words contained in the user voice message and in the semantic templates can also be converted into simplified text strings to improve the efficiency of the semantic similarity calculation.
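As a sketch of the similarity-based selection just described, the following Python code normalizes one of the listed measures (edit distance, i.e. Levenshtein distance) into a 0-to-1 similarity score and picks the highest-scoring template; the normalization by the longer string length is an assumption, since the embodiment only names the candidate similarity measures.

# Sketch: choose the matched semantic template by normalized edit-distance similarity.
def edit_distance(a, b):
    # classic Levenshtein distance via dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    # map edit distance onto a 0..1 similarity, 1 meaning identical strings
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def best_template(user_text, templates):
    # the template with the highest text similarity is taken as the matched template
    return max(templates, key=lambda t: similarity(user_text.lower(), t.lower()))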
In an embodiment of the present invention, as mentioned above, a semantic template may consist of semantic component words and semantic rule words, and these words are in turn related to their parts of speech in the semantic template and to the grammatical relations between words. Therefore, the similarity calculation process can specifically be: first recognize the words in the text of the user voice message, their parts of speech, and the grammatical relations between them; then identify the semantic component words and semantic rule words among them according to the parts of speech and grammatical relations; and then introduce the identified semantic component words and semantic rule words into a vector space model to calculate the multiple similarities between the text content of the user voice message and the multiple preset semantic templates. In an embodiment of the present invention, one or more of the following word segmentation methods can be used to recognize the words in the text content of the user voice message, their parts of speech, and the grammatical relations between them: the hidden Markov model method, the forward maximum matching method, the reverse maximum matching method, and the named entity recognition method.
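The vector-space comparison mentioned above can be sketched as follows; here plain whitespace tokens stand in for the semantic component words and semantic rule words that would be produced by word segmentation and part-of-speech tagging, and the cosine measure over term-frequency vectors is one common choice rather than a requirement of the embodiment.

from collections import Counter
from math import sqrt

def cosine_similarity(words_a, words_b):
    # term-frequency vectors over the extracted component/rule words
    va, vb = Counter(words_a), Counter(words_b)
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# e.g. component words extracted from the user text vs. from one semantic template
print(cosine_similarity(["report", "credit", "card", "lost"],
                        ["credit", "card", "lost", "steps"]))   # 0.75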
In an embodiment of the present invention, as mentioned above, a semantic template may be a set of multiple semantic expressions representing a certain semantic content; in that case a single semantic template can describe sentences expressing the corresponding semantic content in multiple different ways, corresponding to multiple extended questions of the same standard question. Therefore, when calculating the semantic similarity between the text content of the user voice message and a preset semantic template, it is necessary to calculate the similarities between the text content of the user voice message and at least one extended question expanded from each of the multiple preset semantic templates, and then take the semantic template corresponding to the extended question with the highest similarity as the matched semantic template. These expanded extended questions can be obtained according to the semantic component words and/or semantic rule words and/or semantic symbols contained in the semantic template.
Of course, the method of obtaining the basic intent information is not limited to this; the voice text information input by the user can also be fed directly to a network that extracts character, word, and sentence vector features (possibly with an attention mechanism added) and is directly matched or classified into the basic intent information.
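A hedged sketch of this alternative, assuming a PyTorch environment: token vectors are produced by an embedding layer, pooled with a single attention weight per token, and classified directly into basic intent information. The vocabulary size, embedding size, and number of intent classes below are illustrative placeholders, not values from the embodiment.

import torch
import torch.nn as nn

class AttentionIntentClassifier(nn.Module):
    # embedding + single-head attention pooling + linear intent classifier
    def __init__(self, vocab_size=5000, embed_dim=128, num_intents=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn_score = nn.Linear(embed_dim, 1)        # one attention score per token
        self.classifier = nn.Linear(embed_dim, num_intents)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word/character indices
        x = self.embed(token_ids)                            # (batch, seq_len, embed_dim)
        weights = torch.softmax(self.attn_score(x), dim=1)   # attention over the sequence
        pooled = (weights * x).sum(dim=1)                    # attention-weighted sentence vector
        return self.classifier(pooled)                       # intent logits

logits = AttentionIntentClassifier()(torch.randint(0, 5000, (1, 12)))
predicted_intent_id = logits.argmax(dim=-1)                  # index of the predicted basic intent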
It can be seen that the voice emotion interaction method provided by the embodiments of the present invention can realize an intelligent interaction mode that provides different answering services according to the user's emotional state, thereby greatly improving the intelligent interaction experience. For example, when the voice emotion interaction method provided by an embodiment of the present invention is applied to a physical customer-service robot in the banking field, a user says to the robot by voice: "What should I do to report my credit card lost?" The robot receives the user voice message through a microphone, analyzes the audio data of the user voice message to obtain the audio emotion recognition result "anxious", and takes the audio emotion recognition result as the final emotion recognition result. The user voice message is converted into text, and the client's basic intent information is obtained as "report a credit card lost" (this step may also need to combine past or subsequent user voice messages with the semantic knowledge base of the banking field). Then the emotion recognition result "anxious" and the basic intent information "report a credit card lost" are linked together to obtain the emotion intent information "report a credit card lost; the user is very anxious; the credit card may have been lost or stolen" (this step may also need to combine past or subsequent user voice messages with the semantic knowledge base of the banking field). The corresponding interactive instruction is determined: the screen outputs the steps for reporting the credit card lost, while the voice broadcast presents the emotion category "comfort" with a high emotion intensity level, and a voice broadcast matching that emotion instruction, possibly with a relaxed tone and medium speech rate, is output to the user: "The steps for reporting the credit card lost are shown on the screen; please do not worry. If the credit card has been lost or stolen, it will be frozen immediately after the loss is reported and will not cause damage to your property or credit..."
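The linkage in the banking example can be sketched as a simple rule table keyed by the emotion recognition result and the basic intent information; the data structure and field names below are assumptions made for illustration, since the embodiment leaves the concrete mapping to the semantic knowledge base and the application configuration.

from dataclasses import dataclass

@dataclass
class InteractiveInstruction:
    screen_output: str
    broadcast_emotion: str     # emotion category presented by the voice broadcast
    emotion_intensity: str     # e.g. "high"
    tone: str
    speech_rate: str

# rule table keyed by (emotion recognition result, basic intent information)
RULES = {
    ("anxious", "report_credit_card_lost"): InteractiveInstruction(
        screen_output="steps for reporting the credit card lost",
        broadcast_emotion="comfort",
        emotion_intensity="high",
        tone="relaxed",
        speech_rate="medium"),
}

def decide_instruction(emotion, basic_intent):
    # the emotion intent information links the emotion recognition result with the
    # basic intent, e.g. "report a credit card lost; the user is very anxious"
    return RULES.get((emotion, basic_intent))

print(decide_instruction("anxious", "report_credit_card_lost"))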
In an embodiment of the present invention, some application scenarios (such as bank customer service) may also take the privacy of the interaction content into account and avoid the voice broadcast operation, instead realizing the interactive instruction in the form of plain text or animation. The modality selection of the interactive instruction can be adjusted according to the application scenario.
It should be appreciated that the presentation of the emotion category and the emotion intensity level in the interactive instruction can be realized by adjusting the speech rate and intonation of the voice broadcast, among other means; the present invention places no limitation on this.
For another example, when the voice emotion interaction method provided by an embodiment of the present invention is applied to a virtual intelligent personal assistant application on an intelligent terminal, the user says to the intelligent terminal by voice: "What is the fastest route from home to the airport?" The virtual intelligent personal assistant application receives the user voice message through the microphone of the intelligent terminal, and by analyzing the audio data of the user voice message obtains the audio emotion recognition result "excited"; at the same time the user voice message is converted into text, and by analyzing the text content of the user voice message the text emotion recognition result "anxious" is obtained; through logical judgment, the two emotion categories "excited" and "anxious" are both taken as the emotion recognition result. By combining past or subsequent user voice messages with the semantic knowledge base of this field, the client's basic intent information is obtained as "obtain for the user the fastest route navigation from home to the airport". The emotion intent information obtained by linking "anxious" with the basic intent information "obtain for the user the fastest route navigation from home to the airport" is "obtain for the user the fastest route navigation from home to the airport; the user is very anxious and may be worried about missing the flight", while the emotion intent information obtained by linking "excited" with the basic intent information is "obtain for the user the fastest route navigation from home to the airport; the user is very excited and may be about to travel". Two kinds of emotion intent information can therefore be generated here. At this point, by also combining past or subsequent user voice messages, it is found that the user previously asked "My flight takes off at 11 o'clock; when do I need to leave?", so the user's emotion recognition result is judged to be "anxious", and the emotion intent information is "obtain for the user the fastest route navigation from home to the airport; the user is very anxious and may be worried about missing the flight". The corresponding interactive instruction is determined: the screen outputs the navigation information, while the voice broadcast presents the emotion categories "comfort" and "warning", each with a high emotion intensity level, and a voice broadcast matching that emotion instruction, possibly with a smooth tone and medium speech rate, is output to the user: "The fastest route from your home address to the airport has been planned; please navigate as shown on the screen. Under normal driving you are expected to arrive at the airport within one hour, so please do not worry. Also, please plan your time, drive carefully, and do not exceed the speed limit."
For another example, when the voice emotion interaction method provided by an embodiment of the present invention is applied to an intelligent wearable device, the user says to the intelligent wearable device by voice while exercising: "What is the current state of my heartbeat?" The intelligent wearable device receives the user voice message through a microphone; by analyzing the audio data of the user voice message it obtains the audio emotion recognition result as the PAD three-dimensional emotion model vector (p1, a1, d1), and by analyzing the text content of the user voice message it obtains the text emotion recognition result as the PAD three-dimensional emotion model vector (p2, a2, d2); combining the audio emotion recognition result and the text emotion recognition result yields the final emotion recognition result (p3, a3, d3), characterizing a combination of "worry" and "anxiety". At the same time, by combining the semantic knowledge base of the health-care field, the intelligent wearable device obtains the client's basic intent information as "obtain the user's heartbeat data". Then the emotion recognition result (p3, a3, d3) and the basic intent "obtain the user's heartbeat data" are linked together, and the emotion intent information obtained is "obtain the user's heartbeat data; the user is worried and may currently have uncomfortable symptoms such as a rapid heartbeat". The interactive instruction is determined according to the correspondence between emotion intent information and interactive instructions: while the heartbeat data is output, the emotion (p6, a6, d6), i.e. a combination of "comfort" and "encouragement" each with a high emotion intensity level, is presented; at the same time a real-time heartbeat monitoring program lasting 10 minutes is started, with a voice broadcast of relaxed tone and slow speech rate: "Your current heartbeat is 150 beats per minute; please do not worry, this is still within the normal heartbeat range. If you feel uncomfortable symptoms such as a rapid heartbeat, please relax and take deep breaths to adjust. Your previous health data shows that your heart is working well, and you can strengthen your cardio-pulmonary function through regular exercise." The user's emotional state then continues to be monitored. If after 5 minutes the user says "I feel terrible", and the emotion recognition process obtains the emotion recognition result as the three-dimensional emotion model vector (p7, a7, d7), characterizing "pain", the interactive instruction is updated again: the screen outputs the heartbeat data, while the voice broadcast presents the emotion (p8, a8, d8), i.e. "warning" with a high emotion intensity level, outputs an alarm sound, and broadcasts with a calm tone and slow speech rate: "Your current heartbeat is 170 beats per minute, which has exceeded the normal range; please stop exercising and adjust your breathing. If you need to seek help, please use the screen."
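The combination of the audio PAD vector (p1, a1, d1) and the text PAD vector (p2, a2, d2) into the final result (p3, a3, d3) is not given a concrete formula in this example; the sketch below assumes a simple weighted average purely for illustration, and the weight and sample values are placeholders.

def fuse_pad(audio_pad, text_pad, audio_weight=0.5):
    # audio_pad = (p1, a1, d1), text_pad = (p2, a2, d2); the weighted average below
    # is an assumed fusion rule, not one specified by the embodiment
    w = audio_weight
    return tuple(w * a + (1.0 - w) * t for a, t in zip(audio_pad, text_pad))

p3, a3, d3 = fuse_pad((-0.3, 0.6, -0.2), (-0.4, 0.5, -0.1))   # illustrative PAD values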
An embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored on the memory and executed by the processor, characterized in that, when executing the computer program, the processor implements the voice emotion interaction method described in any of the preceding embodiments.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the voice emotion interaction method described in any of the preceding embodiments is implemented. The computer storage medium can be any tangible medium, such as a floppy disk, a CD-ROM, a DVD, a hard disk drive, or even a network medium.
It should be appreciated that although one implementation form of the embodiments of the present invention described above may be a computer program product, the methods of the embodiments of the present invention can be realized by software, hardware, or a combination of software and hardware. The hardware part can be realized using dedicated logic; the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will understand that the above methods and devices can be realized using computer-executable instructions and/or processor control code, such code being provided, for example, on a carrier medium such as a disk, CD, or DVD-ROM, in a programmable memory such as a read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier. The methods of the present invention can be realized by semiconductor hardware circuits such as very-large-scale integrated circuits or gate arrays, logic chips, or transistors, or by hardware circuits of programmable hardware devices such as field programmable gate arrays or programmable logic devices, or by software executed by various types of processors, or by a combination of the above hardware circuits and software, such as firmware.
It will be appreciated that although several modules or units of the device are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to illustrative embodiments of the present invention, the features and functions of two or more of the modules/units described above can be realized in one module/unit, and conversely, the features and functions of one module/unit described above can be further divided and realized by multiple modules/units. In addition, some of the modules/units described above may be omitted in certain application scenarios.
It should be appreciated that the qualifiers "first", "second", "third", etc. used in the description of the embodiments of the present invention are only used to state the technical solutions more clearly and cannot be used to limit the scope of protection of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

1. A voice emotion interaction method, characterized by comprising:
obtaining an audio emotion recognition result according to the audio data of a user voice message, obtaining a text emotion recognition result according to the text content of the user voice message, and determining an emotion recognition result according to the audio emotion recognition result and the text emotion recognition result;
performing intent analysis according to the text content of the user voice message to obtain corresponding basic intent information; and
determining a corresponding interactive instruction according to the emotion recognition result and the basic intent information;
wherein obtaining the emotion recognition result according to the audio data of the user voice message comprises: extracting an audio feature vector of the user voice message, wherein the user voice message corresponds to a segment of speech in an audio stream to be identified, and the audio feature vector includes one or more of the following audio features: an energy feature, a pronunciation frame number feature, a fundamental frequency feature, a formant feature, a harmonic-to-noise ratio feature, and a mel-frequency cepstral coefficient feature;
matching the audio feature vector of the user voice message against multiple emotion feature models, wherein the multiple emotion feature models respectively correspond to one of multiple emotion categories; and
taking the emotion category corresponding to the matched emotion feature model as the emotion category of the user voice message.
2. The voice emotion interaction method according to claim 1, characterized in that determining the corresponding interactive instruction according to the emotion recognition result and the basic intent information comprises:
determining corresponding emotion intent information according to the emotion recognition result and the basic intent information; and determining the corresponding interactive instruction according to the emotion intent information, or determining the corresponding interactive instruction according to the emotion intent information and the basic intent information;
wherein the emotion intent information includes emotional need information corresponding to the emotion recognition result; or,
the emotion intent information includes the emotional need information corresponding to the emotion recognition result and the association relationship between the emotion recognition result and the basic intent information.
3. The voice emotion interaction method according to claim 1, characterized in that the multiple emotion feature models are established by pre-learning a set of respective audio feature vectors of multiple preset voice segments carrying emotion classification labels corresponding to the multiple emotion categories.
4. The voice emotion interaction method according to claim 3, characterized in that the pre-learning process comprises:
clustering the set of respective audio feature vectors of the multiple preset voice segments carrying the emotion classification labels corresponding to the multiple emotion categories, to obtain clustering results for the preset emotion categories; and
according to the clustering results, training the set of audio feature vectors of the preset voice segments in each cluster into one emotion feature model.
5. The voice emotion interaction method according to claim 1, characterized in that the energy feature includes: a first-order difference of short-time energy, and/or the magnitude of energy below a preset frequency; and/or
the fundamental frequency feature includes: the fundamental frequency and/or a first-order difference of the fundamental frequency; and/or
the formant feature includes one or more of the following: the first formant, the second formant, the third formant, a first-order difference of the first formant, a first-order difference of the second formant, and a first-order difference of the third formant; and/or
the mel-frequency cepstral coefficient feature includes the 1st to 12th order mel-frequency cepstral coefficients and/or a first-order difference of the 1st to 12th order mel-frequency cepstral coefficients.
6. The voice emotion interaction method according to claim 1, characterized in that the audio features are characterized by one or more of the following calculation manners: a ratio value, a mean value, a maximum value, a median value, and a standard deviation.
7. The voice emotion interaction method according to claim 1, characterized in that the energy feature includes: the mean value, maximum value, median value, and standard deviation of the first-order difference of short-time energy, and/or the ratio value of the energy below a preset frequency to the total energy; and/or
the pronunciation frame number feature includes: the ratio value of the number of pronunciation frames to the number of silent frames, and/or the ratio value of the number of pronunciation frames to the total number of frames;
the fundamental frequency feature includes: the mean value, maximum value, median value, and standard deviation of the fundamental frequency, and/or the mean value, maximum value, median value, and standard deviation of the first-order difference of the fundamental frequency; and/or
the formant feature includes one or more of the following: the mean value, maximum value, median value, and standard deviation of the first formant; the mean value, maximum value, median value, and standard deviation of the second formant; the mean value, maximum value, median value, and standard deviation of the third formant; the mean value, maximum value, median value, and standard deviation of the first-order difference of the first formant; the mean value, maximum value, median value, and standard deviation of the first-order difference of the second formant; and the mean value, maximum value, median value, and standard deviation of the first-order difference of the third formant; and/or
the mel-frequency cepstral coefficient feature includes the mean value, maximum value, median value, and standard deviation of the 1st to 12th order mel-frequency cepstral coefficients, and/or the mean value, maximum value, median value, and standard deviation of the first-order difference of the 1st to 12th order mel-frequency cepstral coefficients.
8. The voice emotion interaction method according to claim 1, characterized in that obtaining the audio emotion recognition result according to the audio data of the user voice message further comprises:
determining a voice start frame and a voice end frame in the audio stream to be identified; and
extracting the portion of the audio stream between the voice start frame and the voice end frame as the user voice message;
wherein determining the voice start frame and the voice end frame in the audio stream to be identified includes:
judging whether each speech frame in the audio stream to be identified is a pronunciation frame or a non-pronunciation frame;
after the voice end frame of the previous voice segment, or when no first voice segment has yet been identified, when a first preset number of speech frames are consecutively judged to be pronunciation frames, taking the first speech frame of the first preset number of speech frames as the voice start frame of the current voice segment; and
after the voice start frame of the current voice segment, when a second preset number of speech frames are consecutively judged to be non-pronunciation frames, taking the first speech frame of the second preset number of speech frames as the voice end frame of the current voice segment.
9. A computer device, including a memory, a processor, and a computer program stored on the memory and executed by the processor, characterized in that, when executing the computer program, the processor implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the steps of the method according to any one of claims 1 to 7.
CN201810078883.6A 2018-01-26 2018-01-26 Voice mood exchange method, computer equipment and computer readable storage medium Pending CN110085262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810078883.6A CN110085262A (en) 2018-01-26 2018-01-26 Voice mood exchange method, computer equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN110085262A true CN110085262A (en) 2019-08-02

Family

ID=67412633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810078883.6A Pending CN110085262A (en) 2018-01-26 2018-01-26 Voice mood exchange method, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110085262A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489453A (en) * 2013-06-28 2014-01-01 陆蔚华 Product emotion qualification method based on acoustic parameters
CN103531198A (en) * 2013-11-01 2014-01-22 东南大学 Speech emotion feature normalization method based on pseudo speaker clustering
CN105082150A (en) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot man-machine interaction method based on user mood and intension recognition
CN105681546A (en) * 2015-12-30 2016-06-15 宇龙计算机通信科技(深圳)有限公司 Voice processing method, device and terminal
WO2017218243A2 (en) * 2016-06-13 2017-12-21 Microsoft Technology Licensing, Llc Intent recognition and emotional text-to-speech learning system
CN107516511A (en) * 2016-06-13 2017-12-26 微软技术许可有限责任公司 The Text To Speech learning system of intention assessment and mood
US9818406B1 (en) * 2016-06-23 2017-11-14 Intuit Inc. Adjusting user experience based on paralinguistic information
CN106531162A (en) * 2016-10-28 2017-03-22 北京光年无限科技有限公司 Man-machine interaction method and device used for intelligent robot
CN106570496A (en) * 2016-11-22 2017-04-19 上海智臻智能网络科技股份有限公司 Emotion recognition method and device and intelligent interaction method and device
CN106776936A (en) * 2016-12-01 2017-05-31 上海智臻智能网络科技股份有限公司 intelligent interactive method and system
CN106658129A (en) * 2016-12-27 2017-05-10 上海智臻智能网络科技股份有限公司 Emotion-based terminal control method and apparatus, and terminal
CN107562816A (en) * 2017-08-16 2018-01-09 深圳狗尾草智能科技有限公司 User view automatic identifying method and device

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648691A (en) * 2019-09-30 2020-01-03 北京淇瑀信息科技有限公司 Emotion recognition method, device and system based on energy value of voice
CN110648691B (en) * 2019-09-30 2023-06-27 北京淇瑀信息科技有限公司 Emotion recognition method, device and system based on energy value of voice
CN110931002A (en) * 2019-10-12 2020-03-27 平安科技(深圳)有限公司 Human-computer interaction method and device, computer equipment and storage medium
CN110942229A (en) * 2019-10-24 2020-03-31 北京九狐时代智能科技有限公司 Service quality evaluation method and device, electronic equipment and storage medium
CN111026843B (en) * 2019-12-02 2023-03-14 北京智乐瑟维科技有限公司 Artificial intelligent voice outbound method, system and storage medium
CN110827821B (en) * 2019-12-04 2022-04-12 三星电子(中国)研发中心 Voice interaction device and method and computer readable storage medium
CN110827821A (en) * 2019-12-04 2020-02-21 三星电子(中国)研发中心 Voice interaction device and method and computer readable storage medium
US11594224B2 (en) 2019-12-04 2023-02-28 Samsung Electronics Co., Ltd. Voice user interface for intervening in conversation of at least one user by adjusting two different thresholds
CN113035181A (en) * 2019-12-09 2021-06-25 斑马智行网络(香港)有限公司 Voice data processing method, device and system
CN110991427A (en) * 2019-12-25 2020-04-10 北京百度网讯科技有限公司 Emotion recognition method and device for video and computer equipment
CN111106995A (en) * 2019-12-26 2020-05-05 腾讯科技(深圳)有限公司 Message display method, device, terminal and computer readable storage medium
CN111106995B (en) * 2019-12-26 2022-06-24 腾讯科技(深圳)有限公司 Message display method, device, terminal and computer readable storage medium
CN111179903A (en) * 2019-12-30 2020-05-19 珠海格力电器股份有限公司 Voice recognition method and device, storage medium and electric appliance
CN111833907A (en) * 2020-01-08 2020-10-27 北京嘀嘀无限科技发展有限公司 Man-machine interaction method, terminal and computer readable storage medium
CN113779238A (en) * 2020-06-17 2021-12-10 北京沃东天骏信息技术有限公司 Data processing method, device, equipment and computer readable storage medium
CN112420049A (en) * 2020-11-06 2021-02-26 平安消费金融有限公司 Data processing method, device and storage medium
CN112951233A (en) * 2021-03-30 2021-06-11 平安科技(深圳)有限公司 Voice question and answer method and device, electronic equipment and readable storage medium
CN113160852A (en) * 2021-04-16 2021-07-23 平安科技(深圳)有限公司 Voice emotion recognition method, device, equipment and storage medium
CN113223560A (en) * 2021-04-23 2021-08-06 平安科技(深圳)有限公司 Emotion recognition method, device, equipment and storage medium
CN113506586A (en) * 2021-06-18 2021-10-15 杭州摸象大数据科技有限公司 Method and system for recognizing emotion of user
CN113450124A (en) * 2021-06-24 2021-09-28 未鲲(上海)科技服务有限公司 Outbound method, device, electronic equipment and medium based on user behavior
CN114298019A (en) * 2021-12-29 2022-04-08 中国建设银行股份有限公司 Emotion recognition method, emotion recognition apparatus, emotion recognition device, storage medium, and program product
CN115101074B (en) * 2022-08-24 2022-11-11 深圳通联金融网络科技服务有限公司 Voice recognition method, device, medium and equipment based on user speaking emotion
CN115101074A (en) * 2022-08-24 2022-09-23 深圳通联金融网络科技服务有限公司 Voice recognition method, device, medium and equipment based on user speaking emotion
CN115952288A (en) * 2023-01-07 2023-04-11 华中师范大学 Teacher emotion concern feature detection method and system based on semantic understanding
CN115952288B (en) * 2023-01-07 2023-11-03 华中师范大学 Semantic understanding-based teacher emotion care feature detection method and system

Similar Documents

Publication Publication Date Title
CN110085262A (en) Voice mood exchange method, computer equipment and computer readable storage medium
CN110085221A (en) Speech emotional exchange method, computer equipment and computer readable storage medium
CN108197115B (en) Intelligent interaction method and device, computer equipment and computer readable storage medium
Akçay et al. Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers
CN110085220A (en) Intelligent interaction device
CN110085211A (en) Speech recognition exchange method, device, computer equipment and storage medium
EP3174047B1 (en) Speech recognition
Jing et al. Prominence features: Effective emotional features for speech emotion recognition
Gharavian et al. Speech emotion recognition using FCBF feature selection method and GA-optimized fuzzy ARTMAP neural network
Bone et al. Robust unsupervised arousal rating: A rule-based framework withknowledge-inspired vocal features
Koolagudi et al. Choice of a classifier, based on properties of a dataset: case study-speech emotion recognition
Mower et al. Interpreting ambiguous emotional expressions
Bisio et al. Gender-driven emotion recognition through speech signals for ambient intelligence applications
Origlia et al. Continuous emotion recognition with phonetic syllables
Levitan et al. Combining Acoustic-Prosodic, Lexical, and Phonotactic Features for Automatic Deception Detection.
Sethu et al. Speech based emotion recognition
Al-Dujaili et al. Speech emotion recognition: a comprehensive survey
Hema et al. Emotional speech recognition using cnn and deep learning techniques
CN109935241A (en) Voice information processing method
Das et al. Optimal prosodic feature extraction and classification in parametric excitation source information for Indian language identification using neural network based Q-learning algorithm
Yücesoy Speaker age and gender classification using GMM supervector and NAP channel compensation method
Shah et al. Speech emotion recognition based on SVM using MATLAB
CN113853651A (en) Apparatus and method for speech-emotion recognition using quantized emotional states
Alonso et al. Continuous tracking of the emotion temperature
Dhar et al. A system to predict emotion from Bengali speech

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190802