CN114842880A - Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium - Google Patents

Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium

Info

Publication number
CN114842880A
Authority
CN
China
Prior art keywords
voice
speech
client
emotion
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210439847.4A
Other languages
Chinese (zh)
Inventor
季景瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weikun Shanghai Technology Service Co Ltd
Original Assignee
Weikun Shanghai Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weikun Shanghai Technology Service Co Ltd filed Critical Weikun Shanghai Technology Service Co Ltd
Priority to CN202210439847.4A
Publication of CN114842880A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/01 Customer relationship services
    • G06Q30/015 Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
    • G06Q30/016 After-sales
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Abstract

The invention relates to the field of artificial intelligence, and discloses an intelligent customer service voice rhythm adjusting method, which comprises the following steps: performing feature extraction on a spectrogram by using a depth separable convolution layer to obtain a feature vector set; performing reduced-order sampling on the feature vector set by using a global pooling layer to obtain a reduced-order matrix; outputting the predicted client speech emotion of the dialogue speech sequence by using an activation function in the full connection layer; and obtaining a trained speech emotion recognition model according to the loss value between the predicted client speech emotion and the real client speech emotion, recognizing the client speech emotion through the model, and adjusting the speech rhythm of the speech to be replied according to the client speech emotion. The invention also relates to blockchain technology, and the client speech emotion can be stored in blockchain nodes. The invention also provides an intelligent customer service voice rhythm adjusting device, equipment and medium. The invention can improve the accuracy of client speech emotion recognition and the communication effect between the intelligent customer service and the client.

Description

Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to an intelligent customer service voice rhythm adjusting method, device, equipment and storage medium.
Background
Currently, with the development of intelligent customer service, intelligent networked customer service terminals based on voice recognition are being gradually adopted, for example, voice-recognition-based intelligent customer service in the express delivery field, intelligent customer service in the telecommunications industry, intelligent customer service in the e-commerce field, and the like.
However, traditional intelligent customer service mainly communicates with clients at a uniform speech-rate and volume rhythm, and cannot determine the age of the client or whether the client is in a busy state during communication, so the emotion of the client cannot be accurately identified and the communication effect between the intelligent customer service and the client is poor.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for regulating the voice rhythm of intelligent customer service, and mainly aims to improve the accuracy of voice emotion recognition of a customer and improve the communication effect between the intelligent customer service and the customer.
In order to achieve the above object, the present invention provides an intelligent customer service voice rhythm adjusting method, comprising:
acquiring a conversation voice sequence to be trained of a client, marking a real client voice emotion corresponding to the conversation voice sequence, and constructing a spectrogram of the conversation voice sequence;
performing feature extraction on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set;
performing reduced-order sampling on the feature vector set by using a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix;
inputting the reduced matrix into a full connection layer in the speech emotion recognition model, and outputting the predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer;
calculating loss values of the predicted client speech emotion and the real client speech emotion by using a combined loss function in the speech emotion recognition model, and adjusting parameters of the speech emotion recognition model according to the loss values until the loss values meet preset conditions to obtain a trained speech emotion recognition model;
recognizing the obtained dialogue voice sequence to be recognized by using the trained voice emotion recognition model to obtain the client voice emotion of the dialogue voice sequence;
the method comprises the steps of obtaining a text to be replied of an intelligent customer service, carrying out voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of a customer.
Optionally, the extracting features of the speech spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set includes:
performing convolution operation on the spectrogram by utilizing the depth convolution in the depth separable convolution layer to obtain initial spectrogram characteristics;
and combining the initial spectrogram features by using point-by-point convolution in the depth separable convolution layer to obtain a feature vector set.
Optionally, the calculating the loss value of the predicted customer speech emotion and the real customer speech emotion by using a combined loss function in the speech emotion recognition model includes:
calculating a loss value for the predicted customer speech emotion and the real customer speech emotion using a combined loss function of:
L = L_s + \gamma L_c

L_s = -\sum_{i=1}^{k} y'_i \log(y_i)

L_c = \frac{1}{2} \sum_{i=1}^{m} \lVert x_i - c_{y_i} \rVert_2^2

wherein L represents the loss value, L_s represents the cross-entropy loss value, L_c represents the center loss value, \gamma represents the balance coefficient between the cross-entropy loss value and the center loss value, k represents the number of predicted client speech emotion classes, y_i represents the ith predicted client speech emotion, y'_i represents the ith real client speech emotion, m represents the number of training samples, x_i represents the feature of the ith sample, and c_{y_i} denotes the class center of the y_i-th class features.
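For illustration, a minimal sketch of this combined cross-entropy plus center loss, assuming PyTorch; the feature dimension, class count and γ value are illustrative assumptions, not values given by the invention:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedLoss(nn.Module):
    def __init__(self, num_classes=3, feat_dim=64, gamma=0.5):
        super().__init__()
        self.gamma = gamma  # balance coefficient between the two loss terms
        # One learnable class center per emotion class, used by the center loss
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, logits, features, labels):
        ls = F.cross_entropy(logits, labels)  # L_s: cross-entropy loss
        # L_c: half the squared distance of each feature to its class center
        lc = 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()
        return ls + self.gamma * lc

# Usage: loss = CombinedLoss()(logits, penultimate_features, emotion_labels)
```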
Optionally, the performing speech synthesis on the text to be replied to obtain speech to be replied includes:
converting the text to be replied into a phoneme sequence;
sequentially carrying out spectrum processing on the phoneme sequence by utilizing an encoder, a decoder and a residual network of a preset speech synthesis model to obtain a target Mel spectrum;
and performing audio conversion on the target Mel frequency spectrum by using a WaveGlow vocoder of the voice synthesis model to obtain the voice to be replied.
Optionally, the adjusting the voice rhythm of the voice to be replied according to the emotion of the client voice includes:
inputting the voice to be replied into a preset inverse filter according to the voice emotion of the client to obtain a glottal excitation signal;
and adjusting the speech rate and the volume of the glottal excitation signal to obtain the voice rhythm of the voice to be replied.
Optionally, the inputting the reduced order matrix into an activation function in the speech emotion recognition model, and outputting the predicted client speech emotion of the conversational speech sequence, includes:
carrying out full connection operation on the reduced order matrix by utilizing the full connection layer to obtain a full connection matrix;
and calculating the full-connection matrix by using the activation function to obtain the predicted speech emotion of the client.
Optionally, the constructing a spectrogram of the dialogue voice sequence includes:
extracting frame frequency of the dialogue voice sequence, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values;
deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values;
and summing the plurality of target frame frequency values to obtain the spectrogram.
In order to solve the above problem, the present invention further provides an intelligent customer service voice rhythm adjusting device, including:
the dialogue voice sequence acquisition module is used for acquiring a dialogue voice sequence to be trained of a client, marking the real client voice emotion corresponding to the dialogue voice sequence and constructing a spectrogram of the dialogue voice sequence;
the feature extraction module is used for performing feature extraction on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set;
the reduced order sampling module is used for carrying out reduced order sampling on the feature vector set by utilizing a global pooling layer in the speech emotion recognition model to obtain a reduced order matrix;
the client speech emotion prediction module is used for inputting the reduced matrix to a full connection layer in the speech emotion recognition model and outputting predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer;
the model training completion module is used for calculating loss values of the predicted speech emotion of the client and the real speech emotion of the client by using a combined loss function in the speech emotion recognition model, adjusting parameters of the speech emotion recognition model according to the loss values, and obtaining a trained speech emotion recognition model until the loss values meet preset conditions;
the speech emotion recognition module is used for recognizing the obtained conversation speech sequence to be recognized by using the trained speech emotion recognition model to obtain the speech emotion of the client of the conversation speech sequence;
and the voice rhythm adjusting module is used for acquiring a text to be replied of the intelligent customer service, performing voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of the customer.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
and the processor executes the computer program stored in the memory to realize the intelligent customer service voice rhythm regulation method.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the intelligent customer service voice rhythm regulation method described above.
In the embodiment of the invention, firstly, feature extraction is performed on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set, which reduces computation and improves the efficiency of feature extraction while keeping the accuracy of feature extraction unchanged, and the feature vector set is down-sampled by a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix, which further strengthens the necessary features of the dialogue speech sequence and ensures their integrity; secondly, the reduced-order matrix is input to the full connection layer and activation function, the loss value between the predicted client speech emotion and the real client speech emotion is calculated by a combined loss function consisting of a cross-entropy loss function and a center loss function, and the parameters of the speech emotion recognition model are adjusted according to the loss value until the loss value meets a preset condition, yielding a trained speech emotion recognition model, which improves the compactness between layers in the model and subsequently enables high-precision recognition of the client speech emotion; finally, the voice rhythm of the voice to be replied is adjusted according to the client voice emotion, so that different voice rhythms can be adopted for different clients, effectively improving the communication effect between the intelligent customer service and the client. Therefore, the method, device, equipment and storage medium for adjusting the voice rhythm of the intelligent customer service provided by the embodiment of the invention can improve the accuracy of client voice emotion recognition and improve the communication effect between the intelligent customer service and the client.
Drawings
Fig. 1 is a schematic flow chart of an intelligent customer service voice rhythm regulation method according to an embodiment of the present invention;
fig. 2 is a detailed flowchart illustrating a step in an intelligent customer service voice rhythm regulation method according to an embodiment of the present invention;
fig. 3 is a detailed flowchart illustrating a step in an intelligent customer service voice rhythm adjustment method according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of an intelligent customer service voice rhythm adjusting device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of an electronic device implementing an intelligent customer service voice rhythm adjustment method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an intelligent customer service voice rhythm adjusting method. The execution subject of the intelligent customer service voice rhythm adjusting method includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiment of the present application. In other words, the intelligent customer service voice rhythm adjusting method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, which is a schematic flow chart of an intelligent customer service voice rhythm regulation method according to an embodiment of the present invention, in an embodiment of the present invention, the intelligent customer service voice rhythm regulation method includes the following steps S1-S7:
s1, obtaining a conversation voice sequence to be trained of a client, marking the real client voice emotion corresponding to the conversation voice sequence, and constructing a spectrogram of the conversation voice sequence.
In the embodiment of the invention, the conversation voice sequence to be trained of the client is a conversation voice sequence generated in the communication process between the client and the intelligent customer service, and is composed of continuous frame signals, wherein the conversation voice sequence can be obtained through an APP (application) or an enterprise database; the real client voice emotion is obtained by performing voice emotion recognition on the voice sequence to be trained through a preset system and verifying the result, yielding client voice emotion labels with one-hundred-percent accuracy, wherein the real client voice emotion mainly comprises positive emotion (such as interest and liking), neutral emotion (such as calmness and evenness) and negative emotion (such as impatience and dislike); a positive or neutral emotion further indicates that the client is not in a busy state, while a negative emotion indicates that the client is in a busy state.
In the embodiment of the present invention, the spectrogram is a two-dimensional image formed by accumulating frequency domain characteristics of a speech signal of a speech sequence in a time domain, and dynamically displays a variation relationship between a frequency spectrum of the speech sequence and time. The spectrogram comprises spatial characteristic information consisting of corresponding time frequency and energy intensity and time sequence characteristic information changing along with time, different textures are formed according to the shade of the color, and a large amount of individual characteristic information of a dialog person is contained in the textures.
According to the embodiment of the invention, the accuracy of speech emotion recognition of the subsequent dialog speech sequence can be improved by acquiring the dialog speech sequence to be trained of the client, marking the real client speech emotion corresponding to the dialog speech sequence and constructing the spectrogram of the dialog speech sequence.
In an embodiment of the present invention, endpoint detection may be performed on the dialogue voice sequence first, and the disordered and irregular dialogue voice sequence is converted into a regular dialogue voice sequence, where the endpoint detection is to perform signal time domain analysis on the dialogue voice sequence to determine whether the dialogue voice sequence is a voiced segment or an unvoiced segment.
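For illustration, a minimal sketch of such short-time-energy endpoint detection, assuming NumPy; the frame size and energy threshold are illustrative assumptions, not values specified by the invention:

```python
import numpy as np

def voiced_frames(signal, frame_size=400, threshold=1e-3):
    # Split the signal into fixed-size frames (trailing samples are dropped)
    usable = len(signal) // frame_size * frame_size
    frames = np.asarray(signal[:usable], dtype=float).reshape(-1, frame_size)
    # Short-time energy per frame; high energy marks a voiced segment
    energy = (frames ** 2).mean(axis=1)
    return energy > threshold
```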
In the embodiment of the invention, because the conversation voice sequence is a non-stationary time-varying signal carrying various information such as background noise and human voice, after the conversation voice sequence of the client is obtained, the conversation voice sequence can be preprocessed to extract the speech containing only the human voice.
Further, after obtaining the dialog voice sequence to be trained of the client, the method further comprises:
and carrying out pre-emphasis operation on the conversation voice sequence, framing the conversation voice sequence after pre-emphasis by adopting a windowing method, screening out background sound in the conversation voice sequence, obtaining the conversation voice sequence to be trained only containing human voice, and reducing the interference of the background sound.
In an embodiment of the invention, performing the pre-emphasis operation can enhance the high-frequency resolution of the voice data.
Preferably, the windowing method comprises: hamming windowing.
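For illustration, a minimal sketch of the pre-emphasis and Hamming-window framing described above, assuming NumPy; the 0.97 pre-emphasis coefficient and the frame/hop sizes are common defaults, not values specified by the invention:

```python
import numpy as np

def preemphasize_and_frame(signal, alpha=0.97, frame_size=400, hop=160):
    signal = np.asarray(signal, dtype=float)
    # Pre-emphasis boosts high frequencies: y[n] = x[n] - alpha * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    window = np.hamming(frame_size)  # Hamming windowing before framing
    frames = [emphasized[i:i + frame_size] * window
              for i in range(0, len(emphasized) - frame_size + 1, hop)]
    return np.array(frames)
```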
Further, the constructing of the spectrogram of the dialogue voice sequence comprises:
extracting frame frequency of the dialogue voice sequence, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values; deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values; and summing the plurality of target frame frequency values to obtain the spectrogram.
Wherein the preset Fourier formula is as follows:

S(k) = \sum_{n=0}^{N-1} s(n) e^{-j 2\pi n k / N}, \quad k = 0, 1, \ldots, N-1

wherein S(k) represents the frame frequency obtained by performing a fast Fourier transform on the audio signal s(n) of the dialogue voice sequence, k represents the index of a frame point included in S(k), and N represents the size of the frame.

Assuming a sampling frequency of f_s, the framing signal of the dialogue voice sequence is as follows:

T_s = \frac{n}{f_s}

wherein T_s represents the time corresponding to frame point n in the audio signal s(n), f_s represents the sampling frequency, and n represents the number of frame points included in the signal s(n).

The frequency at point k can be obtained by the following equation:

f(k) = \frac{k \cdot f_s}{N}

wherein S(k) is the plurality of initial frame frequency values, k represents the index of a frame point in S(k), and N represents the size of the frame; the repeated frequency values in S(k) are removed, and the remaining values are the target frame frequency values.

Each frequency corresponding to k is then summed by the following formula to obtain the spectrogram:

F = \sum_{k} f(k)
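For illustration, a minimal sketch of the framing-and-FFT computation described above, assuming NumPy; the sampling rate and frame size are illustrative assumptions. Keeping only the first half of each real-signal spectrum corresponds to removing the repeated (conjugate-symmetric) frequency values:

```python
import numpy as np

def frame_frequencies(signal, fs=16000, frame_size=512):
    signal = np.asarray(signal, dtype=float)
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    # Fast Fourier transform of each frame: the initial frame frequency values
    spectra = [np.abs(np.fft.fft(frame)) for frame in frames]
    # Bin k corresponds to f(k) = k * fs / N; a real signal's spectrum is
    # conjugate-symmetric, so the first half holds the target frequency values
    half = frame_size // 2
    freqs = np.arange(half) * fs / frame_size
    return freqs, np.array([s[:half] for s in spectra])
```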
and S2, performing feature extraction on the spectrogram by using the depth separable convolution layer in the pre-constructed speech emotion recognition model to obtain a feature vector set.
In this embodiment of the present invention, the pre-constructed speech emotion recognition model may be a DenseNet (densely connected convolutional network), where the speech emotion recognition model includes: a depth separable convolutional layer, a global pooling layer, and a combined activation function layer.
In the embodiment of the invention, the Depthwise Separable Convolution (DSC) layer mainly serves to reduce the number of model parameters and improve the model operation rate while ensuring the accuracy of feature extraction.
According to the embodiment of the invention, the speech spectrogram is subjected to feature extraction by using the depth separable convolution layer in the pre-constructed speech emotion recognition model to obtain the feature vector set, so that the operation is reduced and the feature extraction efficiency is improved while the feature extraction accuracy is ensured.
As an embodiment of the present invention, referring to fig. 2, the extracting features of the spectrogram by using the depth separable convolution layer in the pre-constructed speech emotion recognition model to obtain a feature vector set includes the following steps S21-S22:
s21, performing convolution operation on the spectrogram by utilizing the depth convolution in the depth separable convolution layer to obtain initial spectrogram characteristics;
and S22, combining the initial spectrogram features by using point-by-point convolution in the depth separable convolution layer to obtain a feature vector set.
The deep convolution mainly has the advantages that the spectrogram of each input channel is subjected to independent convolution, so that spectrogram features of different positions can be extracted, and the integrity of the spectrogram features is ensured; the point-by-point convolution mainly has the effect of extracting spectrogram information on the same spatial position by using different channels to further obtain more complete global information.
For example, given a spectrogram of 5x5 pixels with 3 channels, the convolution operation performed by the depthwise convolution's 3x3 kernels (one per channel) extracts an initial spectrogram feature of 3x3x3; then, 4 pointwise convolution kernels of 1x1x3 are used to carry out a weighted combination of the initial spectrogram features, yielding a 3x3x4 feature vector set as global feature information. Compared with a traditional convolutional layer, the difference lies in the number of parameters involved: the traditional convolution involves F = 4x3x3x3 = 108 parameters, while the depthwise separable convolution involves P = (3x3x3) + (1x1x3x4) = 39 parameters, about 1/3 of the traditional convolution, so the operation rate is greatly improved.
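For illustration, a minimal sketch of this depthwise separable convolution and its parameter count, assuming PyTorch; the channel and kernel sizes follow the 3-channel, 3x3-kernel example above:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels=3, out_channels=4, kernel_size=3):
        super().__init__()
        # Depthwise: one kernel per input channel (groups = in_channels)
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution that combines the channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

dsc = DepthwiseSeparableConv()
std = nn.Conv2d(3, 4, 3, bias=False)  # traditional convolutional layer
print(sum(p.numel() for p in dsc.parameters()))  # 27 + 12 = 39 parameters
print(sum(p.numel() for p in std.parameters()))  # 4*3*3*3 = 108 parameters
```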
And S3, performing reduced order sampling on the feature vector set by using a global pooling layer in the speech emotion recognition model to obtain a reduced order matrix.
In the embodiment of the invention, the feature vector set is subjected to reduced-order sampling by utilizing the global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix, the feature vector set can be subjected to average region division through different pooling layers, and the average value of the sum of all pixel values in a region is taken to replace the pixel value in the region, so that the main features extracted in the feature vector set are not lost, and the dimension reduction operation can be performed by eliminating some useless information in the feature vector set.
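For illustration, a minimal sketch of this reduced-order sampling with global average pooling, assuming PyTorch; the tensor shape is an illustrative assumption:

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(1)        # one average value per feature map
features = torch.randn(1, 4, 3, 3)    # feature vector set from the DSC layer
reduced = pool(features).flatten(1)   # shape (1, 4): the reduced-order matrix
```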
And S4, inputting the reduced matrix to a full connection layer in the speech emotion recognition model, and outputting the predicted client speech emotion of the dialogue speech sequence by using an activation function in the full connection layer.
In the embodiment of the invention, the activation function activates the reduced-order matrix to output the predicted client voice emotion of the dialogue voice sequence; outputting the predicted client voice emotion in this way facilitates the subsequent comparison with the real client voice emotion, thereby improving the accuracy of the voice emotion recognition model.
As an embodiment of the present invention, the inputting the reduced order matrix into the activation function in the speech emotion recognition model to output the predicted speech emotion of the client of the conversational speech sequence includes:
carrying out full connection operation on the reduced order matrix by utilizing the full connection layer to obtain a full connection matrix; and calculating the full-connection matrix by using the activation function to obtain the predicted speech emotion of the client.
Preferably, the activation function may be a Sigmoid function.
In an embodiment of the present invention, the predicted speech emotion of the client can be obtained by the following activation function formula:
f(x) = \frac{1}{1 + e^{-x}}

wherein f(x) is the predicted client speech emotion, x is the full-connection matrix, and e is the base of the natural logarithm.
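For illustration, a minimal sketch of the full-connection operation followed by the Sigmoid activation, assuming PyTorch; the input dimension and the three emotion classes are illustrative assumptions:

```python
import torch
import torch.nn as nn

fc = nn.Linear(4, 3)                    # full-connection operation
reduced = torch.randn(1, 4)             # reduced-order matrix
predicted = torch.sigmoid(fc(reduced))  # f(x) = 1 / (1 + e^(-x)) per class
```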
S5, calculating loss values of the predicted client speech emotion and the real client speech emotion by using a combined loss function in the speech emotion recognition model, and adjusting parameters of the speech emotion recognition model according to the loss values until the loss values meet preset conditions to obtain the trained speech emotion recognition model.
In the embodiment of the invention, the combined loss function can be formed by combining a Cross Entropy Loss function and a Center Loss function; its main function is to improve the compactness between layers in the speech emotion recognition model so as to realize high-precision recognition of the client speech emotion. The main function of the Center Loss function is to provide a class center for each recognized client speech emotion class, so that each sample participating in training is drawn close to the center of its own class to achieve a clustering effect. The preset condition may be set according to the actual model training scenario; for example, the preset condition may be that the loss value is smaller than a preset threshold.
In an embodiment of the present invention, the calculating the loss values of the predicted speech emotion of the client and the real speech emotion of the client by using the combined loss function in the speech emotion recognition model includes:
calculating a loss value for the predicted customer speech emotion and the real customer speech emotion using a combined loss function of:
L = L_s + \gamma L_c

L_s = -\sum_{i=1}^{k} y'_i \log(y_i)

L_c = \frac{1}{2} \sum_{i=1}^{m} \lVert x_i - c_{y_i} \rVert_2^2

wherein L represents the loss value, L_s represents the cross-entropy loss value, L_c represents the center loss value, \gamma represents the balance coefficient between the cross-entropy loss value and the center loss value, k represents the number of predicted client speech emotion classes, y_i represents the ith predicted client speech emotion, y'_i represents the ith real client speech emotion, m represents the number of training samples, x_i represents the feature of the ith sample, and c_{y_i} denotes the class center of the y_i-th class features.
In an optional embodiment of the present invention, adjusting the parameters of the speech emotion recognition model according to the loss value may be implemented by returning a stop-training instruction when the loss value is smaller than the preset threshold, and obtaining the trained speech emotion recognition model according to that instruction. For example, in a training model constructed based on a programming language, the loss value is checked against the preset threshold every 10 steps, and the accuracy is tested once every 100 steps; if no improvement is observed after 1000 steps of training, training is stopped in advance to obtain the trained speech emotion recognition model.
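For illustration, a minimal sketch of this periodic loss check with early stopping; the model, data loader, criterion, optimizer and threshold are illustrative stand-ins, not components specified by the invention:

```python
def train_with_early_stopping(model, loader, criterion, optimizer,
                              threshold=0.01, patience=1000):
    best_loss, steps_since_improved = float("inf"), 0
    for step, (speech, labels) in enumerate(loader):
        loss = criterion(model(speech), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % 10 == 0:  # check the loss value every 10 steps
            if loss.item() < best_loss:
                best_loss, steps_since_improved = loss.item(), 0
        steps_since_improved += 1
        # Stop early once the loss meets the preset condition, or when no
        # improvement has been seen for `patience` steps
        if best_loss < threshold or steps_since_improved >= patience:
            break
    return model
```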
And S6, recognizing the obtained dialogue voice sequence to be recognized by using the trained voice emotion recognition model to obtain the client voice emotion of the dialogue voice sequence.
In the embodiment of the invention, the dialogue voice sequence to be recognized refers to the voice data produced by the client during a dialogue with the intelligent customer service, and the dialogue voice sequence sent by the client can be obtained through the background of the APP.
In the embodiment of the invention, the trained speech emotion recognition model can be used to recognize the voiced-segment part of the dialogue speech sequence to be recognized and convert it into the client speech emotion.
For example, the dialogue speech sequence to be recognized can be a dialogue speech sequence sent by the client through the communication tool, and the model can be used for recognizing the client speech emotion corresponding to the dialogue speech sequence.
S7, acquiring a text to be replied of the intelligent customer service, performing voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of the customer.
In the embodiment of the invention, the text to be replied refers to the corresponding reply made by the intelligent customer service after recognizing whether the client's voice emotion indicates a busy or non-busy state. For example, when promoting an enterprise product and the client's voice emotion is recognized as busy, the corresponding text to be replied may be "OK, we can schedule the next telephone communication", or "OK, the relevant product information of the enterprise will be sent to your mobile phone later by SMS/WeChat, please check it."
In the embodiment of the present invention, the voice rhythm refers to the rhythm of speech when the intelligent customer service communicates with the client; it mainly includes speech rate and volume, where the speech rate has three levels (fast, normal, slow) and the volume also has three levels (high, normal, low).
According to the embodiment of the invention, the voice to be replied is obtained by carrying out voice synthesis on the text to be replied, and the voice rhythm of the voice to be replied is adjusted according to the voice emotion of the client, so that scene adaptation can be realized by adopting different reply texts and different reply voice rhythms for different clients, and the communication effect between the intelligent customer service and the client is effectively improved.
As an embodiment of the present invention, the performing speech synthesis on the text to be replied to obtain a speech to be replied includes:
converting the text to be replied into a phoneme sequence; sequentially carrying out spectrum processing on the phoneme sequence by utilizing an encoder, a decoder and a residual network of a preset speech synthesis model to obtain a target Mel spectrum; and performing audio conversion on the target Mel spectrum by using a WaveGlow vocoder of the speech synthesis model to obtain the voice to be replied.
In the embodiment of the invention, the phoneme sequence refers to the minimum unit in the field of speech recognition, for example, the Chinese phoneme can be pinyin and tone; the encoder may include a convolutional layer and an output function. The decoder may be an autoregressive recurrent neural network, including attention networks and decoding processing networks. The residual network includes a convolutional layer and a series of functions.
In the embodiment of the invention, the target Mel spectrum is a nonlinear feature that describes frequency as perceived by the human ear, simulating the human auditory process; performing audio conversion on the target Mel spectrum to obtain the voice to be replied can reduce pronunciation errors and improve the accuracy of converting the text to be replied into the voice to be replied.
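For illustration, a minimal sketch of this synthesis pipeline; the g2p converter, acoustic model and vocoder are hypothetical stand-ins, not a real library API:

```python
def synthesize_reply(text, g2p, acoustic_model, vocoder):
    phonemes = g2p(text)            # convert text to a phoneme sequence
    mel = acoustic_model(phonemes)  # encoder/decoder/residual net -> target Mel spectrum
    return vocoder(mel)             # WaveGlow-style vocoder: Mel spectrum -> audio
```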
Further, referring to fig. 3, the adjusting the voice rhythm of the voice to be replied to according to the emotion of the client voice includes the following steps S71-S72:
s71, inputting the voice to be replied into a preset inverse filter according to the voice emotion of the client to obtain a glottal excitation signal;
and S72, adjusting the speech rate and the volume of the glottal excitation signal to obtain the voice rhythm of the voice to be replied.
The preset inverse filter is a filter constructed by using the basic principle of linear prediction analysis; its main functions are to extract the speech rate and volume of the voice to be replied and to filter out useless noise, so that interference from vocal tract response information and noise can be avoided. The glottal excitation signal is the speech-rate and volume signal contained in the voice to be replied.
In an embodiment of the present invention, adjusting the speech rate of the glottal excitation signal may first determine the speech-segment length at the normal speech rate when the voice to be replied is unadjusted, then determine the speech-segment length of the voice to be replied whose speech rate needs adjusting, compare the two lengths to obtain a speech-rate adjustment ratio, and scale the processing length of each frame of the glottal excitation signal by this ratio to obtain the adjusted speech rate.
Similarly, the average amplitude of the normal volume when the voice to be replied is unadjusted is determined, the average amplitude of the voice to be replied whose volume needs adjusting is determined, and the two average amplitudes are compared to obtain an amplitude adjustment ratio; the amplitude of each frame of the glottal excitation signal is then scaled by this ratio to obtain the adjusted volume. Taken together, the voice rhythm of the voice to be replied is obtained.
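For illustration, a minimal sketch of scaling the glottal excitation signal by the speech-rate and amplitude adjustment ratios, assuming NumPy; linear interpolation as the stretching method is an assumption:

```python
import numpy as np

def adjust_rhythm(excitation, rate_ratio, amp_ratio):
    excitation = np.asarray(excitation, dtype=float)
    # Stretch the signal length by the speech-rate adjustment ratio
    n = max(int(len(excitation) * rate_ratio), 1)
    idx = np.linspace(0, len(excitation) - 1, n)
    stretched = np.interp(idx, np.arange(len(excitation)), excitation)
    # Scale the amplitude by the volume adjustment ratio
    return stretched * amp_ratio
```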
In an optional embodiment of the present invention, when there are many uncommon words (such as professional terms and industry-specific vocabulary) in the text to be replied, the rhythm of the voice to be replied can be adjusted to a slow speech rate and high volume; when there are few such words in the text to be replied, the rhythm can be adjusted to a normal/fast speech rate and normal volume.
In another embodiment of the present invention, the voice rhythm of the voice to be replied may also be adjusted in real time according to the feedback of the client: for example, if the feedback of the client includes "please repeat that", the voice rhythm may be adjusted to a slow speech rate and normal volume, and if the feedback of the client includes "the sound is too low", the voice rhythm may be adjusted to a normal speech rate and high volume.
In the embodiment of the invention, firstly, feature extraction is performed on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set, which reduces computation and improves the efficiency of feature extraction while keeping the accuracy of feature extraction unchanged, and the feature vector set is down-sampled by a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix, which further strengthens the necessary features of the dialogue speech sequence and ensures their integrity; secondly, the reduced-order matrix is input to the full connection layer and activation function, the loss value between the predicted client speech emotion and the real client speech emotion is calculated by a combined loss function consisting of a cross-entropy loss function and a center loss function, and the parameters of the speech emotion recognition model are adjusted according to the loss value until the loss value meets a preset condition, yielding a trained speech emotion recognition model, which improves the compactness between layers in the model and subsequently enables high-precision recognition of the client speech emotion; finally, the voice rhythm of the voice to be replied is adjusted according to the client voice emotion, so that different voice rhythms can be adopted for different clients, effectively improving the communication effect between the intelligent customer service and the client. Therefore, the intelligent customer service voice rhythm adjusting method provided by the embodiment of the invention can improve the accuracy of client voice emotion recognition and improve the communication effect between the intelligent customer service and the client.
Referring to fig. 4, the intelligent customer service voice rhythm adjusting device 100 may be installed in an electronic device. According to the functions realized, the intelligent customer service voice rhythm adjusting device may comprise a dialogue voice sequence acquisition module 101, a feature extraction module 102, a reduced-order sampling module 103, a client speech emotion prediction module 104, a model training completion module 105, a speech emotion recognition module 106 and a voice rhythm adjusting module 107.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the dialogue voice sequence acquisition module 101 is configured to acquire a dialogue voice sequence to be trained of a client, mark a real client voice emotion corresponding to the dialogue voice sequence, and construct a spectrogram of the dialogue voice sequence.
In the embodiment of the invention, the conversation voice sequence to be trained of the client is a conversation voice sequence generated in the communication process between the client and the intelligent customer service, and is composed of continuous frame signals, wherein the conversation voice sequence can be obtained through an APP (application) or an enterprise database; the real client voice emotion is obtained by performing voice emotion recognition on the voice sequence to be trained through a preset system and verifying the result, yielding client voice emotion labels with one-hundred-percent accuracy, wherein the real client voice emotion mainly comprises positive emotion (such as interest and liking), neutral emotion (such as calmness and evenness) and negative emotion (such as impatience and dislike); a positive or neutral emotion further indicates that the client is not in a busy state, while a negative emotion indicates that the client is in a busy state.
In the embodiment of the present invention, the spectrogram is a two-dimensional image formed by accumulating frequency domain characteristics of a speech signal of a speech sequence in a time domain, and dynamically displays a variation relationship between a frequency spectrum of the speech sequence and time. The spectrogram comprises spatial characteristic information consisting of corresponding time frequency and energy intensity and time sequence characteristic information changing along with time, different textures are formed according to the shade of the color, and a large amount of individual characteristic information of a dialog person is contained in the textures.
According to the embodiment of the invention, the accuracy of speech emotion recognition of the conversation speech sequence in the follow-up process can be improved by acquiring the conversation speech sequence to be trained of the client, marking the real client speech emotion corresponding to the conversation speech sequence and constructing the spectrogram of the conversation speech sequence.
In an embodiment of the present invention, endpoint detection may be performed on the dialogue voice sequence first, and the disordered and irregular dialogue voice sequence is converted into a regular dialogue voice sequence, where the endpoint detection is to perform signal time domain analysis on the dialogue voice sequence to determine whether the dialogue voice sequence is a voiced segment or an unvoiced segment.
In the embodiment of the invention, because the conversation voice sequence is a non-stationary time-varying signal carrying various information such as background noise and human voice, after the conversation voice sequence of the client is obtained, the conversation voice sequence can be preprocessed to extract the speech containing only the human voice.
The dialogue voice sequence acquisition module 101 is further configured to:
after the conversation voice sequence to be trained of the client is obtained, the conversation voice sequence is subjected to pre-emphasis operation, the conversation voice sequence subjected to pre-emphasis is subjected to framing by adopting a windowing method, background sounds in the conversation voice sequence are screened out, the conversation voice sequence to be trained only containing human voices is obtained, and interference of the background sounds is reduced.
In an embodiment of the invention, performing the pre-emphasis operation can enhance the high-frequency resolution of the voice data.
Preferably, the windowing method comprises: hamming windowing.
Further, the constructing of the spectrogram of the dialogue voice sequence comprises:
extracting frame frequency of the dialogue voice sequence, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values; deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values; and summing the plurality of target frame frequency values to obtain the spectrogram.
Wherein the preset Fourier formula is as follows:

S(k) = \sum_{n=0}^{N-1} s(n) e^{-j 2\pi n k / N}, \quad k = 0, 1, \ldots, N-1

wherein S(k) represents the frame frequency obtained by performing a fast Fourier transform on the audio signal s(n) of the dialogue voice sequence, k represents the index of a frame point included in S(k), and N represents the size of the frame.

Assuming a sampling frequency of f_s, the framing signal of the dialogue voice sequence is as follows:

T_s = \frac{n}{f_s}

wherein T_s represents the time corresponding to frame point n in the audio signal s(n), f_s represents the sampling frequency, and n represents the number of frame points included in the signal s(n).

The frequency at point k can be obtained by the following equation:

f(k) = \frac{k \cdot f_s}{N}

wherein S(k) is the plurality of initial frame frequency values, k represents the index of a frame point in S(k), and N represents the size of the frame; the repeated frequency values in S(k) are removed, and the remaining values are the target frame frequency values.

Each frequency corresponding to k is then summed by the following formula to obtain the spectrogram:

F = \sum_{k} f(k)
the feature extraction module 102 is configured to perform feature extraction on the speech spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set.
In this embodiment of the present invention, the pre-constructed speech emotion recognition model may be a DenseNet (densely connected convolutional network), where the speech emotion recognition model includes: a depth separable convolutional layer, a global pooling layer, and a combined activation function layer.
In the embodiment of the invention, the Depthwise Separable Convolution (DSC) layer mainly serves to reduce the number of model parameters and improve the model operation rate while ensuring the accuracy of feature extraction.
According to the embodiment of the invention, the speech spectrogram is subjected to feature extraction by using the depth separable convolution layer in the pre-constructed speech emotion recognition model to obtain the feature vector set, so that the operation is reduced and the feature extraction efficiency is improved while the feature extraction accuracy is ensured.
As an embodiment of the present invention, the feature extraction module 102 performs feature extraction on the speech spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set by performing the following operations, including:
performing convolution operation on the spectrogram by utilizing the depth convolution in the depth separable convolution layer to obtain initial spectrogram characteristics;
and combining the initial spectrogram features by using point-by-point convolution in the depth separable convolution layer to obtain a feature vector set.
The deep convolution mainly has the advantages that the spectrogram of each input channel is subjected to independent convolution, so that spectrogram features of different positions can be extracted, and the integrity of the spectrogram features is ensured; the point-by-point convolution mainly has the effect of extracting spectrogram information on the same spatial position by using different channels to further acquire more complete global information.
For example, given a spectrogram of 5x5 pixels with 3 channels, the convolution operation performed by the depthwise convolution's 3x3 kernels (one per channel) extracts an initial spectrogram feature of 3x3x3; then, 4 pointwise convolution kernels of 1x1x3 are used to carry out a weighted combination of the initial spectrogram features, yielding a 3x3x4 feature vector set as global feature information. Compared with a traditional convolutional layer, the difference lies in the number of parameters involved: the traditional convolution involves F = 4x3x3x3 = 108 parameters, while the depthwise separable convolution involves P = (3x3x3) + (1x1x3x4) = 39 parameters, about 1/3 of the traditional convolution, so the operation rate is greatly improved.
The reduced-order sampling module 103 is configured to perform reduced-order sampling on the feature vector set by using a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix.
In the embodiment of the invention, the feature vector set is subjected to reduced-order sampling by utilizing the global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix, the feature vector set can be subjected to average region division through different pooling layers, and the average value of the sum of all pixel values in a region is taken to replace the pixel value in the region, so that the main features extracted in the feature vector set are not lost, and the dimension reduction operation can be performed by eliminating some useless information in the feature vector set.
And the client speech emotion prediction module 104 is configured to input the reduced order matrix to a full connection layer in the speech emotion recognition model, and output the predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer.
In the embodiment of the invention, the activation function activates the reduced-order matrix to output the predicted client voice emotion of the dialogue voice sequence; outputting the predicted client voice emotion in this way facilitates the subsequent comparison with the real client voice emotion, thereby improving the accuracy of the voice emotion recognition model.
As an embodiment of the present invention, the inputting the reduced order matrix into the activation function in the speech emotion recognition model to output the predicted speech emotion of the client of the conversational speech sequence includes:
carrying out full connection operation on the reduced order matrix by utilizing the full connection layer to obtain a full connection matrix; and calculating the full-connection matrix by using the activation function to obtain the predicted speech emotion of the client.
Preferably, the activation function may be a Sigmoid function.
In an embodiment of the present invention, the predicted speech emotion of the client can be obtained by the following activation function formula:
f(x) = \frac{1}{1 + e^{-x}}

wherein f(x) is the predicted client speech emotion, x is the full-connection matrix, and e is the base of the natural logarithm.
And the model training completion module 105 is configured to calculate a loss value of the predicted speech emotion of the customer and the speech emotion of the real customer by using a combined loss function in the speech emotion recognition model, adjust parameters of the speech emotion recognition model according to the loss value, and obtain a trained speech emotion recognition model until the loss value meets a preset condition.
In the embodiment of the invention, the combined loss function can be formed by combining a Cross Entropy Loss function and a Center Loss function; its main function is to improve the compactness between layers in the speech emotion recognition model so as to realize high-precision recognition of the client speech emotion. The main function of the Center Loss function is to provide a class center for each recognized client speech emotion class, so that each sample participating in training is drawn close to the center of its own class to achieve a clustering effect. The preset condition may be set according to the actual model training scenario; for example, the preset condition may be that the loss value is smaller than a preset threshold.
In an embodiment of the present invention, the model training completion module 105 calculates the loss values of the predicted speech emotion of the customer and the real speech emotion of the customer by using the combined loss function in the speech emotion recognition model by performing the following operations:
calculating a loss value for the predicted customer speech emotion and the real customer speech emotion using a combined loss function of:
L = L_s + γ·L_c

L_s = −Σ_{i=1}^{k} y'_i · log(y_i)

L_c = (1/2) · Σ_{i=1}^{m} ‖x_i − c_{y_i}‖²

wherein L represents the loss value, L_s represents the cross entropy loss value, L_c represents the center loss value, γ represents the balance coefficient between the cross entropy loss value and the center loss value, k represents the number of predicted client speech emotion categories, y_i represents the ith predicted client speech emotion, y'_i represents the ith real client speech emotion, m represents the number of training samples, x_i represents the deep feature of the ith sample, and c_{y_i} represents the class center of the y_i-th class of features.
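A small numpy sketch of this combined loss may look as follows; the balance coefficient value of 0.1 and the per-sample deep-feature interface are illustrative assumptions:

```python
import numpy as np

def combined_loss(y_pred: np.ndarray, y_true: np.ndarray,
                  features: np.ndarray, centers: np.ndarray,
                  labels: np.ndarray, gamma: float = 0.1) -> float:
    """L = L_s + gamma * L_c per the formulas above. gamma = 0.1 and the
    shapes of features/centers are assumptions, not the patent's values."""
    eps = 1e-12                                            # numerical safety
    l_s = -np.sum(y_true * np.log(y_pred + eps))           # cross entropy term
    l_c = 0.5 * np.sum((features - centers[labels]) ** 2)  # center loss term
    return float(l_s + gamma * l_c)
```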
In an optional embodiment of the present invention, adjusting the parameters of the speech emotion recognition model according to the loss value may be performed as follows: when the loss value is smaller than the preset threshold, a loop-stop instruction is returned, and the trained speech emotion recognition model is obtained according to that instruction. For example, in a training model constructed with a programming language, the loss value is checked against the preset threshold every 10 steps, and the accuracy is tested every 100 steps; if the accuracy does not improve after a further 1000 steps of training, the training is stopped early to obtain the trained speech emotion recognition model.
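A sketch of such a check-and-stop schedule is given below; the model interface (train_step, evaluate) and the batch iterator are assumptions made for illustration:

```python
def train_until_converged(model, batches, threshold: float,
                          max_steps: int = 100_000):
    """Illustrative loop matching the schedule described: check the loss
    against the threshold every 10 steps, test every 100 steps, and stop
    early if 1000 further steps bring no improvement."""
    best_loss, best_step = float("inf"), 0
    for step in range(1, max_steps + 1):
        loss = model.train_step(next(batches))
        if step % 10 == 0 and loss < threshold:
            return model                 # loop-stop instruction: converged
        if step % 100 == 0:
            test_loss = model.evaluate()
            if test_loss < best_loss:
                best_loss, best_step = test_loss, step
            elif step - best_step >= 1000:
                break                    # no improvement for 1000 steps
    return model
```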
The speech emotion recognition module 106 is configured to recognize the obtained conversation speech sequence to be recognized by using the trained speech emotion recognition model, so as to obtain a client speech emotion of the conversation speech sequence.
In the embodiment of the invention, the dialogue voice sequence to be recognized refers to the voice data produced by a client during a conversation with the intelligent customer service; the dialogue voice sequence sent by the client can be obtained through the background of the APP.
In the embodiment of the invention, the trained speech emotion recognition model can be used to recognize the voiced segments of the dialogue voice sequence to be recognized and convert them into the client speech emotion.
For example, the dialogue voice sequence to be recognized may be a dialogue voice sequence sent by the client through a communication tool, and the trained model can be used to recognize the client speech emotion corresponding to that sequence.
The voice rhythm adjusting module 107 is configured to obtain a text to be replied of the intelligent customer service, perform voice synthesis on the text to be replied to obtain a voice to be replied, and adjust the voice rhythm of the voice to be replied according to the emotion of the client voice.
In the embodiment of the invention, the text to be replied refers to the reply the intelligent customer service gives after identifying whether the client speech emotion indicates that the client is busy. For example, when an enterprise product is being promoted and the client speech emotion is identified as busy, the corresponding text to be replied may be "OK, may we schedule the next telephone call for a later time?" or "OK, the related product information will be sent to your mobile phone by SMS/WeChat; please check it at your convenience."
In the embodiment of the present invention, the voice rhythm refers to the rhythm with which the intelligent customer service speaks to the client. It mainly comprises speech speed and volume: the speech speed has three levels (fast, normal and slow), and the volume likewise has three levels (high, normal and low).
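One way to represent these levels together with the emotion-dependent reply is a small policy table; the reply templates and the numeric factors below are assumptions made for illustration, not values taken from the patent:

```python
# The three speed and three volume levels defined above, expressed as
# illustrative scaling factors (all values are assumptions).
SPEECH_SPEED = {"fast": 1.25, "normal": 1.0, "slow": 0.8}   # rate multipliers
VOLUME = {"high": 1.5, "normal": 1.0, "low": 0.6}           # amplitude gains

REPLY_POLICY = {
    "busy": ("OK, may we schedule the next call at a convenient time?",
             "normal", "normal"),
    "not_busy": ("Happy to walk you through the product details now.",
                 "normal", "normal"),
}

def plan_reply(emotion: str) -> tuple[str, float, float]:
    text, speed_level, volume_level = REPLY_POLICY[emotion]
    return text, SPEECH_SPEED[speed_level], VOLUME[volume_level]
```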
According to the embodiment of the invention, the voice to be replied is obtained by performing voice synthesis on the text to be replied, and its voice rhythm is adjusted according to the client speech emotion, so that different reply texts and different reply voice rhythms can be adapted to different clients, effectively improving the communication effect between the intelligent customer service and the client.
As an embodiment of the present invention, the voice rhythm adjusting module 107 performs voice synthesis on the text to be replied to obtain the voice to be replied by performing the following operations:
converting the text to be replied into a phoneme sequence; sequentially carrying out spectrum processing on the phoneme sequence by utilizing an encoder, a decoder and a residual error network of a preset speech synthesis model to obtain a target Mel spectrum; and performing audio conversion on the target Mel frequency spectrum by using a WaveGlow vocoder of the voice synthesis model to obtain the voice to be replied.
In the embodiment of the invention, a phoneme is the smallest unit in the field of speech recognition; for Chinese, the phonemes can be pinyin plus tone. The encoder may include a convolution layer and an output function; the decoder may be an autoregressive recurrent neural network comprising an attention network and a decoding processing network; and the residual network includes a convolution layer and a series of functions.
In the embodiment of the invention, the target Mel spectrum is a nonlinear feature that describes the frequency perception of the human ear and simulates the human auditory process. Performing audio conversion on the target Mel spectrum to obtain the voice to be replied reduces pronunciation errors and improves the accuracy of converting the text to be replied into the voice to be replied.
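The pipeline described above can be sketched as follows, assuming Tacotron2-style components; every callable here (text_to_phonemes, encoder, decoder, postnet, waveglow) is a placeholder for a trained component and is an assumption, not the patent's implementation:

```python
import torch

def synthesize(text: str, text_to_phonemes, encoder, decoder, postnet,
               waveglow) -> torch.Tensor:
    """Minimal sketch of text -> phonemes -> mel spectrum -> waveform."""
    phonemes = text_to_phonemes(text)        # e.g. pinyin + tone for Chinese
    with torch.no_grad():
        hidden = encoder(phonemes)           # convolution layers + output function
        coarse_mel = decoder(hidden)         # autoregressive decoding with attention
        target_mel = coarse_mel + postnet(coarse_mel)  # residual-network refinement
        audio = waveglow.infer(target_mel)   # WaveGlow vocoder: mel -> waveform
    return audio
```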
The voice rhythm adjusting module 107 is further configured to adjust the voice rhythm of the voice to be replied according to the client speech emotion by performing the following operations:
inputting the voice to be replied into a preset inverse filter according to the voice emotion of the client to obtain a glottis excitation signal;
and adjusting the speed and the volume of the glottis excitation signal to obtain the voice rhythm of the voice to be replied.
The preset inverse filter is a filter constructed using the basic principle of linear prediction analysis. Its main functions are to extract the speed and volume of the voice to be replied and to filter out useless noise, thereby avoiding interference from vocal tract response information and noise. The glottal excitation signal is the speed-and-volume signal contained in the voice to be replied.
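A standard way to realize such a linear-prediction inverse filter is LPC residual filtering, sketched below; the LPC order of 16 is an illustrative assumption, and this is one plausible construction rather than the patent's specified filter:

```python
import librosa
import numpy as np
from scipy.signal import lfilter

def glottal_excitation(speech: np.ndarray, order: int = 16) -> np.ndarray:
    """Estimate vocal-tract coefficients by linear prediction, then filter
    the speech with A(z) to remove the vocal-tract response, leaving an
    approximate glottal excitation (the prediction residual)."""
    a = librosa.lpc(speech, order=order)   # A(z) = 1 + a1*z^-1 + ... + ap*z^-p
    return lfilter(a, [1.0], speech)       # inverse filter: residual signal
```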
In an embodiment of the present invention, adjusting the speech rate of the glottal excitation signal may proceed as follows: first determine the speech segment length at the normal speech rate of the unadjusted voice to be replied, then determine the speech segment length of the portion whose rate needs adjustment, compare the two to obtain a speech-rate adjustment ratio, and scale the processing length of each frame of the glottal excitation signal by that ratio to obtain the adjusted speech rate.
Similarly, determine the average amplitude at the normal volume of the unadjusted voice to be replied and the average amplitude of the portion whose volume needs adjustment, compare the two to obtain an amplitude adjustment ratio, and scale the amplitude of each frame of the glottal excitation signal by that ratio to obtain the adjusted volume. Together, the adjusted speech rate and volume constitute the voice rhythm of the voice to be replied.
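The per-frame adjustment can be sketched as below; linear interpolation stands in for whatever time-scale modification the patent intends, which is an assumption:

```python
import numpy as np

def adjust_rhythm(glottal_frames: list, speed_ratio: float,
                  amplitude_ratio: float) -> np.ndarray:
    """Stretch each frame of the glottal excitation signal by the
    speech-rate ratio and scale it by the amplitude ratio."""
    adjusted = []
    for frame in glottal_frames:
        frame = np.asarray(frame, dtype=float)
        new_len = max(1, int(round(len(frame) * speed_ratio)))
        positions = np.linspace(0, len(frame) - 1, new_len)
        stretched = np.interp(positions, np.arange(len(frame)), frame)
        adjusted.append(stretched * amplitude_ratio)   # volume adjustment
    return np.concatenate(adjusted)
```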
In an optional embodiment of the present invention, when the text to be replied contains many unusual words (such as professional terms and industry-specific vocabulary), the rhythm of the voice to be replied can be adjusted to slow speech speed and high volume; when it contains few such words, the rhythm can be adjusted to normal/fast speech speed and normal volume.
In another embodiment of the present invention, the voice rhythm of the voice to be replied may also be adjusted in real time according to the client's feedback. For example, if the feedback includes "please repeat that", the rhythm may be adjusted to slow speed and normal volume; if the feedback includes "the sound is too quiet", the rhythm may be adjusted to normal speed and high volume.
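These two rules can be expressed as a simple function; the term list, threshold and keyword matching below are all assumptions made for illustration:

```python
def rhythm_from_context(reply_text: str, feedback: str,
                        unusual_terms: set) -> tuple:
    """Rule-of-thumb sketch of the adjustments just described."""
    speed, volume = "normal", "normal"
    if sum(term in reply_text for term in unusual_terms) >= 3:
        speed, volume = "slow", "high"        # many unusual words
    if "repeat" in feedback:
        speed, volume = "slow", "normal"      # client asked for a repeat
    if "too quiet" in feedback:
        speed, volume = "normal", "high"      # client cannot hear clearly
    return speed, volume
```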
In the embodiment of the invention, firstly, feature extraction is performed on the spectrogram using the depth separable convolution layer in the pre-constructed speech emotion recognition model to obtain a feature vector set, which preserves the accuracy of feature extraction while reducing computation and improving efficiency; the feature vector set is then reduced-order sampled by the global pooling layer to obtain a reduced-order matrix, which further strengthens the necessary features of the conversational speech sequence and preserves their integrity. Secondly, the reduced-order matrix is passed through the full connection layer and its activation function to obtain the predicted client speech emotion, the loss value between the predicted and real client speech emotions is calculated with the combined loss function composed of a cross entropy loss function and a center loss function, and the parameters of the speech emotion recognition model are adjusted according to the loss value until it meets the preset condition, yielding a trained model; this improves the compactness between layers in the model and enables subsequent high-precision recognition of the client speech emotion. Finally, the voice rhythm of the voice to be replied is adjusted according to the client speech emotion, so that different voice rhythms can be adapted to different clients, effectively improving the communication effect between the intelligent customer service and the client. Therefore, the intelligent customer service voice rhythm adjusting device provided by the embodiment of the invention can improve the accuracy of client speech emotion recognition and the communication effect between the intelligent customer service and the client.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the method for adjusting the speech rhythm of the intelligent customer service according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a smart customer service voice tempo adjustment program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, such as flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), magnetic memory, a local disk, or an optical disk. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the electronic device. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as the code of an intelligent customer service voice rhythm adjustment program, but also to temporarily store data that has been output or will be output.
The processor 10 may in some embodiments be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the Control Unit of the electronic device; it connects the various components of the electronic device through various interfaces and lines, and executes the functions and processes the data of the electronic device by running or executing the programs or modules stored in the memory 11 (e.g., the intelligent customer service voice rhythm adjustment program) and calling the data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, and the like. The communication bus 12 is arranged to enable connection communication between the memory 11 and the at least one processor 10. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 5 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the electronic device, and may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which is generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a Display (Display), an input unit (such as a Keyboard (Keyboard)), and optionally, a standard wired interface, or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The intelligent customer service voice rhythm adjustment program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs which, when executed in the processor 10, can implement:
acquiring a conversation voice sequence to be trained of a client, marking a real client voice emotion corresponding to the conversation voice sequence, and constructing a spectrogram of the conversation voice sequence;
performing feature extraction on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set;
performing reduced-order sampling on the feature vector set by using a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix;
inputting the reduced matrix into a full connection layer in the speech emotion recognition model, and outputting the predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer;
calculating loss values of the predicted client speech emotion and the real client speech emotion by using a combined loss function in the speech emotion recognition model, and adjusting parameters of the speech emotion recognition model according to the loss values until the loss values meet preset conditions to obtain a trained speech emotion recognition model;
recognizing the obtained dialogue voice sequence to be recognized by using the trained voice emotion recognition model to obtain the client voice emotion of the dialogue voice sequence;
the method comprises the steps of obtaining a text to be replied of an intelligent customer service, carrying out voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of a customer.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, if the integrated module/unit of the electronic device is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer-readable medium. The computer-readable medium may be non-volatile or volatile, and may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive (U disk), a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a Read-Only Memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
acquiring a conversation voice sequence to be trained of a client, marking a real client voice emotion corresponding to the conversation voice sequence, and constructing a spectrogram of the conversation voice sequence;
performing feature extraction on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set;
performing reduced-order sampling on the feature vector set by using a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix;
inputting the reduced matrix into a full connection layer in the speech emotion recognition model, and outputting the predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer;
calculating loss values of the predicted customer speech emotion and the real customer speech emotion by using a combined loss function in the speech emotion recognition model, and adjusting parameters of the speech emotion recognition model according to the loss values until the loss values meet preset conditions to obtain a trained speech emotion recognition model;
recognizing the obtained dialogue voice sequence to be recognized by using the trained voice emotion recognition model to obtain the client voice emotion of the dialogue voice sequence;
the method comprises the steps of obtaining a text to be replied of an intelligent customer service, carrying out voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of a customer.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided by the present invention, it should be understood that the disclosed media, devices, apparatuses and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An intelligent customer service voice rhythm adjusting method is characterized by comprising the following steps:
acquiring a conversation voice sequence to be trained of a client, marking a real client voice emotion corresponding to the conversation voice sequence, and constructing a spectrogram of the conversation voice sequence;
performing feature extraction on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set;
performing reduced-order sampling on the feature vector set by using a global pooling layer in the speech emotion recognition model to obtain a reduced-order matrix;
inputting the reduced matrix into a full connection layer in the speech emotion recognition model, and outputting the predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer;
calculating loss values of the predicted client speech emotion and the real client speech emotion by using a combined loss function in the speech emotion recognition model, and adjusting parameters of the speech emotion recognition model according to the loss values until the loss values meet preset conditions to obtain a trained speech emotion recognition model;
recognizing the obtained dialogue voice sequence to be recognized by using the trained voice emotion recognition model to obtain the client voice emotion of the dialogue voice sequence;
the method comprises the steps of obtaining a text to be replied of an intelligent customer service, carrying out voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of a customer.
2. The method for adjusting the rhythm of an intelligent customer service voice as claimed in claim 1, wherein the extracting features of the spectrogram by using a depth separable convolution layer in a pre-constructed voice emotion recognition model to obtain a feature vector set comprises:
performing convolution operation on the spectrogram by utilizing the depth convolution in the depth separable convolution layer to obtain initial spectrogram characteristics;
and combining the initial spectrogram features by using point-by-point convolution in the depth separable convolution layer to obtain a feature vector set.
3. The intelligent customer service voice rhythm adjusting method of claim 1, wherein said calculating a loss value of said predicted customer speech emotion and said real customer speech emotion using a combined loss function in said speech emotion recognition model comprises:
calculating a loss value for the predicted customer speech emotion and the real customer speech emotion using a combined loss function of:
L = L_s + γ·L_c

L_s = −Σ_{i=1}^{k} y'_i · log(y_i)

L_c = (1/2) · Σ_{i=1}^{m} ‖x_i − c_{y_i}‖²

wherein L represents the loss value, L_s represents the cross entropy loss value, L_c represents the center loss value, γ represents the balance coefficient between the cross entropy loss value and the center loss value, k represents the number of predicted customer speech emotion categories, y_i represents the ith predicted customer speech emotion, y'_i represents the ith real customer speech emotion, m represents the number of training samples, x_i represents the deep feature of the ith sample, and c_{y_i} represents the class center of the y_i-th class of features.
4. The method for adjusting the rhythm of the speech of the intelligent customer service according to claim 1, wherein the step of performing speech synthesis on the text to be replied to obtain the speech to be replied comprises the following steps:
converting the text to be replied into a phoneme sequence;
sequentially carrying out spectrum processing on the phoneme sequence by utilizing an encoder, a decoder and a residual error network of a preset speech synthesis model to obtain a target Mel spectrum;
and performing audio conversion on the target Mel frequency spectrum by using a WaveGlow vocoder of the voice synthesis model to obtain the voice to be replied.
5. The intelligent customer service voice rhythm adjusting method of claim 1, wherein the adjusting the voice rhythm of the voice to be replied according to the customer speech emotion comprises:
inputting the voice to be replied into a preset inverse filter according to the voice emotion of the client to obtain a glottis excitation signal;
and adjusting the speed and the volume of the glottis excitation signal to obtain the voice rhythm of the voice to be replied.
6. The intelligent customer service speech tempo adjustment method of claim 1, wherein said inputting said reduced-order matrix into an activation function in said speech emotion recognition model to output a predicted speech emotion of a customer of said conversational speech sequence comprises:
carrying out full connection operation on the reduced order matrix by utilizing the full connection layer to obtain a full connection matrix;
and calculating the full-connection matrix by using the activation function to obtain the predicted speech emotion of the client.
7. The intelligent customer service voice rhythm adjustment method of claim 1, wherein said constructing a spectrogram of said conversational voice sequence comprises:
extracting frame frequency of the dialogue voice sequence, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values;
deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values;
and summing the plurality of target frame frequency values to obtain the spectrogram.
8. An intelligent customer service voice rhythm adjusting device, characterized in that the device comprises:
the dialogue voice sequence acquisition module is used for acquiring a dialogue voice sequence to be trained of a client, marking the real client voice emotion corresponding to the dialogue voice sequence and constructing a spectrogram of the dialogue voice sequence;
the feature extraction module is used for performing feature extraction on the spectrogram by using a depth separable convolution layer in a pre-constructed speech emotion recognition model to obtain a feature vector set;
the reduced order sampling module is used for carrying out reduced order sampling on the feature vector set by utilizing a global pooling layer in the speech emotion recognition model to obtain a reduced order matrix;
the client speech emotion prediction module is used for inputting the reduced matrix to a full connection layer in the speech emotion recognition model and outputting predicted client speech emotion of the conversation speech sequence by using an activation function in the full connection layer;
the model training completion module is used for calculating loss values of the predicted speech emotion of the client and the real speech emotion of the client by using a combined loss function in the speech emotion recognition model, adjusting parameters of the speech emotion recognition model according to the loss values, and obtaining a trained speech emotion recognition model until the loss values meet preset conditions;
the speech emotion recognition module is used for recognizing the obtained conversation speech sequence to be recognized by using the trained speech emotion recognition model to obtain the speech emotion of the client of the conversation speech sequence;
and the voice rhythm adjusting module is used for acquiring a text to be replied of the intelligent customer service, performing voice synthesis on the text to be replied to obtain a voice to be replied, and adjusting the voice rhythm of the voice to be replied according to the voice emotion of the customer.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the intelligent customer service voice tempo adjustment method according to any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the intelligent customer service speech tempo adjustment method according to any one of claims 1 to 7.
CN202210439847.4A 2022-04-25 2022-04-25 Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium Pending CN114842880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210439847.4A CN114842880A (en) 2022-04-25 2022-04-25 Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210439847.4A CN114842880A (en) 2022-04-25 2022-04-25 Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114842880A true CN114842880A (en) 2022-08-02

Family

ID=82565085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210439847.4A Pending CN114842880A (en) 2022-04-25 2022-04-25 Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114842880A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457969A (en) * 2022-09-06 2022-12-09 平安科技(深圳)有限公司 Speech conversion method, apparatus, computer device and medium based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination