Disclosure of Invention
To overcome the defects of the prior art, the present application provides a method and a system for noisy speech emotion recognition.
In a first aspect, the present application provides a method for noisy speech emotion recognition, comprising:
acquiring a noisy speech signal to be recognized;
performing endpoint detection on the noisy speech signal, and obtaining a plurality of voiced speech segments according to the detected endpoints;
performing feature extraction on the voiced speech segments to obtain speech features; and
inputting the speech features into a trained speech emotion recognition model and outputting an emotion type.
In a second aspect, the present application provides a noisy speech emotion recognition system, comprising:
an acquisition module configured to acquire a noisy speech signal to be recognized;
an endpoint detection module configured to perform endpoint detection on the noisy speech signal and obtain a plurality of voiced speech segments according to the detected endpoints;
a feature extraction module configured to perform feature extraction on the voiced speech segments to obtain speech features; and
an output module configured to input the speech features into a trained speech emotion recognition model and output an emotion type.
In a third aspect, the present application further provides an electronic device comprising one or more processors, one or more memories, and one or more computer programs, wherein the processor is connected to the memory and the one or more computer programs are stored in the memory; when the electronic device runs, the processor executes the one or more computer programs stored in the memory so that the electronic device performs the method according to the first aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the beneficial effects of the present application are:
(1) The invention provides a speech endpoint detection method based on the residual conditional entropy difference generated during iteration and applies it effectively to noisy emotional speech recognition. During sample reconstruction, the method computes the conditional entropy between the prediction residual and the previous iteration's signal estimate at each iteration of the Orthogonal Matching Pursuit (OMP) algorithm, and, from the difference of the residual conditional entropy before and after iteration, yields the endpoint detection result of the reconstructed sample at the same time as the reconstruction completes. It thus makes full use of the data generated during sample reconstruction and saves subsequent analysis and processing time; and because it is built on a compressed sensing reconstruction algorithm, it is inherently noise-robust.
(2) The emotional speech component of an emotional video is processed with compressed sensing theory: the discrete cosine transform completes the sparse transformation of the emotional speech, a Gaussian random matrix serves as the observation matrix, the Orthogonal Matching Pursuit (OMP) algorithm serves as the reconstruction algorithm, and a prediction-residual conditional entropy parameter for the compressed sensing reconstruction of emotional speech is proposed.
(3) An analysis approach based on the residual conditional entropy difference before and after OMP reconstruction iterations is proposed.
(4) According to the residual conditional entropy difference and a threshold, the endpoint detection result is given at the same time as sample reconstruction is completed.
(5) Speech emotion recognition of noisy emotional speech test samples is realized based on the endpoint detection result.
(6) The speech signal endpoint detection method using the residual conditional entropy difference is based on compressed sensing theory and completes endpoint detection during sample reconstruction; because noise is not sparse and cannot be reconstructed, the resulting endpoint detection is noise-robust.
(7) During reconstruction, the method decides whether a speech frame is a voiced segment directly from the computed residual conditional entropy difference, without any further processing of the reconstructed speech sample, so its delay is small and it reaches decisions quickly.
(8) Through the computation of information-theoretic parameters, the method mines the data characteristics of the reconstruction process deeply and effectively, makes full use of the data generated during sample reconstruction, and saves computing resources.
(9) The method can be effectively applied to noisy speech emotion recognition.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and "comprising", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
This embodiment provides a method for noisy speech emotion recognition.
As shown in Fig. 1, the method comprises:
S100: acquiring a noisy speech signal to be recognized;
S200: performing endpoint detection on the noisy speech signal, and obtaining a plurality of voiced speech segments according to the detected endpoints;
S300: performing feature extraction on the voiced speech segments to obtain speech features;
S400: inputting the speech features into a trained speech emotion recognition model and outputting an emotion type.
As one or more embodiments, S200 (performing endpoint detection on the noisy speech signal and obtaining a plurality of voiced speech segments according to the detected endpoints) specifically comprises:
S201: performing a sparse transformation on the noisy speech signal to be recognized;
S202: randomly generating a Gaussian random matrix for the sparsely transformed speech signal and taking the Gaussian random matrix as the observation matrix of the speech signal;
S203: based on the observation matrix, performing sample reconstruction with the Orthogonal Matching Pursuit (OMP) algorithm to obtain the endpoint detection result.
Further, S201 (performing a sparse transformation on the noisy speech signal to be recognized) specifically comprises:
performing the sparse transformation on the noisy speech signal to be recognized by means of the discrete cosine transform.
Further, in S202, a Gaussian random matrix is randomly generated for the sparsely transformed speech signal; the entries of the Gaussian random matrix follow a normal distribution with mean 0 and variance 1 (standard deviation 1).
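As a minimal sketch, such an observation matrix can be drawn directly from a standard normal distribution; the dimensions M and N below are illustrative, with M much smaller than N giving the compression:

```python
import numpy as np

N = 256        # frame length (illustrative)
M = 64         # number of measurements; M << N gives compression
rng = np.random.default_rng(0)
Phi = rng.standard_normal((M, N))   # entries ~ N(0, 1): mean 0, variance 1
```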
As one or more embodiments, as shown in Fig. 3, S203 (performing sample reconstruction with the OMP algorithm based on the observation matrix to obtain the endpoint detection result) specifically comprises the following steps (a code sketch of the whole loop follows the list):
S2031: obtaining the speech observation of each frame according to the observation matrix;
S2032: on the first iteration, setting the residual to the speech observation and computing the correlation coefficients between the residual and the sensing matrix; on subsequent iterations, computing the residual between the previous iteration's estimate and the speech observation, and the correlation coefficients between this residual and the sensing matrix;
S2033: finding the atom of the sensing matrix with the largest correlation coefficient and using it to update the support set of the signal reconstruction;
S2034: based on the support set, approximating the observation by least squares to obtain the signal estimate;
S2035: updating the residual and computing the residual conditional entropy;
S2036: judging whether the sparsity condition has been reached; if not, returning to S2032; if it has, computing the residual conditional entropy difference between the first and last iterations and taking the current signal estimate as the reconstructed sample;
S2037: judging whether the residual conditional entropy difference between the first and last iterations exceeds the set threshold; if it does, the current speech frame is regarded as a voiced segment; if it does not, the current speech frame is regarded as a silent segment, giving the endpoint detection result of the current frame;
S2038: obtaining the voiced speech segments in the reconstructed samples based on the endpoint detection results.
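Steps S2031 to S2038 amount to a standard OMP loop with extra entropy bookkeeping. The Python sketch below is one possible realization: the function and variable names are ours, and the closed form used for the conditional entropy assumes a joint-Gaussian model of the residual and the previous estimate (the patent states only that it is the conditional entropy between the two):

```python
import numpy as np

def gaussian_cond_entropy(r, x_est):
    """Estimate sigma_e = H(r | x_est), assuming r and x_est are jointly
    Gaussian: H(X|Y) = 0.5 * ln(2*pi*e * var(X) * (1 - rho^2))."""
    rho = 0.0
    if np.std(r) > 0 and np.std(x_est) > 0:
        rho = np.corrcoef(r, x_est)[0, 1]
    cond_var = max(np.var(r) * (1.0 - rho ** 2), 1e-12)  # guard log(0)
    return 0.5 * np.log(2.0 * np.pi * np.e * cond_var)

def omp_endpoint_detect(y, Theta, K, threshold):
    """OMP reconstruction of one frame with endpoint decision (S2031-S2038).

    y         -- M-dim observation of the frame (y = Phi @ x)
    Theta     -- M x N sensing matrix (Theta = Phi @ Psi)
    K         -- sparsity, i.e. the number of OMP iterations
    threshold -- decision threshold on the entropy difference
    Returns the reconstructed sparse coefficients and the voiced decision.
    """
    M, N = Theta.shape
    support = []                      # indices of the selected atoms
    r = y.copy()                      # S2032: first residual = observation
    x_prev = np.zeros(M)              # previous iteration's estimate
    entropies = []
    for _ in range(K):                # S2036: iterate until sparsity K
        corr = np.abs(Theta.T @ r)    # S2032/S2033: correlations with atoms
        corr[support] = -np.inf       # never reselect an atom
        support.append(int(np.argmax(corr)))
        A = Theta[:, support]         # current support set
        theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # S2034: LSQ fit
        x_est = A @ theta_hat
        r = y - x_est                 # S2035: update the residual
        entropies.append(gaussian_cond_entropy(r, x_prev))
        x_prev = x_est
    alpha_hat = np.zeros(N)
    alpha_hat[support] = theta_hat
    diff = entropies[-1] - entropies[0]   # S2036: last minus first
    return alpha_hat, diff > threshold    # S2037: voiced if above threshold
```

The reconstructed frame itself is then Psi @ alpha_hat, so the endpoint decision indeed falls out of the same loop that reconstructs the sample.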
Further, S2031 (obtaining the speech observation of each frame according to the observation matrix) specifically comprises:
Let one frame of the speech signal be x. The sparse transformation is completed by the discrete cosine transform, so that x = Ψα, where α is the vector of discrete cosine coefficients and Ψ is the sparse basis matrix formed by the DCT bases. The observation is then y = Φx = Θα, where Θ = ΦΨ is the sensing matrix and Φ is the observation matrix.
Further, S2032 (computing the residual between the previous iteration's estimate and the speech observation, and the correlation coefficients between the residual and the sensing matrix) specifically comprises:
The reconstruction residual r_t obtained at the t-th iteration is computed as

r_t = y - A_t \hat{\theta}_t,  with  \hat{\theta}_t = (A_t^T A_t)^{-1} A_t^T y,

where A_t is the support set formed by the atoms of the sensing matrix selected during the first t iterations of the OMP algorithm, \hat{\theta}_t is the estimate computed by the least squares method at the t-th iteration, and y is the observation.
Further, the correlation coefficient between the residual and the sensing matrix is computed as the inner product of the residual with each column vector of the sensing matrix.
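A one-function sketch of this selection criterion (hypothetical names):

```python
import numpy as np

def atom_correlations(Theta, r):
    """Inner products of the residual with every column (atom) of the
    sensing matrix; OMP selects the atom with the largest magnitude."""
    return np.abs(Theta.T @ r)
```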
It should be understood that the sensing matrix is the product of the observation matrix and the sparse basis matrix of the sparse transformation, which ensures that the signal can be sampled and compressed simultaneously.
Further, in S2033, the atom of the sensing matrix with the largest correlation coefficient is found and used to update the support set of the signal reconstruction; the support set is the set of columns selected from the sensing matrix according to the correlation coefficients.
Further, S2035 (updating the residual and computing the residual conditional entropy) specifically comprises:
storing the residual obtained at each iteration and updating the residual; and
computing the residual conditional entropy based on the updated residual.
Further, the residual conditional entropy is computed based on the updated residual. The residual conditional entropy σ_e is the conditional entropy between the residual r_t of the t-th iteration and the previous iteration's signal estimate:

σ_e = H(r_t | \hat{x}_{t-1}),  with  \hat{x}_{t-1} = A_{t-1} (A_{t-1}^T A_{t-1})^{-1} A_{t-1}^T y,

where A_{t-1} is the support set formed by the atoms of the sensing matrix selected during the first t-1 iterations of the OMP algorithm, and \hat{x}_{t-1} is the estimate computed by the least squares method at the (t-1)-th iteration.
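The closed form of σ_e depends on the distribution assumed for the residual. If r_t and \hat{x}_{t-1} are modeled as jointly Gaussian (an assumption on our part), the standard expression for Gaussian conditional entropy gives

H(r_t \mid \hat{x}_{t-1}) = \frac{1}{2}\ln\!\left(2\pi e\,\sigma_{r_t}^{2}\left(1-\rho^{2}\right)\right),

where \sigma_{r_t}^{2} is the variance of the residual entries and ρ is the correlation coefficient between r_t and \hat{x}_{t-1}. This is the form used in the code sketch after S2038 above.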
Further, S2036 (judging whether the sparsity condition has been reached; if not, returning to S2032; if it has, computing the residual conditional entropy difference between the first and last iterations) specifically comprises:
subtracting the residual conditional entropy obtained at the first iteration from the residual conditional entropy obtained at the last iteration to obtain the difference.
Further, the sparsity condition means that, after each iteration during sample reconstruction, the number of completed iterations is compared with the sparsity K to decide whether to terminate: if the iteration count is less than K, iteration continues; otherwise, iteration terminates.
Further, in S300, feature extraction is performed on each voiced speech segment to obtain the speech features. The speech features specifically include: prosodic features (e.g., fundamental frequency, short-time energy, and duration-related features such as sample duration, voiced-segment duration, and speech rate), psychoacoustic features (e.g., the first, second, and third formants), spectral features (e.g., MFCC parameters), and statistical parameters (maximum, minimum, mean) of the above features.
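As an illustration of how such a feature vector might be assembled (librosa is our choice here, not named by the patent; formant extraction is omitted because librosa provides no built-in formant tracker):

```python
import numpy as np
import librosa

def segment_features(seg, sr):
    """Illustrative feature vector for one voiced segment: prosodic
    (F0, short-time energy) and spectral (MFCC) features plus their
    max/min/mean statistics, as listed above."""
    f0 = librosa.yin(seg, fmin=50, fmax=500, sr=sr)        # fundamental freq.
    energy = librosa.feature.rms(y=seg)[0]                 # short-time energy
    mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)   # spectral features
    stats = lambda a: [np.max(a), np.min(a), np.mean(a)]
    feats = stats(f0) + stats(energy)
    for row in mfcc:                                       # stats per MFCC dim
        feats += stats(row)
    return np.asarray(feats)
```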
Further, in S400, the speech features are input into the trained speech emotion recognition model and the emotion type is output. The training of the speech emotion recognition model comprises:
constructing a neural network model, here a convolutional neural network;
constructing a training set comprising speech features of known emotion classes; and
inputting the training set into the neural network model for training, stopping when the loss function reaches its minimum or the set number of iterations is reached, to obtain the trained speech emotion recognition model.
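A minimal training sketch under stated assumptions (the patent specifies only that a convolutional neural network is used; the architecture, optimizer, and PyTorch itself are our illustrative choices):

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Small 1-D CNN over per-segment feature vectors (illustrative)."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(16 * 8, n_classes))

    def forward(self, x):            # x: (batch, n_features)
        return self.net(x.unsqueeze(1))

def train(model, loader, epochs=50):
    """Cross-entropy training, stopping after a set number of epochs;
    the patent's alternative criterion (loss at its minimum) would be an
    early-stopping check added to this loop."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in loader:     # feats: float32, labels: int64
            opt.zero_grad()
            loss = loss_fn(model(feats), labels)
            loss.backward()
            opt.step()
    return model
```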
When compressed sensing is applied to speech signal processing here, the discrete cosine transform is chosen to complete the sparse transformation of the speech signal, a Gaussian random matrix is adopted as the observation matrix, and the Orthogonal Matching Pursuit (OMP) algorithm is adopted as the sample reconstruction algorithm.
The invention provides a speech signal endpoint detection method using the residual conditional entropy difference, based on the prediction residual generated during the iterative execution of OMP. The OMP algorithm is a common algorithm in speech signal reconstruction: at each iteration it computes the residual between the estimate and the observation, and the correlation between the residual and the sensing matrix, and uses them to update the support set of the signal reconstruction until the sparsity condition is reached, whereupon the signal reconstruction is complete. The computation of the residual is a key link in the OMP algorithm; from an information-theoretic perspective, the acquisition of speech information during iteration means a reduction of the residual entropy. The invention therefore introduces the conditional entropy σ_e between the residual of the t-th iteration and the previous iteration's signal estimate to judge the degree to which the speech component has been extracted from the reconstruction residual.
In the OMP algorithm, the reconstruction residual r_t obtained at the t-th iteration is computed as

r_t = y - A_t \hat{\theta}_t,  with  \hat{\theta}_t = (A_t^T A_t)^{-1} A_t^T y,

where A_t is the support set formed by the atoms of the sensing matrix selected during the first t iterations of the OMP algorithm and \hat{\theta}_t is the estimate computed by the least squares method at the t-th iteration.
The residual conditional entropy is

σ_e = H(r_t | \hat{x}_{t-1}),  with  \hat{x}_{t-1} = A_{t-1} (A_{t-1}^T A_{t-1})^{-1} A_{t-1}^T y,

where A_{t-1} is the support set formed by the atoms of the sensing matrix selected during the first t-1 iterations of the OMP algorithm and \hat{x}_{t-1} is the estimate computed by the least squares method at the (t-1)-th iteration.
When the iteration ends, the residual conditional entropy difference between the last and the first iteration is computed, and the endpoint detection result is obtained by threshold comparison.
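The per-frame decision is a single comparison; a sketch with hypothetical names (the threshold is empirical, e.g. the 0.01 used in Fig. 2(c) below):

```python
def is_voiced(entropy_first, entropy_last, threshold=0.01):
    """Voiced if the entropy difference (last iteration minus first,
    as defined in S2036) exceeds the empirical threshold."""
    return (entropy_last - entropy_first) > threshold
```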
Fig. 2(a) shows the speech time-domain waveform obtained when reconstructing a speech sample with the OMP algorithm, Fig. 2(b) shows the time-domain waveform of the noisy speech, and Fig. 2(c) shows the residual conditional entropy difference between the last and first iterations together with the threshold.
As can be seen from the figure, the sample has a strong noise level: the signal-to-noise ratio of the noisy sample is 0 dB and the speech signal is covered by noise. Nevertheless, with this algorithm the residual conditional entropy difference remains stable in a noisy environment and shows good robustness, and the start and end points of the noisy speech can be detected by setting a small threshold.
It can be seen that the residual conditional entropy difference during iteration corresponds well to the active components in the speech sample: the variation of σ_e tracks the positions of the voiced segments (including unvoiced and voiced sounds) in the original waveform, so the start and end points of the reconstructed speech sample can be determined with an empirical threshold condition. Endpoint detection of noisy speech may then be achieved using a lower threshold (e.g., 0.01), as in Fig. 2(c). Moreover, the algorithm yields the endpoints of the reconstructed samples at the same time as it reconstructs them, so no additional endpoint detection algorithm needs to be run on the reconstructed samples.
The overall flow of noisy speech emotion recognition with the speech signal endpoint detection method using the residual conditional entropy difference is shown in Fig. 1. As can be seen from Fig. 1, when the noisy emotional speech is reconstructed, the endpoint detection result of the reconstructed sample is obtained at the same time; subsequent feature extraction and feature learning can then be performed according to this result, and an effective emotion recognition model can be trained with the feature parameter set of the emotional speech, thereby realizing noisy speech emotion recognition.
Example two
This embodiment provides a noisy speech emotion recognition system, comprising:
an acquisition module configured to acquire a noisy speech signal to be recognized;
an endpoint detection module configured to perform endpoint detection on the noisy speech signal and obtain a plurality of voiced speech segments according to the detected endpoints;
a feature extraction module configured to perform feature extraction on the voiced speech segments to obtain speech features; and
an output module configured to input the speech features into a trained speech emotion recognition model and output an emotion type.
It should be noted here that the acquisition module, the endpoint detection module, the feature extraction module, and the output module correspond to steps S100 to S400 of the first embodiment; the modules share the examples and application scenarios realized by the corresponding steps but are not limited to the content disclosed in the first embodiment. It should be noted that the modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
In the foregoing embodiments, the descriptions each have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
The proposed system can be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into modules is merely a logical division, and in an actual implementation there may be other divisions; for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not executed.
Example three
This embodiment also provides an electronic device comprising one or more processors, one or more memories, and one or more computer programs, wherein the processor is connected to the memory and the one or more computer programs are stored in the memory; when the electronic device runs, the processor executes the one or more computer programs stored in the memory so that the electronic device performs the method according to the first embodiment.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include both read-only memory and random access memory and provides instructions and data to the processor; a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software.
The method of the first embodiment may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, the details are not repeated here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Example four
This embodiment also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first embodiment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.