WO2021179424A1 - Speech enhancement method combining AI model, system, electronic device and medium - Google Patents

Speech enhancement method combining AI model, system, electronic device and medium

Info

Publication number
WO2021179424A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
speech
signal
training
noise ratio
Prior art date
Application number
PCT/CN2020/088399
Other languages
English (en)
French (fr)
Inventor
康力
叶顺舟
陆成
巴莉芳
Original Assignee
紫光展锐(重庆)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 紫光展锐(重庆)科技有限公司 filed Critical 紫光展锐(重庆)科技有限公司
Publication of WO2021179424A1 publication Critical patent/WO2021179424A1/zh

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain
    • G10L21/0264 Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques characterised by the type of extracted parameters
    • G10L25/21 Speech or voice analysis techniques in which the extracted parameters are power information
    • G10L25/27 Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques using neural networks

Definitions

  • the invention belongs to the technical field of speech enhancement, and in particular relates to a speech enhancement method, system, electronic device and medium combined with an AI model.
  • in noise, the voice trigger detection function and the automatic speech detection function both suffer an increased misrecognition rate and a decreased recognition rate, making them difficult to use.
  • the purpose of speech enhancement is to separate clean speech signals from noisy speech.
  • the obtained speech signal makes calls clearer and more intelligible, so communication between people is more efficient. It can also help a virtual assistant better understand the user's intent and improve the user experience.
  • speech enhancement has been researched for decades and is widely used in communication, security, home and other scenarios. Compared with microphone-array technology, single-channel speech enhancement has a very wide range of application scenarios: on the one hand, it has low cost and is more flexible and convenient to use; on the other hand, it cannot use spatial information such as the angle of arrival, so processing complex scenes, especially non-stationary noise scenes, is very difficult.
  • the traditional processing method of speech enhancement is based on statistical analysis of the speech signal and the noise signal. Once statistical characteristics deviate from expectations, the enhancement effect weakens: either noise-reduction performance decreases or speech distortion increases.
  • the traditional single-channel speech enhancement technology is based on two assumptions. One is that the non-stationarity of the noise signal is weaker than that of the speech signal, and the other is that the amplitude of the noise signal and the speech signal meets the Gaussian distribution. Based on these assumptions, referring to Figure 1, the traditional single-channel speech enhancement method is divided into two steps, one is noise power spectrum estimation, and the other is speech enhancement gain calculation.
  • the noise power spectrum estimation estimates the noise that may be contained in the current noisy speech signal, and updates the noise power spectrum.
  • the gain calculation part estimates the prior signal-to-noise ratio based on the noise power spectrum and calculates the gain.
  • the input noisy speech signal is multiplied by the calculated gain, and the enhanced speech signal is obtained.
  • the traditional method assumes that the noise signal and the speech signal conform to the Gaussian distribution. Based on this assumption, the Bayesian posterior probability formula can be used to calculate the probability of speech existence, which is a posterior probability. Then use the speech existence probability to estimate the noise power spectrum, thus completing the noise estimation.
  • this noise power can be used to estimate the prior signal-to-noise ratio and calculate the gain.
  • a priori SNR estimation methods include the decision-directed method (DD), cepstrum smoothing, the improved decision-directed method, and so on.
  • OMLSA: optimally modified log-spectral amplitude estimation, a gain-calculation method.
  • the traditional processing method of speech enhancement is based on statistical analysis of the speech signal and the noise signal. These statistical analyses are mainly used to estimate the speech existence probability. Once statistical features deviate from expectations, for example with some non-stationary noise, the speech enhancement effect decreases.
  • the technical problem to be solved by the present invention is to overcome the disadvantage of poor speech enhancement effect in the prior art, and to provide a speech enhancement method, system, electronic device and medium combined with an AI model.
  • the present invention provides a voice enhancement method combined with an AI model, which includes the following steps:
  • the speech enhancement gain is obtained according to the prior signal-to-noise ratio.
  • the voice enhancement method combined with the AI model further includes the following steps:
  • the steps of constructing the target AI model include:
  • the pure speech signal and pure noise signal are mixed according to several preset ratios to obtain several noisy speech signals, and a training set is constructed.
  • the input of the training set is the noisy speech signal, and the output of the training set is the actual signal-to-noise ratio of the noisy speech signal;
  • the AI model is trained according to the training set to obtain the target AI model.
  • the mean square error is used as the evaluation index for AI model training.
  • the mean square error is the mean square error of the post-training signal-to-noise ratio and the actual signal-to-noise ratio.
  • the post-training signal-to-noise ratio is obtained according to the training noise power spectrum, the training noise power spectrum is obtained according to the training speech existence probability, and the training speech existence probability is the output of the AI model.
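The training-set construction above can be sketched in a few lines. This is a hedged illustration, not the patent's exact procedure: the linear mixing rule X = a·S + (1 - a)·N with coefficient a in [0, 1], the per-bin amplitude values, and all variable names are assumptions for demonstration.

```python
# Hypothetical sketch: mix clean speech and pure noise at several preset
# ratios; the noisy mixture is the training input and the actual
# signal-to-noise ratio of that mixture is the training target.

def mix(speech, noise, a):
    """Linear mix of spectral amplitudes with coefficient a in [0, 1]."""
    assert 0.0 <= a <= 1.0
    return [a * s + (1.0 - a) * n for s, n in zip(speech, noise)]

def true_snr(speech, noise, a):
    """Actual SNR of the mixture: scaled speech power over scaled noise power."""
    p_s = sum((a * s) ** 2 for s in speech)
    p_n = sum(((1.0 - a) * n) ** 2 for n in noise)
    return p_s / max(p_n, 1e-12)

speech_amp = [1.0, 0.8, 0.6]   # |S[k,n]| per frequency bin (made-up values)
noise_amp = [0.2, 0.3, 0.1]    # |N[k,n]| per frequency bin (made-up values)

# Each training pair: (noisy input, actual-SNR target), one per preset ratio.
training_set = [(mix(speech_amp, noise_amp, a),
                 true_snr(speech_amp, noise_amp, a))
                for a in (0.3, 0.5, 0.7)]
```

Mixing the same speech/noise pair at several ratios yields training examples spanning low to high SNR, which is what lets the model generalize across noise levels.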
  • the step of obtaining speech enhancement gain includes:
  • the speech enhancement gain is obtained according to a preset algorithm, which includes the Wiener, MMSE-STSA, MMSE-LogSTSA, or OMLSA speech enhancement algorithms.
  • the AI model includes LSTM (Long Short-Term Memory) or GRU (Gated Recurrent Unit) networks.
  • the present invention also provides a speech enhancement system, which includes a probability acquisition unit, a noise power acquisition unit, a signal-to-noise ratio acquisition unit, and a gain acquisition unit;
  • the probability acquisition unit is used to acquire the voice existence probability according to the target AI model
  • the noise power obtaining unit is used to obtain the noise power according to the probability of speech existence
  • the signal-to-noise ratio obtaining unit is used to obtain a priori signal-to-noise ratio according to the noise power
  • the gain obtaining unit is used to obtain the speech enhancement gain according to the prior signal-to-noise ratio.
  • the speech enhancement system further includes a model construction unit
  • the model building unit is used to build the target AI model.
  • model building unit is also used for:
  • the pure speech signal and pure noise signal are mixed according to several preset ratios to obtain several noisy speech signals, and a training set is constructed.
  • the input of the training set is the noisy speech signal, and the output of the training set is the actual signal-to-noise ratio of the noisy speech signal;
  • the AI model is trained according to the training set to obtain the target AI model.
  • the mean square error is used as the evaluation index for AI model training.
  • the mean square error is the mean square error of the post-training signal-to-noise ratio and the actual signal-to-noise ratio.
  • the post-training signal-to-noise ratio is obtained according to the training noise power spectrum, the training noise power spectrum is obtained according to the training speech existence probability, and the training speech existence probability is the output of the AI model.
  • the gain obtaining unit is also used for:
  • the speech enhancement gain is obtained according to a preset algorithm, which includes Wiener, MMSE-STSA, MMSE-LogSTSA or OMLSA.
  • the AI model includes LSTM or GRU.
  • the present invention also provides an electronic device including a memory, a processor, and a computer program stored on the memory and capable of running on the processor.
  • the processor executes the computer program to implement the voice enhancement method combined with the AI model of the present invention.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the voice enhancement method combined with the AI model of the present invention are realized.
  • the positive progress effect of the present invention is that the present invention can improve the intelligibility of speech in complex and noisy scenes, and can also improve the performance of keyword wake-up and speech recognition functions.
  • Fig. 1 is a flow chart of a method for speech enhancement in the prior art.
  • Fig. 2 is a flowchart of a speech enhancement method combined with an AI model according to Embodiment 1 of the present invention.
  • Fig. 3 is a flowchart of a speech enhancement method combined with an AI model according to Embodiment 2 of the present invention.
  • FIG. 4 is a flowchart of an optional implementation manner of the speech enhancement method combined with an AI model according to Embodiment 2 of the present invention.
  • FIG. 5 is a schematic structural diagram of a system of a speech enhancement method combined with an AI model according to Embodiment 3 of the present invention.
  • FIG. 6 is a schematic structural diagram of a system of a speech enhancement method combined with an AI model according to Embodiment 4 of the present invention.
  • FIG. 7 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention.
  • This embodiment provides a voice enhancement method combined with an AI model, including the following steps:
  • Step S11 Acquire the speech existence probability according to the target AI model.
  • Step S12 Obtain the noise power according to the speech existence probability.
  • Step S13 Obtain a priori signal-to-noise ratio according to the noise power.
  • Step S14 Obtain the speech enhancement gain according to the prior signal-to-noise ratio.
  • the speech enhancement method combined with the AI model of this embodiment can improve the intelligibility of speech in complex noisy scenes, and can also improve the performance of keyword wake-up and speech recognition functions.
  • this embodiment provides a voice enhancement method combined with an AI model.
  • the voice enhancement method combined with an AI model further includes the following steps:
  • Step S10 Build a target AI model.
  • step S10 first, the pure speech signal and the pure noise signal are mixed according to a number of preset ratios to obtain a number of noisy speech signals, and a training set is constructed.
  • the input of the training set is the noisy speech signal
  • the output of the training set is the actual signal-to-noise ratio of the noisy speech signal.
  • the AI model is trained according to the training set to obtain the target AI model, and the mean square error is used as the evaluation index for AI model training.
  • the mean square error is the mean square error of the post-training signal-to-noise ratio and the actual signal-to-noise ratio.
  • the post-training signal-to-noise ratio is obtained according to the training noise power spectrum
  • the training noise power spectrum is obtained according to the existence probability of the training speech
  • the existence probability of the training speech is the output of the AI model.
  • the pure speech signal and pure noise signal are mixed according to different proportions to obtain noisy speech signals with different signal-to-noise ratios.
  • the noisy speech signal obtained by mixing is used as the input of the training set, and the actual signal-to-noise ratio is used as the target output of the training set.
  • the output of the AI model is the probability of speech existence. According to the probability of speech existence, the noise power spectrum is estimated and the posterior signal-to-noise ratio is further estimated. The estimated posterior signal-to-noise ratio is compared with the calculated actual signal-to-noise ratio, and the mean square error is calculated as an evaluation index for AI model training.
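The evaluation index described above reduces to a mean square error between two SNR tracks. A minimal sketch, with made-up example values; the function name and the specific numbers are illustrative only.

```python
# Training metric: compare the SNR estimated via the model's speech
# existence probability with the actual SNR of the mixture, using MSE.

def mse(estimated, actual):
    """Mean square error between two equal-length SNR sequences."""
    return sum((e - a) ** 2 for e, a in zip(estimated, actual)) / len(actual)

estimated_snr = [2.0, 1.5, 0.5]   # derived from the AI model's output
actual_snr = [2.2, 1.4, 0.4]      # computed when mixing the training data
loss = mse(estimated_snr, actual_snr)   # the evaluation index for training
```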
  • step S10 the signal is processed in the frequency domain.
  • the input pure speech signal s[t] and pure noise signal n[t] both need to be framed and windowed, and then converted to the frequency domain using the Fourier transform.
  • the frequency spectrum S[k,n] of the pure speech signal and the frequency spectrum N[k,n] of the pure noise signal are obtained respectively, where k represents the frequency index and n represents the frame index.
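The framing, windowing, and transform step above can be sketched as follows. This is a toy illustration: the frame length, hop size, and Hann window are assumptions (the patent does not specify them), and a direct DFT stands in for the FFT used in practice.

```python
import cmath
import math

def stft(signal, frame_len=8, hop=4):
    """Frame, window, and Fourier-transform a signal; frames[n][k] is S[k, n]."""
    # Hann window (a common choice; the patent does not name a window).
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / frame_len)
              for i in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + i] * window[i] for i in range(frame_len)]
        # Direct DFT of one windowed frame (an FFT would be used in practice).
        spectrum = [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / frame_len)
                        for t in range(frame_len))
                    for k in range(frame_len)]
        frames.append(spectrum)
    return frames
```

The same routine would be applied to s[t] and n[t] to obtain S[k,n] and N[k,n] before mixing.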
  • noisy speech signals X[k,n] with different signal-to-noise ratios can be obtained.
  • the formula for signal mixing is as follows: X[k,n] = a·S[k,n] + (1 - a)·N[k,n], where the range of the coefficient a is [0,1].
  • the speech power σ_x = E{|a·S[k,n]|²} and the noise power σ_n = E{|(1 - a)·N[k,n]|²} give the actual signal-to-noise ratio ξ_truth[k,n] = σ_x / σ_n.
  • the mixed noisy speech signal X[k,n] is used as the input of the training set, and the actual signal-to-noise ratio ξ_truth[k,n] is used as the target output of the training set.
  • the input of the AI model is the amplitude spectrum of the noisy speech, and the output is the speech existence probability P[k,n]. After the speech existence probability is obtained, the smoothing factor a_n[k,n] is calculated first.
  • a_0 is a fixed value, and the value range is [0.7, 0.95].
  • in step S11, the target AI model parameters obtained by training are first imported.
  • the input of the target AI model is the amplitude spectrum of the noisy speech, and the output is the speech existence probability P[k,n].
  • the amplitude spectrum of the noisy speech is calculated from the input noisy speech signal after framing, windowing, and FFT. After the speech existence probability is obtained, the smoothing factor a_n[k,n] is calculated first, where a_0 is a fixed value and the value range is [0.7, 0.95]:
  • a_n[k,n] = a_0 + (1 - a_0)·P[k,n].
  • in step S12, the noise power spectrum is estimated based on the smoothing factor a_n.
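Steps S11 and S12 can be sketched together: the model's speech existence probability sets the smoothing factor, which drives a recursive noise power spectrum update. The smoothing-factor formula follows the text above; the recursive update rule itself is a common choice and is assumed here, as are the variable names.

```python
def update_noise_psd(prev_noise, power_x, p, a0=0.8):
    """One-bin noise power update driven by speech existence probability p.

    a0 is a fixed value in [0.7, 0.95] (0.8 is an assumed example).
    When p is near 1 (speech present), a_n -> 1 and the estimate freezes;
    when p is near 0 (noise only), the estimate tracks the input power.
    """
    a_n = a0 + (1.0 - a0) * p                       # smoothing factor a_n[k,n]
    return a_n * prev_noise + (1.0 - a_n) * power_x  # recursive PSD update
```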
  • in step S13, the posterior signal-to-noise ratio is first calculated based on the estimated noise power spectrum.
  • a_dd is the smoothing factor of the decision-directed method, with a value range of [0.9, 0.98].
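Step S13 with the decision-directed rule can be sketched as below. The a_dd range follows the text above; the exact form of the recursion (using the previous frame's gain and posterior SNR) is the standard decision-directed estimator and is assumed here, with illustrative names.

```python
def prior_snr_dd(power_x, noise_psd, prev_gain, prev_post_snr, a_dd=0.95):
    """Decision-directed a priori SNR estimate for one frequency bin.

    a_dd is in [0.9, 0.98] (0.95 is an assumed example value).
    """
    # A posteriori SNR: noisy power over estimated noise power.
    post_snr = power_x / max(noise_psd, 1e-12)
    # Blend last frame's enhanced-speech power with the current excess power.
    prior = (a_dd * (prev_gain ** 2) * prev_post_snr
             + (1.0 - a_dd) * max(post_snr - 1.0, 0.0))
    return prior, post_snr
```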
  • in step S14, the calculation of the gain G[k,n] continues.
  • the gain calculation can adopt the Wiener gain algorithm, which requires the least computation; the formula is as follows:
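The Wiener gain referred to above, in its usual a-priori-SNR form; a minimal sketch.

```python
def wiener_gain(prior_snr):
    """Wiener gain G = xi / (1 + xi) for a priori SNR xi >= 0."""
    return prior_snr / (1.0 + prior_snr)
```

High SNR drives the gain toward 1 (keep the bin), low SNR toward 0 (suppress it), which is why this is the cheapest reasonable choice.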
  • alternatively, the gain calculation can adopt MMSE-LogSTSA gain estimation combined with the speech existence probability, whose noise-reduction performance is the best; its formula is as follows:
  • G_LSA stands for the MMSE-LogSTSA gain.
  • p_1[k,n] represents the speech existence probability.
  • G_H0 represents the minimum gain threshold for noise suppression in pure-noise segments.
  • G_min represents the minimum gain threshold of overall noise suppression.
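The combination of G_LSA with the speech existence probability described above is commonly realized as a geometric weighting with a gain floor. That specific form is the standard OMLSA construction and is an assumption here, since the patent text does not reproduce the formula; the names mirror the symbols defined above.

```python
def omlsa_gain(g_lsa, p1, g_min=0.05):
    """OMLSA-style gain: weight G_LSA by speech presence p1, floor at G_min.

    g_min = 0.05 is an assumed example for the minimum-gain threshold.
    """
    return max((g_lsa ** p1) * (g_min ** (1.0 - p1)), g_min)
```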
  • MMSE-STSA or the like can be used to calculate the gain.
  • after framing, windowing, and FFT, the input noisy speech signal is multiplied by the gain to obtain the enhanced speech signal Y[k,n]:
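The final multiplication is per frequency bin; a minimal sketch with made-up spectrum values (an inverse FFT plus overlap-add would then return Y[k,n] to the time domain).

```python
def apply_gain(noisy_spectrum, gains):
    """Per-bin multiplication Y[k, n] = G[k, n] * X[k, n] for one frame."""
    return [g * x for g, x in zip(gains, noisy_spectrum)]

# Example: two complex bins of X[k, n] with their computed gains.
enhanced = apply_gain([2.0 + 0j, 1.0 + 1j], [0.5, 0.1])
```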
  • the AI model includes LSTM and GRU, but is not limited to LSTM and GRU.
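As an illustration of why recurrent structures such as GRU suit this frame-by-frame task, here is a single scalar GRU cell in plain Python. The weights are arbitrary placeholders, not trained values, and the scalar shapes are a simplification of a real multi-unit layer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One GRU step for scalar input x and scalar hidden state h."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])       # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])       # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_tilde

# Arbitrary (untrained) weights, purely for demonstration.
weights = {k: 0.5 for k in ("wz", "uz", "bz", "wr", "ur", "br",
                            "wh", "uh", "bh")}

# Running the cell over successive frame features carries memory across
# frames, which is what makes recurrent models a natural fit for speech.
h = 0.0
for x in (0.1, 0.4, 0.9):
    h = gru_cell(x, h, weights)
```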
  • the voice enhancement method combined with the AI model of this embodiment is a single-channel voice enhancement method combined with the AI model, and only requires one channel of voice amplitude spectrum information.
  • the speech enhancement method combined with the AI model of this embodiment can be used in a single-microphone scene, and can also be used in post-processing of a multi-microphone array. Because its hardware conditions are less restricted, the application scenarios are more extensive.
  • the speech enhancement method combined with the AI model of this embodiment uses a neural network to estimate the speech existence probability. After the speech existence probability is obtained, the noise power can be estimated, the a priori signal-to-noise ratio derived, and the output gain calculated. This provides more flexibility for subsequent calculations: for example, the gain can be the Wiener gain or the OMLSA gain, and the corresponding parameters can be set according to the application scenario (both the Wiener gain and the OMLSA gain have parameters that set the degree of noise reduction).
  • the speech enhancement method combined with the AI model of this embodiment uses LSTM and GRU as the structure of the AI model, which is more suitable for time series problems such as speech enhancement, but is not limited to LSTM and GRU.
  • the voice enhancement method combined with the AI model of this embodiment is single-channel voice enhancement, which can be used for single-microphone voice enhancement, and can also be used for the post-processing part of the microphone array.
  • other acousto-electric sensors are also possible, such as bone conduction technology, or a combination of bone conduction and microphones.
  • the speech enhancement method combined with the AI model of this embodiment is used for a priori signal-to-noise ratio calculation.
  • the decision-directed method (DD) is used, but the method is not limited to it.
  • other methods, including variant decision-directed methods and cepstrum-smoothing estimation, are also available.
  • the speech enhancement method combined with the AI model in this embodiment uses G_OMLSA for gain calculation, but it is not limited to G_OMLSA.
  • Other methods including Wiener gain, MMSE-STSA gain, MMSE-LogSTSA gain, and MMSE-STSA gain combined with speech existence probability are available.
  • the reference value range proposed by the speech enhancement method combined with the AI model in this embodiment is based on the empirical values obtained in practice, and these values are not limited in practical applications.
  • the AI models used in the speech enhancement method combined with the AI model in this embodiment are LSTM and GRU, but the method is not limited to these two models. Others, such as DNN (deep neural network), CNN (convolutional neural network), CRNN (convolutional recurrent neural network), or GMM-HMM (a statistical acoustic model), and indeed any machine-learning or deep-learning model that yields the speech existence probability, can be used as the AI model in the speech enhancement method of this embodiment.
  • the speech enhancement system includes a probability acquisition unit 21, a noise power acquisition unit 22, a signal-to-noise ratio acquisition unit 23, and a gain acquisition unit 24.
  • the probability acquisition unit 21 is configured to acquire the voice existence probability according to the target AI model.
  • the noise power obtaining unit 22 is configured to obtain the noise power according to the probability of speech existence.
  • the signal-to-noise ratio obtaining unit 23 is used to obtain a priori signal-to-noise ratio according to the noise power.
  • the gain obtaining unit 24 is used to obtain the speech enhancement gain according to the prior signal-to-noise ratio.
  • the speech enhancement system of this embodiment can improve the intelligibility of speech in complex noisy scenes, and can also improve the performance of keyword wake-up and speech recognition functions.
  • this embodiment provides a speech enhancement system.
  • the speech enhancement system further includes a model construction unit 25; the model construction unit 25 is used to construct a target AI model.
  • the model construction unit 25 mixes the pure speech signal and the pure noise signal according to several preset ratios to obtain several noisy speech signals, and constructs a training set.
  • the input of the training set is the noisy speech signal
  • the output of the training set is the actual signal-to-noise ratio of the noisy speech signal
  • the AI model is trained according to the training set to obtain the target AI model, and the mean square error is used as the evaluation index for AI model training; the mean square error is that of the post-training signal-to-noise ratio against the actual signal-to-noise ratio
  • the post-training signal-to-noise ratio is obtained according to the training noise power spectrum
  • the training noise power spectrum is obtained according to the training speech existence probability
  • the training speech existence probability is the output of the AI model.
  • the model construction unit 25 constructs the target AI model
  • the signal is processed in the frequency domain.
  • the input pure speech signal s[t] and pure noise signal n[t] both need to be framed and windowed, and then converted to the frequency domain using Fourier transform.
  • the frequency spectrum S[k,n] of the pure speech signal and the frequency spectrum N[k,n] of the pure noise signal are obtained respectively, where k represents the frequency index and n represents the frame index.
  • noisy speech signals X[k,n] with different signal-to-noise ratios can be obtained.
  • the formula for signal mixing is as follows: X[k,n] = a·S[k,n] + (1 - a)·N[k,n], where the range of the coefficient a is [0,1].
  • the speech power σ_x = E{|a·S[k,n]|²} and the noise power σ_n = E{|(1 - a)·N[k,n]|²} give the actual signal-to-noise ratio ξ_truth[k,n] = σ_x / σ_n.
  • the mixed noisy speech signal X[k,n] is used as the input of the training set, and the actual signal-to-noise ratio ξ_truth[k,n] is used as the target output of the training set.
  • the input of the AI model is the amplitude spectrum of the noisy speech, and the output is the speech existence probability P[k,n]. After the speech existence probability is obtained, the smoothing factor a_n[k,n] is calculated first.
  • a_0 is a fixed value, and the value range is [0.7, 0.95].
  • the estimated posterior signal-to-noise ratio is compared with the calculated actual signal-to-noise ratio, and the mean square error is calculated as the evaluation index for AI model training:
  • the probability acquisition unit 21 acquires the voice existence probability according to the target AI model.
  • the input of the target AI model is the amplitude spectrum of the noisy speech, and the output is the speech existence probability P[k,n].
  • the amplitude spectrum of the noisy speech is calculated from the input noisy speech signal after framing, windowing, and FFT.
  • after the speech existence probability is obtained, the smoothing factor a_n[k,n] is calculated first, where a_0 is a fixed value and the value range is [0.7, 0.95]:
  • a_n[k,n] = a_0 + (1 - a_0)·P[k,n].
  • the noise power acquisition unit 22 estimates the noise power spectrum based on the smoothing factor a_n; the formula is as follows:
  • the signal-to-noise ratio obtaining unit 23 is used to obtain a priori signal-to-noise ratio according to the noise power.
  • the signal-to-noise ratio acquisition unit 23 first calculates the posterior signal-to-noise ratio according to the estimated noise power spectrum
  • a_dd is the smoothing factor of the decision-directed method, with a value range of [0.9, 0.98].
  • the gain obtaining unit 24 is configured to obtain the speech enhancement gain according to the prior signal-to-noise ratio, and continue to calculate the gain G[k,n].
  • the gain calculation can adopt the Wiener gain algorithm, which requires the least computation; the formula is as follows:
  • alternatively, the gain calculation can adopt MMSE-LogSTSA gain estimation combined with the speech existence probability, whose noise-reduction performance is the best; its formula is as follows:
  • G_LSA stands for the MMSE-LogSTSA gain.
  • p_1[k,n] represents the speech existence probability.
  • G_H0 represents the minimum gain threshold for noise suppression in pure-noise segments.
  • G_min represents the minimum gain threshold of overall noise suppression.
  • MMSE-STSA or the like can be used to calculate the gain.
  • after framing, windowing, and FFT, the input noisy speech signal is multiplied by the gain to obtain the enhanced speech signal Y[k,n]:
  • the AI model includes LSTM and GRU, but is not limited to LSTM and GRU.
  • the speech enhancement system of this embodiment is a single-channel speech enhancement system, and only needs one channel of speech amplitude spectrum information.
  • the speech enhancement system of this embodiment can be used in a single-microphone scenario, and can also be used in post-processing of a multi-microphone array. Because its hardware conditions are less restricted, the application scenarios are more extensive.
  • the speech enhancement system of this embodiment uses a neural network to estimate the speech existence probability. After the speech existence probability is obtained, the noise power can be estimated, the a priori signal-to-noise ratio derived, and the output gain calculated. This provides more flexibility for subsequent calculations: for example, the gain can be the Wiener gain or the OMLSA gain, and the corresponding parameters can be set according to the application scenario (both the Wiener gain and the OMLSA gain have parameters that set the degree of noise reduction).
  • the speech enhancement system of this embodiment uses LSTM and GRU as the structure of the AI model, which is more suitable for time series problems such as speech enhancement, but is not limited to LSTM and GRU.
  • the voice enhancement system of this embodiment is a single-channel voice enhancement, which can be used for single-microphone voice enhancement, and can also be used for the post-processing part of the microphone array.
  • other acousto-electric sensors are also possible, such as bone conduction technology, or a combination of bone conduction and microphones.
  • the speech enhancement system of this embodiment uses the decision-directed method (DD) to calculate the a priori signal-to-noise ratio, but it is not limited to the decision-directed method.
  • other methods, including variant decision-directed methods and cepstrum-smoothing estimation, are also available.
  • the speech enhancement system of this embodiment uses G_OMLSA for gain calculation, but it is not limited to G_OMLSA.
  • Other methods including Wiener gain, MMSE-STSA gain, MMSE-LogSTSA gain, and MMSE-STSA gain combined with speech existence probability are available.
  • the range of reference values proposed by the speech enhancement system of this embodiment is based on empirical values obtained in practice, and these values are not limited in practical applications.
  • the AI models used by the speech enhancement system of this embodiment are LSTM and GRU, but they are not limited to these two models.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by this embodiment.
  • the electronic device includes a memory, a processor, and a computer program stored on the memory and running on the processor.
  • the processor executes the program, the speech enhancement method combined with the AI model of Embodiment 1 or Embodiment 2 is implemented.
  • the electronic device 30 shown in FIG. 7 is only an example, and should not bring any limitation to the function and application scope of the embodiment of the present invention.
  • the electronic device 30 may be in the form of a general-purpose computing device, for example, it may be a server device.
  • the components of the electronic device 30 may include, but are not limited to: the above-mentioned at least one processor 31, the above-mentioned at least one memory 32, and a bus 33 connecting different system components (including the memory 32 and the processor 31).
  • the bus 33 includes a data bus, an address bus, and a control bus.
  • the memory 32 may include a volatile memory, such as a random access memory (RAM) 321 and/or a cache memory 322, and may further include a read-only memory (ROM) 323.
  • RAM random access memory
  • ROM read-only memory
  • the memory 32 may also include a program/utility tool 325 having a set of (at least one) program module 324.
  • The program module 324 includes, but is not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the processor 31 executes various functional applications and data processing by running a computer program stored in the memory 32, such as the voice enhancement method combined with an AI model in Embodiment 1 of the present invention.
  • the electronic device 30 may also communicate with one or more external devices 34 (such as keyboards, pointing devices, etc.). This communication can be performed through an input/output (I/O) interface 35.
  • The electronic device 30 can also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 36. As shown in the figure, the network adapter 36 communicates with the other modules of the electronic device 30 through the bus 33.
  • This embodiment provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the voice enhancement method combined with the AI model of Embodiment 1 or Embodiment 2 are implemented.
  • The readable storage medium may more specifically include, but is not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • The present invention can also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps of the speech enhancement method combined with the AI model of Embodiment 1 or Embodiment 2.
  • The program code for carrying out the present invention can be written in any combination of one or more programming languages, and it can be executed entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on a remote device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

A speech enhancement method combined with an AI model, and a corresponding system, electronic device and medium. The speech enhancement method combined with an AI model includes the following steps: obtaining a speech presence probability from a target AI model (S11); deriving the noise power from the speech presence probability (S12); deriving the a priori signal-to-noise ratio from the noise power (S13); and deriving the speech enhancement gain from the a priori signal-to-noise ratio (S14). The method can improve speech intelligibility in complex noisy scenarios and can also improve the performance of keyword wake-up and speech recognition functions.

Description

Speech enhancement method, system, electronic device and medium combined with an AI model
This application claims priority to the Chinese patent application 2020101737400 with a filing date of 2020/3/13. The full text of the above Chinese patent application is incorporated herein by reference.
Technical Field
The present invention belongs to the technical field of speech enhancement, and in particular relates to a speech enhancement method, system, electronic device and medium combined with an AI model.
Background Art
When a person is on a call in a noisy environment, for example in a car, on a street or in a café, the environmental noise distracts the user at the far end and makes understanding difficult, so that communication does not go smoothly. In similar scenarios, if a virtual assistant is used, both its keyword wake-up (voice trigger detection) function and its automatic speech recognition function suffer from increased false-recognition rates and decreased recognition rates, making them difficult to use.
The purpose of speech enhancement is to separate the clean speech signal from noisy speech. The resulting speech signal makes calls clearer and more intelligible, making communication between people more efficient. It also helps virtual assistants better understand the user's intent and improves the user experience. Speech enhancement has been researched for decades and is widely used in communication, security, smart-home and other scenarios. Compared with microphone-array techniques, single-channel speech enhancement has a very wide range of application scenarios. On the one hand, single-channel speech enhancement is low-cost and more flexible and convenient to use. On the other hand, single-channel speech enhancement cannot exploit spatial information such as the angle of arrival, so complex scenarios, especially non-stationary noise scenarios, are very difficult to handle.
Traditional speech enhancement methods are built on the statistical analysis of the speech signal and the noise signal. Once statistical characteristics that do not match expectations are encountered, the effect of speech enhancement weakens: either the noise-reduction performance drops, or the speech distortion increases.
Traditional single-channel speech enhancement rests on two assumptions: first, the noise signal is less non-stationary than the speech signal; second, the amplitudes of both the noise signal and the speech signal follow Gaussian distributions. Based on these assumptions, referring to FIG. 1, the traditional single-channel speech enhancement method consists of two steps: noise power spectrum estimation and speech enhancement gain calculation. Noise power spectrum estimation estimates the noise that the current noisy speech signal may contain and updates the noise power spectrum. The gain-calculation part estimates the a priori signal-to-noise ratio from the noise power spectrum and computes the gain. Multiplying the input noisy speech signal by the computed gain yields the enhanced speech signal.
When computing the speech presence probability, traditional methods assume that the noise signal and the speech signal follow Gaussian distributions. Based on this assumption, the speech presence probability, which is a posterior probability, can be computed using the Bayesian posterior probability formula. The speech presence probability is then used to estimate the noise power spectrum, which completes the noise estimation.
In the gain-calculation part, this noise power can be used to estimate the a priori signal-to-noise ratio and compute the gain. The a priori signal-to-noise ratio can be estimated with the decision-directed (DD) method, cepstral smoothing, improved decision-directed methods, and so on. The gain can be computed in several ways: Wiener filtering, minimum mean-square error short-time spectral amplitude estimation (MMSE-STSA), log-domain minimum mean-square error estimation (MMSE-LogSTSA), and optimally-modified log-spectral amplitude estimation (OMLSA).
Finally, multiplying the input noisy speech signal by this gain yields the enhanced speech signal. Traditional speech enhancement methods are built on the statistical analysis of the speech signal and the noise signal, which is mainly used to estimate the speech presence probability. Once statistical characteristics that do not match expectations are encountered, for example some non-stationary noises, the effect of speech enhancement degrades.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the defect of poor speech enhancement in the prior art and to provide a speech enhancement method, system, electronic device and medium combined with an AI model.
The present invention solves the above technical problem through the following technical solutions:
The present invention provides a speech enhancement method combined with an AI model, including the following steps:
obtaining a speech presence probability from a target AI (artificial intelligence) model;
deriving the noise power from the speech presence probability;
deriving the a priori signal-to-noise ratio from the noise power;
deriving the speech enhancement gain from the a priori signal-to-noise ratio.
Preferably, before the step of obtaining a speech presence probability from a target AI model, the speech enhancement method combined with an AI model further includes the following step:
constructing the target AI model.
Preferably, the step of constructing the target AI model includes:
mixing a clean speech signal and a pure noise signal at several preset ratios to obtain several noisy speech signals, and constructing a training set, the input of the training set being the noisy speech signals and the output of the training set being the actual signal-to-noise ratios of the noisy speech signals;
training an AI model on the training set to obtain the target AI model, with the mean squared error as the evaluation metric for AI model training; the mean squared error is that between the training a posteriori signal-to-noise ratio and the actual signal-to-noise ratio, where the training a posteriori signal-to-noise ratio is derived from the training noise power spectrum, the training noise power spectrum is derived from the training speech presence probability, and the training speech presence probability is the output of the AI model.
Preferably, the step of deriving the speech enhancement gain includes:
deriving the speech enhancement gain by a preset algorithm, the preset algorithm including Wiener, MMSE-STSA, MMSE-LogSTSA or OMLSA (all speech enhancement algorithms).
Preferably, the AI model includes LSTM (Long Short-Term Memory) or GRU (Gated Recurrent Unit).
The present invention also provides a speech enhancement system, including a probability acquisition unit, a noise power acquisition unit, a signal-to-noise ratio acquisition unit and a gain acquisition unit;
the probability acquisition unit is configured to obtain a speech presence probability from a target AI model;
the noise power acquisition unit is configured to derive the noise power from the speech presence probability;
the signal-to-noise ratio acquisition unit is configured to derive the a priori signal-to-noise ratio from the noise power;
the gain acquisition unit is configured to derive the speech enhancement gain from the a priori signal-to-noise ratio.
Preferably, the speech enhancement system further includes a model construction unit;
the model construction unit is configured to construct the target AI model.
Preferably, the model construction unit is further configured to:
mix a clean speech signal and a pure noise signal at several preset ratios to obtain several noisy speech signals, and construct a training set, the input of the training set being the noisy speech signals and the output of the training set being the actual signal-to-noise ratios of the noisy speech signals;
train an AI model on the training set to obtain the target AI model, with the mean squared error as the evaluation metric for AI model training; the mean squared error is that between the training a posteriori signal-to-noise ratio and the actual signal-to-noise ratio, where the training a posteriori signal-to-noise ratio is derived from the training noise power spectrum, the training noise power spectrum is derived from the training speech presence probability, and the training speech presence probability is the output of the AI model.
Preferably, the gain acquisition unit is further configured to:
derive the speech enhancement gain by a preset algorithm, the preset algorithm including Wiener, MMSE-STSA, MMSE-LogSTSA or OMLSA.
Preferably, the AI model includes LSTM or GRU.
The present invention also provides an electronic device, including a memory, a processor and a computer program stored on the memory and runnable on the processor; when executing the computer program, the processor implements the speech enhancement method combined with an AI model of the present invention.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the speech enhancement method combined with an AI model of the present invention are implemented.
The positive and progressive effect of the present invention is that it can improve speech intelligibility in complex noisy scenarios and can also improve the performance of keyword wake-up and speech recognition functions.
Brief Description of the Drawings
FIG. 1 is a flowchart of a prior-art speech enhancement method.
FIG. 2 is a flowchart of the speech enhancement method combined with an AI model of Embodiment 1 of the present invention.
FIG. 3 is a flowchart of the speech enhancement method combined with an AI model of Embodiment 2 of the present invention.
FIG. 4 is a flowchart of an optional implementation of the speech enhancement method combined with an AI model of Embodiment 2 of the present invention.
FIG. 5 is a schematic structural diagram of the system for the speech enhancement method combined with an AI model of Embodiment 3 of the present invention.
FIG. 6 is a schematic structural diagram of the system for the speech enhancement method combined with an AI model of Embodiment 4 of the present invention.
FIG. 7 is a schematic structural diagram of the electronic device of Embodiment 5 of the present invention.
Detailed Description of the Embodiments
The present invention is further illustrated below by way of embodiments, but is not thereby limited to the scope of the described embodiments.
Embodiment 1
This embodiment provides a speech enhancement method combined with an AI model, including the following steps:
Step S11: obtain a speech presence probability from a target AI model.
Step S12: derive the noise power from the speech presence probability.
Step S13: derive the a priori signal-to-noise ratio from the noise power.
Step S14: derive the speech enhancement gain from the a priori signal-to-noise ratio.
The speech enhancement method combined with an AI model of this embodiment can improve speech intelligibility in complex noisy scenarios and can also improve the performance of keyword wake-up and speech recognition functions.
Embodiment 2
On the basis of Embodiment 1, this embodiment provides a speech enhancement method combined with an AI model. Referring to FIG. 3, before step S11 the method further includes the following step:
Step S10: construct the target AI model.
In step S10, first, a clean speech signal and a pure noise signal are mixed at several preset ratios to obtain several noisy speech signals, and a training set is constructed whose input is the noisy speech signals and whose output is the actual signal-to-noise ratios of the noisy speech signals. Then, an AI model is trained on the training set to obtain the target AI model, with the mean squared error between the training a posteriori signal-to-noise ratio and the actual signal-to-noise ratio as the evaluation metric; the training a posteriori signal-to-noise ratio is derived from the training noise power spectrum, the training noise power spectrum is derived from the training speech presence probability, and the training speech presence probability is the output of the AI model.
Mixing the clean speech signal and the pure noise signal at different ratios yields noisy speech signals with different signal-to-noise ratios. The mixed noisy speech signals serve as the input of the training set, and the actual signal-to-noise ratios serve as the target output of the training set. The output of the AI model is the speech presence probability; the noise power spectrum is estimated from the speech presence probability, and the a posteriori signal-to-noise ratio is further estimated from it. The estimated a posteriori signal-to-noise ratio is compared with the computed actual signal-to-noise ratio, and their mean squared error is computed and used as the evaluation metric for AI model training.
In specific implementation, in step S10 the signals are processed in the frequency domain. In an optional implementation, referring to FIG. 4, the input clean speech signal s[t] and pure noise signal n[t] are both framed and windowed, and then transformed to the frequency domain by the Fourier transform. This yields the spectrum S[k,n] of the clean speech signal and the spectrum N[k,n] of the pure noise signal, where k is the frequency-bin index and n is the frame index. Mixing at different ratios yields noisy speech signals X[k,n] with different signal-to-noise ratios; the mixing formula is:
X[k,n] = a·S[k,n] + (1−a)·N[k,n],
where the coefficient a lies in the range [0,1].
After mixing, the actual signal-to-noise ratio is:
γ_truth[k,n] = σ_x[k,n] / σ_n[k,n],
where σ_x = E{|X[k,n]|²} is the variance of the noisy speech signal and σ_n = E{|N[k,n]|²} is the variance of the noise signal. The mixed noisy speech signal X[k,n] serves as the input of the training set, and the actual signal-to-noise ratio γ_truth[k,n] serves as the target output of the training set.
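As an illustration of the training-data construction just described, the following minimal Python sketch mixes single-frame clean-speech and noise spectra at a coefficient a and computes a per-bin SNR target. The function names (`mix_frame`, `actual_snr`) are hypothetical, and the expectations E{|X|²} and E{|N|²} are approximated by single-frame magnitudes purely for illustration.

```python
def mix_frame(S, N, a):
    """Mix one frame of spectra: X[k] = a*S[k] + (1 - a)*N[k], with a in [0, 1]."""
    if not 0.0 <= a <= 1.0:
        raise ValueError("mixing coefficient a must lie in [0, 1]")
    return [a * s + (1 - a) * n for s, n in zip(S, N)]

def actual_snr(X, N_mixed, eps=1e-12):
    """Per-bin SNR target: |X[k]|^2 / |N[k]|^2 (single-frame stand-in for the variances)."""
    return [abs(x) ** 2 / (abs(n) ** 2 + eps) for x, n in zip(X, N_mixed)]

S = [4 + 0j, 2j]                     # toy clean-speech spectrum for one frame
N = [1 + 0j, 1j]                     # toy noise spectrum for the same frame
a = 0.5
X = mix_frame(S, N, a)               # [2.5+0j, 1.5j]
N_mixed = [(1 - a) * n for n in N]   # noise component actually present in X
gamma_truth = actual_snr(X, N_mixed)
```

In practice the mixing would be repeated over many frames and many coefficients a to cover a range of SNR conditions.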
The input of the AI model is the magnitude spectrum of the noisy speech, and its output is the speech presence probability P[k,n]. After the speech presence probability is obtained, the smoothing factor a_n[k,n] is computed first:
a_n[k,n] = a_0 + (1−a_0)·P[k,n],
where a_0 is a fixed value in the range [0.7, 0.95].
Then, based on the smoothing factor a_n, the noise power spectrum λ_n[k,n] is estimated recursively:
λ_n[k,n] = a_n[k,n]·λ_n[k,n−1] + (1−a_n[k,n])·|X[k,n]|².
From the estimated noise power spectrum, the a posteriori signal-to-noise ratio is computed:
γ[k,n] = |X[k,n]|² / λ_n[k,n].
The estimated a posteriori signal-to-noise ratio is compared with the computed actual signal-to-noise ratio, and the mean squared error (MSE) is computed as the evaluation metric for AI model training:
MSE = E{ (γ[k,n] − γ_truth[k,n])² }.
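The update equations above can be sketched in a few lines of plain Python. This is a hedged illustration with invented helper names, not the patent's implementation; each function operates on per-bin lists for a single frame.

```python
def smoothing_factor(P, a0=0.9):
    """a_n[k] = a0 + (1 - a0) * P[k]; a0 is a fixed value in [0.7, 0.95]."""
    return [a0 + (1 - a0) * p for p in P]

def update_noise_psd(lam_prev, X, a_n):
    """Recursive smoothing: lam[k] = a_n[k]*lam_prev[k] + (1 - a_n[k])*|X[k]|^2."""
    return [a * lp + (1 - a) * abs(x) ** 2 for a, lp, x in zip(a_n, lam_prev, X)]

def posterior_snr(X, lam, eps=1e-12):
    """gamma[k] = |X[k]|^2 / lam[k]."""
    return [abs(x) ** 2 / (l + eps) for x, l in zip(X, lam)]

def mse(gamma_est, gamma_truth):
    """Training metric: mean squared error between estimated and true SNR."""
    return sum((g - t) ** 2 for g, t in zip(gamma_est, gamma_truth)) / len(gamma_est)
```

When P[k] is close to 1 (speech present) the smoothing factor approaches 1 and the noise estimate is frozen; when P[k] is near 0 the estimate quickly tracks |X[k]|².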
Then, in step S11, the trained parameters of the target AI model are loaded first. The input of the target AI model is the magnitude spectrum of the noisy speech, and its output is the speech presence probability P[k,n]. The magnitude spectrum of the noisy speech is computed from the input noisy speech signal after framing, windowing and FFT. After the speech presence probability is obtained, the smoothing factor a_n[k,n] is computed first, where a_0 is a fixed value in the range [0.7, 0.95]:
a_n[k,n] = a_0 + (1−a_0)·P[k,n].
Then, in step S12, the noise power spectrum λ_n[k,n] is estimated from the smoothing factor a_n as follows:
λ_n[k,n] = a_n[k,n]·λ_n[k,n−1] + (1−a_n[k,n])·|X[k,n]|².
Then, in step S13, the a posteriori signal-to-noise ratio is first computed from the estimated noise power spectrum:
γ[k,n] = |X[k,n]|² / λ_n[k,n].
The a priori signal-to-noise ratio is then obtained with the decision-directed method:
ξ[k,n] = a_dd·G²[k,n−1]·γ[k,n−1] + (1−a_dd)·max(γ[k,n]−1, 0),
where a_dd is the smoothing factor of the decision-directed method, in the range [0.9, 0.98].
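A minimal sketch of the decision-directed update follows, under the assumption that the previous frame's gain and a posteriori SNR are carried between calls; the function name is invented for illustration.

```python
def decision_directed_prior_snr(gain_prev, gamma_prev, gamma, a_dd=0.98):
    """xi[k] = a_dd * G_prev[k]^2 * gamma_prev[k] + (1 - a_dd) * max(gamma[k] - 1, 0)."""
    if not 0.9 <= a_dd <= 0.98:
        raise ValueError("a_dd is expected in [0.9, 0.98]")
    return [a_dd * g ** 2 * gp + (1 - a_dd) * max(gc - 1.0, 0.0)
            for g, gp, gc in zip(gain_prev, gamma_prev, gamma)]
```

The first term recycles the previous frame's clean-speech estimate; the max(...) term keeps the instantaneous SNR estimate from going negative.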
After the a priori signal-to-noise ratio ξ[k,n] has been estimated, the gain G[k,n] is computed in step S14.
In one optional implementation, the gain is computed with the Wiener algorithm, which has the lowest computational cost:
G[k,n] = ξ[k,n] / (1 + ξ[k,n]).
In a second optional implementation, the gain is computed with the MMSE-LogSTSA gain estimate combined with the speech presence probability, which has the best noise-reduction performance:
G[k,n] = {G_LSA[k,n]}^{p₁[k,n]} · G_min^{1−p₁[k,n]},
where G_LSA denotes the MMSE-LogSTSA gain.
In a third optional implementation, the G_OMLSA[k,n] gain is used, which gives the best noise suppression and speech fidelity:
G_OMLSA[k,n] = max( {G_LSA[k,n]}^{p₁[k,n]} · G_H0^{1−p₁[k,n]}, G_min ),
where p₁[k,n] denotes the speech presence probability, G_H0 denotes the minimum-gain threshold for noise suppression in pure-noise segments, and G_min denotes the overall minimum-gain threshold for noise suppression.
G_LSA denotes the MMSE-LogSTSA gain:
G_LSA[k,n] = (ξ[k,n] / (1 + ξ[k,n])) · exp( (1/2) ∫_{v[k,n]}^{∞} (e^{−t}/t) dt ), with v[k,n] = γ[k,n]·ξ[k,n] / (1 + ξ[k,n]).
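The way the three quantities G_LSA, G_H0 and G_min combine can be illustrated with a short sketch. The combination below, a speech-presence-weighted geometric mean of G_LSA and G_H0 with an overall floor G_min, is an assumed standard OMLSA-style form, and the function name and default values are invented for illustration.

```python
def omlsa_gain(g_lsa, p1, g_h0=0.05, g_min=0.02):
    """Assumed form: G[k] = max(G_LSA[k]**p1[k] * G_H0**(1 - p1[k]), G_min)."""
    return [max(gl ** p * g_h0 ** (1.0 - p), g_min) for gl, p in zip(g_lsa, p1)]
```

With p1 = 1 (speech certainly present) the gain reduces to G_LSA; with p1 = 0 it falls to the pure-noise floor G_H0, and it never drops below G_min.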
In other optional implementations, the gain may be computed with MMSE-STSA, etc.
Then the input noisy speech signal, after framing, windowing and FFT, is multiplied by the gain to obtain the enhanced speech signal Y[k,n]:
Y[k,n] = X[k,n]·G[k,n]. In this process, the signal magnitude is multiplied by the gain while the signal phase is unchanged.
Finally, an inverse Fourier transform (IFFT with overlap-add) is applied to synthesize the time-domain signal y[t] (the enhanced speech signal).
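Putting the last steps together, the following stdlib-only sketch applies a per-bin gain to a one-frame spectrum (magnitude scaled, phase untouched) and resynthesizes the frame with a naive inverse DFT. A real implementation would use an FFT with overlap-add; all names here are illustrative.

```python
import cmath

def apply_gain(X, G):
    """Y[k] = X[k] * G[k]: a real-valued gain scales the magnitude, leaves the phase unchanged."""
    return [x * g for x, g in zip(X, G)]

def wiener_gain(xi):
    """G[k] = xi[k] / (1 + xi[k]) from the a priori SNR."""
    return [x / (1.0 + x) for x in xi]

def idft(Y):
    """Naive inverse DFT of one frame (real part only), for illustration."""
    K = len(Y)
    return [sum(Y[k] * cmath.exp(2j * cmath.pi * k * t / K) for k in range(K)).real / K
            for t in range(K)]

X = [4 + 0j, 0j, 0j, 0j]       # toy one-frame spectrum (DC only)
G = wiener_gain([1.0, 1.0, 1.0, 1.0])
y = idft(apply_gain(X, G))     # constant frame at half amplitude
```

Because the gain is real-valued, the phase of each bin of Y[k] equals the phase of X[k], matching the statement above that only the magnitude is modified.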
As an optional implementation, the AI model includes but is not limited to LSTM and GRU.
The speech enhancement method combined with an AI model of this embodiment is single-channel and needs only the speech magnitude-spectrum information of one channel. It can be used both in single-microphone scenarios and in the post-processing of multi-microphone arrays. Because it places few constraints on hardware, its application scenarios are broader.
The method of this embodiment uses a neural network to estimate the speech presence probability. Once the speech presence probability is obtained, the noise power and the a priori signal-to-noise ratio can be estimated, and then the output gain is computed. This provides more flexibility for the subsequent computation: for example, the gain can be a Wiener gain or an OMLSA gain, and parameters can be set according to the application scenario; for instance, both the Wiener gain and the OMLSA gain have parameters that set the degree of noise reduction.
The method of this embodiment uses LSTM and GRU as the AI model structures, which are better suited to time-series problems such as speech enhancement, but it is not limited to LSTM and GRU.
The method of this embodiment is single-channel speech enhancement; it can be used for single-microphone speech enhancement as well as for the post-processing part of a microphone array. Besides microphones, other acoustic-electric sensors are also possible, such as bone conduction, or a combination of bone conduction and a microphone.
For the a priori signal-to-noise ratio calculation, this embodiment uses the decision-directed (DD) method, but is not limited to it. Other methods, including variable decision-directed methods and cepstral smoothing estimation, can also be used.
For the gain calculation, this embodiment uses G_OMLSA, but is not limited to G_OMLSA. Other methods, including the Wiener gain, the MMSE-STSA gain, the MMSE-LogSTSA gain, and the MMSE-STSA gain combined with the speech presence probability, are all available.
The reference value ranges proposed in this embodiment are empirical values obtained in practice; these values are not limiting in practical applications.
The AI models used in this embodiment are LSTM and GRU, but the method is not limited to these two models. Other models, such as DNN, CNN, CRNN and GMM-HMM, and in general any machine-learning or deep-learning model used to obtain the speech presence probability, can serve as the AI model in the speech enhancement method combined with an AI model of this embodiment.
Embodiment 3
This embodiment provides a speech enhancement system. Referring to FIG. 5, the speech enhancement system includes a probability acquisition unit 21, a noise power acquisition unit 22, a signal-to-noise ratio acquisition unit 23 and a gain acquisition unit 24.
The probability acquisition unit 21 is configured to obtain a speech presence probability from a target AI model. The noise power acquisition unit 22 is configured to derive the noise power from the speech presence probability. The signal-to-noise ratio acquisition unit 23 is configured to derive the a priori signal-to-noise ratio from the noise power. The gain acquisition unit 24 is configured to derive the speech enhancement gain from the a priori signal-to-noise ratio.
The speech enhancement system of this embodiment can improve speech intelligibility in complex noisy scenarios and can also improve the performance of keyword wake-up and speech recognition functions.
Embodiment 4
On the basis of Embodiment 3, this embodiment provides a speech enhancement system. Referring to FIG. 6, the speech enhancement system further includes a model construction unit 25; the model construction unit 25 is configured to construct the target AI model.
In specific implementation, the model construction unit 25 mixes a clean speech signal and a pure noise signal at several preset ratios to obtain several noisy speech signals, and constructs a training set whose input is the noisy speech signals and whose output is the actual signal-to-noise ratios of the noisy speech signals; it then trains an AI model on the training set to obtain the target AI model, with the mean squared error between the training a posteriori signal-to-noise ratio and the actual signal-to-noise ratio as the evaluation metric; the training a posteriori signal-to-noise ratio is derived from the training noise power spectrum, the training noise power spectrum is derived from the training speech presence probability, and the training speech presence probability is the output of the AI model.
As an optional implementation, when the model construction unit 25 constructs the target AI model, the signals are processed in the frequency domain. The input clean speech signal s[t] and pure noise signal n[t] are both framed and windowed, and then transformed to the frequency domain by the Fourier transform. This yields the spectrum S[k,n] of the clean speech signal and the spectrum N[k,n] of the pure noise signal, where k is the frequency-bin index and n is the frame index. Mixing at different ratios yields noisy speech signals X[k,n] with different signal-to-noise ratios; the mixing formula is:
X[k,n] = a·S[k,n] + (1−a)·N[k,n],
where the coefficient a lies in the range [0,1].
After mixing, the actual signal-to-noise ratio is:
γ_truth[k,n] = σ_x[k,n] / σ_n[k,n],
where σ_x = E{|X[k,n]|²} is the variance of the noisy speech signal and σ_n = E{|N[k,n]|²} is the variance of the noise signal. The mixed noisy speech signal X[k,n] serves as the input of the training set, and the actual signal-to-noise ratio γ_truth[k,n] serves as the target output of the training set.
The input of the AI model is the magnitude spectrum of the noisy speech, and its output is the speech presence probability P[k,n]. After the speech presence probability is obtained, the smoothing factor a_n[k,n] is computed first:
a_n[k,n] = a_0 + (1−a_0)·P[k,n],
where a_0 is a fixed value in the range [0.7, 0.95].
Then, based on the smoothing factor a_n, the noise power spectrum λ_n[k,n] is estimated recursively:
λ_n[k,n] = a_n[k,n]·λ_n[k,n−1] + (1−a_n[k,n])·|X[k,n]|².
From the estimated noise power spectrum, the a posteriori signal-to-noise ratio is computed:
γ[k,n] = |X[k,n]|² / λ_n[k,n].
The estimated a posteriori signal-to-noise ratio is compared with the computed actual signal-to-noise ratio, and the mean squared error is computed as the evaluation metric for AI model training:
MSE = E{ (γ[k,n] − γ_truth[k,n])² }.
Then, the probability acquisition unit 21 obtains the speech presence probability from the target AI model. The trained parameters of the target AI model are loaded first. The input of the target AI model is the magnitude spectrum of the noisy speech, and its output is the speech presence probability P[k,n]. The magnitude spectrum of the noisy speech is computed from the input noisy speech signal after framing, windowing and FFT. After the speech presence probability is obtained, the smoothing factor a_n[k,n] is computed first, where a_0 is a fixed value in the range [0.7, 0.95]:
a_n[k,n] = a_0 + (1−a_0)·P[k,n].
Then, the noise power acquisition unit 22 derives the noise power from the speech presence probability. The noise power acquisition unit 22 estimates the noise power spectrum λ_n[k,n] from the smoothing factor a_n as follows:
λ_n[k,n] = a_n[k,n]·λ_n[k,n−1] + (1−a_n[k,n])·|X[k,n]|².
Then, the signal-to-noise ratio acquisition unit 23 derives the a priori signal-to-noise ratio from the noise power. The signal-to-noise ratio acquisition unit 23 first computes the a posteriori signal-to-noise ratio from the estimated noise power spectrum:
γ[k,n] = |X[k,n]|² / λ_n[k,n].
The a priori signal-to-noise ratio is then obtained with the decision-directed method:
ξ[k,n] = a_dd·G²[k,n−1]·γ[k,n−1] + (1−a_dd)·max(γ[k,n]−1, 0),
where a_dd is the smoothing factor of the decision-directed method, in the range [0.9, 0.98].
After the a priori signal-to-noise ratio ξ[k,n] has been estimated, the gain acquisition unit 24 derives the speech enhancement gain from it by computing the gain G[k,n].
In one optional implementation, the gain is computed with the Wiener algorithm, which has the lowest computational cost:
G[k,n] = ξ[k,n] / (1 + ξ[k,n]).
In a second optional implementation, the gain is computed with the MMSE-LogSTSA gain estimate combined with the speech presence probability, which has the best noise-reduction performance:
G[k,n] = {G_LSA[k,n]}^{p₁[k,n]} · G_min^{1−p₁[k,n]},
where G_LSA denotes the MMSE-LogSTSA gain.
In a third optional implementation, the G_OMLSA[k,n] gain is used, which gives the best noise suppression and speech fidelity:
G_OMLSA[k,n] = max( {G_LSA[k,n]}^{p₁[k,n]} · G_H0^{1−p₁[k,n]}, G_min ),
where p₁[k,n] denotes the speech presence probability, G_H0 denotes the minimum-gain threshold for noise suppression in pure-noise segments, and G_min denotes the overall minimum-gain threshold for noise suppression.
G_LSA denotes the MMSE-LogSTSA gain:
G_LSA[k,n] = (ξ[k,n] / (1 + ξ[k,n])) · exp( (1/2) ∫_{v[k,n]}^{∞} (e^{−t}/t) dt ), with v[k,n] = γ[k,n]·ξ[k,n] / (1 + ξ[k,n]).
In other optional implementations, the gain may be computed with MMSE-STSA, etc.
Then the input noisy speech signal, after framing, windowing and FFT, is multiplied by the gain to obtain the enhanced speech signal Y[k,n]:
Y[k,n] = X[k,n]·G[k,n]. In this process, the signal magnitude is multiplied by the gain while the signal phase is unchanged.
Finally, an inverse Fourier transform (IFFT with overlap-add) is applied to synthesize the time-domain signal y[t] (the enhanced speech signal).
As an optional implementation, the AI model includes but is not limited to LSTM and GRU.
The speech enhancement system of this embodiment is single-channel and needs only the speech magnitude-spectrum information of one channel. It can be used both in single-microphone scenarios and in the post-processing of multi-microphone arrays. Because it places few constraints on hardware, its application scenarios are broader.
The speech enhancement system of this embodiment uses a neural network to estimate the speech presence probability. Once the speech presence probability is obtained, the noise power and the a priori signal-to-noise ratio can be estimated, and then the output gain is computed. This provides more flexibility for the subsequent computation: for example, the gain can be a Wiener gain or an OMLSA gain, and parameters can be set according to the application scenario; for instance, both the Wiener gain and the OMLSA gain have parameters that set the degree of noise reduction.
The speech enhancement system of this embodiment uses LSTM and GRU as the AI model structures, which are better suited to time-series problems such as speech enhancement, but it is not limited to LSTM and GRU.
The speech enhancement system of this embodiment is single-channel speech enhancement; it can be used for single-microphone speech enhancement as well as for the post-processing part of a microphone array. Besides microphones, other acoustic-electric sensors are also possible, such as bone conduction, or a combination of bone conduction and a microphone.
For the a priori signal-to-noise ratio calculation, the speech enhancement system of this embodiment uses the decision-directed (DD) method, but is not limited to it. Other methods, including variable decision-directed methods and cepstral smoothing estimation, can also be used.
For the gain calculation, the speech enhancement system of this embodiment uses G_OMLSA, but is not limited to G_OMLSA. Other methods, including the Wiener gain, the MMSE-STSA gain, the MMSE-LogSTSA gain, and the MMSE-STSA gain combined with the speech presence probability, are all available.
The reference value ranges proposed by the speech enhancement system of this embodiment are empirical values obtained in practice; these values are not limiting in practical applications.
The AI models used by the speech enhancement system of this embodiment are LSTM and GRU, but the system is not limited to these two models. Other models, such as DNN, CNN, CRNN and GMM-HMM, and in general any machine-learning or deep-learning model used to obtain the speech presence probability, can serve as the AI model in the speech enhancement system of this embodiment.
Embodiment 5
FIG. 7 is a schematic structural diagram of an electronic device provided by this embodiment. The electronic device includes a memory, a processor, and a computer program stored on the memory and runnable on the processor. In an optional implementation, when the processor executes the program, the speech enhancement method combined with an AI model of Embodiment 1 or Embodiment 2 is implemented. The electronic device 30 shown in FIG. 7 is only an example and should not impose any limitation on the function and scope of application of the embodiments of the present invention.
As shown in FIG. 7, the electronic device 30 may take the form of a general-purpose computing device; for example, it may be a server device. The components of the electronic device 30 may include, but are not limited to: the above-mentioned at least one processor 31, the above-mentioned at least one memory 32, and a bus 33 connecting different system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus and a control bus.
The memory 32 may include a volatile memory, such as a random access memory (RAM) 321 and/or a cache memory 322, and may further include a read-only memory (ROM) 323.
The memory 32 may also include a program/utility 325 having a set of (at least one) program modules 324; such program modules 324 include, but are not limited to, an operating system, one or more application programs, other program modules and program data, and each of these examples, or some combination thereof, may include an implementation of a network environment.
The processor 31 executes various functional applications and data processing by running the computer program stored in the memory 32, for example the speech enhancement method combined with an AI model of Embodiment 1 of the present invention.
The electronic device 30 may also communicate with one or more external devices 34 (for example a keyboard, a pointing device, etc.). This communication can take place through an input/output (I/O) interface 35. The electronic device 30 may also communicate with one or more networks (for example a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 36. As shown in the figure, the network adapter 36 communicates with the other modules of the electronic device 30 through the bus 33. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and so on.
It should be noted that, although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided into and embodied by multiple units/modules.
Embodiment 6
This embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the speech enhancement method combined with an AI model of Embodiment 1 or Embodiment 2 are implemented.
The readable storage medium may more specifically include, but is not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the above.
In a possible implementation, the present invention can also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps of the speech enhancement method combined with an AI model of Embodiment 1 or Embodiment 2.
The program code for carrying out the present invention can be written in any combination of one or more programming languages, and it can be executed entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on a remote device.
Although specific implementations of the present invention are described above, those skilled in the art should understand that these are only examples, and that various changes or modifications may be made to these implementations without departing from the principle and essence of the present invention. The protection scope of the present invention is therefore defined by the appended claims.

Claims (12)

  1. A speech enhancement method combined with an AI model, characterized by comprising the following steps:
    obtaining a speech presence probability from a target AI model;
    deriving a noise power from the speech presence probability;
    deriving an a priori signal-to-noise ratio from the noise power;
    deriving a speech enhancement gain from the a priori signal-to-noise ratio.
  2. The speech enhancement method combined with an AI model according to claim 1, characterized in that, before the step of obtaining a speech presence probability from a target AI model, the speech enhancement method combined with an AI model further comprises the following step:
    constructing the target AI model.
  3. The speech enhancement method combined with an AI model according to claim 2, characterized in that the step of constructing the target AI model comprises:
    mixing a clean speech signal and a pure noise signal at several preset ratios to obtain several noisy speech signals, and constructing a training set, wherein the input of the training set is the noisy speech signals and the output of the training set is the actual signal-to-noise ratios of the noisy speech signals;
    training an AI model on the training set to obtain the target AI model, with a mean squared error as the evaluation metric of the AI model training, wherein the mean squared error is the mean squared error between a training a posteriori signal-to-noise ratio and the actual signal-to-noise ratio, the training a posteriori signal-to-noise ratio is derived from a training noise power spectrum, the training noise power spectrum is derived from a training speech presence probability, and the training speech presence probability is the output of the AI model.
  4. The speech enhancement method combined with an AI model according to at least one of claims 1-3, characterized in that the step of deriving a speech enhancement gain comprises:
    deriving the speech enhancement gain by a preset algorithm, the preset algorithm comprising Wiener, MMSE-STSA, MMSE-LogSTSA or OMLSA.
  5. The speech enhancement method combined with an AI model according to claim 3, characterized in that the AI model comprises LSTM or GRU.
  6. A speech enhancement system, characterized by comprising a probability acquisition unit, a noise power acquisition unit, a signal-to-noise ratio acquisition unit and a gain acquisition unit;
    the probability acquisition unit is configured to obtain a speech presence probability from a target AI model;
    the noise power acquisition unit is configured to derive a noise power from the speech presence probability;
    the signal-to-noise ratio acquisition unit is configured to derive an a priori signal-to-noise ratio from the noise power;
    the gain acquisition unit is configured to derive a speech enhancement gain from the a priori signal-to-noise ratio.
  7. The speech enhancement system according to claim 6, characterized in that the speech enhancement system further comprises a model construction unit;
    the model construction unit is configured to construct the target AI model.
  8. The speech enhancement system according to claim 7, characterized in that the model construction unit is further configured to:
    mix a clean speech signal and a pure noise signal at several preset ratios to obtain several noisy speech signals, and construct a training set, wherein the input of the training set is the noisy speech signals and the output of the training set is the actual signal-to-noise ratios of the noisy speech signals;
    train an AI model on the training set to obtain the target AI model, with a mean squared error as the evaluation metric of the AI model training, wherein the mean squared error is the mean squared error between a training a posteriori signal-to-noise ratio and the actual signal-to-noise ratio, the training a posteriori signal-to-noise ratio is derived from a training noise power spectrum, the training noise power spectrum is derived from a training speech presence probability, and the training speech presence probability is the output of the AI model.
  9. The speech enhancement system according to at least one of claims 6-8, characterized in that the gain acquisition unit is further configured to:
    derive the speech enhancement gain by a preset algorithm, the preset algorithm comprising Wiener, MMSE-STSA, MMSE-LogSTSA or OMLSA.
  10. The speech enhancement system according to claim 8, characterized in that the AI model comprises LSTM or GRU.
  11. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the speech enhancement method combined with an AI model according to any one of claims 1-5.
  12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the speech enhancement method combined with an AI model according to any one of claims 1-5.
PCT/CN2020/088399 2020-03-13 2020-04-30 Speech enhancement method and system combined with AI model, and electronic device and medium WO2021179424A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010173740.0A CN111445919B (zh) 2020-03-13 2020-03-13 Speech enhancement method and system combined with AI model, and electronic device and medium
CN202010173740.0 2020-03-13

Publications (1)

Publication Number Publication Date
WO2021179424A1 true WO2021179424A1 (zh) 2021-09-16

Family

ID=71650507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/088399 WO2021179424A1 (zh) 2020-03-13 2020-04-30 Speech enhancement method and system combined with AI model, and electronic device and medium

Country Status (2)

Country Link
CN (1) CN111445919B (zh)
WO (1) WO2021179424A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283793A (zh) * 2021-12-24 2022-04-05 北京达佳互联信息技术有限公司 Voice wake-up method and apparatus, electronic device, medium and program product

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111916101B (zh) * 2020-08-06 2022-01-21 大象声科(深圳)科技有限公司 Deep-learning noise reduction method and system fusing a bone-vibration sensor and dual-microphone signals
CN112349277B (zh) * 2020-09-28 2023-07-04 紫光展锐(重庆)科技有限公司 Feature-domain speech enhancement method combined with an AI model and related products
CN112289337B (zh) * 2020-11-03 2023-09-01 北京声加科技有限公司 Method and apparatus for filtering out residual noise after machine-learning speech enhancement
CN113823312B (zh) * 2021-02-19 2023-11-07 北京沃东天骏信息技术有限公司 Speech enhancement model generation method and apparatus, and speech enhancement method and apparatus
CN113205824B (zh) * 2021-04-30 2022-11-11 紫光展锐(重庆)科技有限公司 Sound signal processing method and apparatus, storage medium, chip and related device
CN115294983B (zh) * 2022-09-28 2023-04-07 科大讯飞股份有限公司 Autonomous mobile device wake-up method, system and base station
CN116403594B (zh) * 2023-06-08 2023-08-18 澳克多普有限公司 Speech enhancement method and apparatus based on a noise update factor

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6782362B1 (en) * 2000-04-27 2004-08-24 Microsoft Corporation Speech recognition method and apparatus utilizing segment models
CN108831499A (zh) * 2018-05-25 2018-11-16 西南电子技术研究所(中国电子科技集团公司第十研究所) Speech enhancement method using the speech presence probability
CN109979478A (zh) * 2019-04-08 2019-07-05 网易(杭州)网络有限公司 Speech noise reduction method and apparatus, storage medium and electronic device
CN110390950A (zh) * 2019-08-17 2019-10-29 杭州派尼澳电子科技有限公司 End-to-end speech enhancement method based on a generative adversarial network
CN110634500A (zh) * 2019-10-14 2019-12-31 达闼科技成都有限公司 A priori signal-to-noise ratio calculation method, electronic device and storage medium
CN110739005A (zh) * 2019-10-28 2020-01-31 南京工程学院 Real-time speech enhancement method for transient noise suppression

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559887B (zh) * 2013-11-04 2016-08-17 深港产学研基地 Background noise estimation method for a speech enhancement system
CN110164467B (zh) * 2018-12-18 2022-11-25 腾讯科技(深圳)有限公司 Speech noise reduction method and apparatus, computing device and computer-readable storage medium
CN109473118B (zh) * 2018-12-24 2021-07-20 思必驰科技股份有限公司 Dual-channel speech enhancement method and apparatus
CN109616139B (zh) * 2018-12-25 2023-11-03 平安科技(深圳)有限公司 Speech signal noise power spectral density estimation method and apparatus
CN110335619A (zh) * 2019-04-30 2019-10-15 同方电子科技有限公司 Speech enhancement algorithm based on a machine communication platform
EP3866165B1 (en) * 2020-02-14 2022-08-17 System One Noc & Development Solutions, S.A. Method for enhancing telephone speech signals based on convolutional neural networks


Also Published As

Publication number Publication date
CN111445919A (zh) 2020-07-24
CN111445919B (zh) 2023-01-20

Similar Documents

Publication Publication Date Title
WO2021179424A1 (zh) Speech enhancement method and system combined with AI model, and electronic device and medium
JP7177167B2 (ja) Method, apparatus and computer program for identifying mixed speech
US7103541B2 (en) Microphone array signal enhancement using mixture models
CN108615535B (zh) Speech enhancement method and apparatus, intelligent speech device and computer device
CN110634497B (zh) Noise reduction method and apparatus, terminal device and storage medium
US7725314B2 (en) Method and apparatus for constructing a speech filter using estimates of clean speech and noise
CN100543842C (zh) Method for background noise suppression based on multiple statistical models and minimum mean square error
CN108417224B (zh) Training and recognition method and system for a bidirectional neural network model
US9520138B2 (en) Adaptive modulation filtering for spectral feature enhancement
US9548064B2 (en) Noise estimation apparatus of obtaining suitable estimated value about sub-band noise power and noise estimating method
WO2022161277A1 (zh) Speech enhancement method, model training method, and related device
US10839820B2 (en) Voice processing method, apparatus, device and storage medium
WO2018086444A1 (zh) Noise suppression signal-to-noise ratio estimation method and user terminal
WO2022183806A1 (zh) Neural-network-based speech enhancement method and apparatus, and electronic device
CN113345460B (zh) Audio signal processing method, apparatus, device and storage medium
Martín-Doñas et al. Dual-channel DNN-based speech enhancement for smartphones
WO2022213825A1 (zh) Neural-network-based end-to-end speech enhancement method and apparatus
US11657828B2 (en) Method and system for speech enhancement
CN112712818A (zh) Speech enhancement method, apparatus and device
CN110992977B (zh) Target sound source extraction method and apparatus
BR112014009647B1 (pt) Noise attenuation apparatus and noise attenuation method
WO2020015546A1 (zh) Far-field speech recognition method, speech recognition model training method, and server
TWI749547B (zh) Speech enhancement system applying deep learning
Razani et al. A reduced complexity MFCC-based deep neural network approach for speech enhancement
Chen Noise reduction of bird calls based on a combination of spectral subtraction, Wiener filtering, and Kalman filtering

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20924177

Country of ref document: EP

Kind code of ref document: A1