WO2021068338A1 - Artificial intelligence-based speech enhancement method, server and storage medium - Google Patents
Artificial intelligence-based speech enhancement method, server and storage medium
- Publication number
- WO2021068338A1 (PCT/CN2019/118004)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- discriminator
- speech
- generator
- sample
- data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
Definitions
- This application relates to the field of artificial intelligence technology, and in particular to an artificial intelligence-based speech enhancement method, server, and storage medium.
- Speech enhancement mainly aims to remove complex background noise from noisy speech and to improve speech intelligibility without distorting the speech signal.
- Traditional speech enhancement algorithms are mostly based on noise estimation and handle only a single type of noise, so they cannot cope well with speech denoising against complex backgrounds.
- With the rapid development of neural networks, more and more neural network models have been applied to speech enhancement algorithms. However, because the distribution of speech noise is usually complex, existing deep-learning-based speech enhancement methods suffer from unstable model convergence, which leads to poor enhancement results.
- In view of this, this application provides an artificial intelligence-based speech enhancement method, server, and storage medium, the purpose of which is to improve the effect of speech enhancement.
- To achieve the above purpose, this application provides an artificial intelligence-based speech enhancement method, which includes:
- Obtaining step: obtaining a preset number of noisy speech samples and the denoised speech corresponding to each noisy speech sample as training samples, and dividing the training samples into a first data set, a second data set, and a third data set;
- Construction step: constructing a generative adversarial network, the generative adversarial network including at least one generator and one discriminator;
- First training step: inputting the first data set into the discriminator, adjusting the parameters of the discriminator with the objective of minimizing the loss function value of the discriminator, updating the parameters of the discriminator when the loss function value of the discriminator is less than a first preset threshold to obtain a first discriminator, then inputting the noisy speech of the second data set into the generator, inputting the output speech together with that noisy speech into the first discriminator, and updating the parameters of the first discriminator using the back-propagation algorithm;
- Second training step: inputting the noisy speech of the third data set into the generator, inputting the output speech together with that noisy speech into the first discriminator with updated parameters, obtaining the loss function of the generator according to the output of the first discriminator with updated parameters, adjusting the parameters of the generator with the objective of minimizing the loss function value of the generator, updating the parameters of the generator when the loss function value of the generator is less than a second preset threshold, and using the generator with updated parameters as the speech enhancement model; and
- Feedback step: receiving the speech data to be enhanced sent by the user, inputting the speech data to be enhanced into the speech enhancement model, and generating the enhanced speech data and feeding it back to the user.
- To achieve the above purpose, the present application also provides a server, which includes a memory and a processor, wherein the memory stores an artificial intelligence-based speech enhancement program, and when the artificial intelligence-based speech enhancement program is executed by the processor, the following steps are implemented:
- Obtaining step: obtaining a preset number of noisy speech samples and the denoised speech corresponding to each noisy speech sample as training samples, and dividing the training samples into a first data set, a second data set, and a third data set;
- Construction step: constructing a generative adversarial network, the generative adversarial network including at least one generator and one discriminator;
- First training step: inputting the first data set into the discriminator, adjusting the parameters of the discriminator with the objective of minimizing the loss function value of the discriminator, updating the parameters of the discriminator when the loss function value of the discriminator is less than a first preset threshold to obtain a first discriminator, then inputting the noisy speech of the second data set into the generator, inputting the output speech together with that noisy speech into the first discriminator, and updating the parameters of the first discriminator using the back-propagation algorithm;
- Second training step: inputting the noisy speech of the third data set into the generator, inputting the output speech together with that noisy speech into the first discriminator with updated parameters, obtaining the loss function of the generator according to the output of the first discriminator with updated parameters, adjusting the parameters of the generator with the objective of minimizing the loss function value of the generator, updating the parameters of the generator when the loss function value of the generator is less than a second preset threshold, and using the generator with updated parameters as the speech enhancement model; and
- Feedback step: receiving the speech data to be enhanced sent by the user, inputting the speech data to be enhanced into the speech enhancement model, and generating the enhanced speech data and feeding it back to the user.
- To achieve the above purpose, the present application also provides a computer-readable storage medium that includes an artificial intelligence-based speech enhancement program, and when the artificial intelligence-based speech enhancement program is executed by a processor, any step of the artificial intelligence-based speech enhancement method described above can be implemented.
- Compared with the prior art, the artificial intelligence-based speech enhancement method, server, and storage medium proposed in this application acquire noisy speech and its corresponding denoised speech as training samples, construct a generative adversarial network comprising a discriminator and a generator, repeatedly adjust and update the parameters of the discriminator based on the noisy speech and the speech output by the generator to obtain a first discriminator, then derive the loss function of the generator from the first discriminator, and finally adjust the parameters of the generator by minimizing the loss function value of the generator to obtain a speech enhancement model, which is applied to speech data enhancement.
- The generative adversarial network applied in the artificial intelligence-based speech enhancement method provided in this application involves no recursive operations of the kind found in RNNs, so it is more time-efficient and processes data faster than recurrent neural networks, thereby realizing rapid speech enhancement.
- In addition, the generator and discriminator of the above generative adversarial network process the raw audio without manually extracted features; they can also learn speech features from different speakers and different types of noise and combine them into shared parameters, making the system simple and giving it strong generalization ability.
- FIG. 1 is a schematic diagram of a preferred embodiment of the server of this application;
- FIG. 2 is a schematic diagram of modules of a preferred embodiment of the artificial intelligence-based speech enhancement program in FIG. 1;
- FIG. 3 is a flowchart of a preferred embodiment of a voice enhancement method based on artificial intelligence in this application;
- Referring to FIG. 1, it is a schematic diagram of a preferred embodiment of the server 1 of this application.
- the server 1 includes but is not limited to: a memory 11, a processor 12, a display 13, and a network interface 14.
- the server 1 is connected to the network through the network interface 14 to obtain original data.
- The network may be an intranet, the Internet, a Global System for Mobile communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, a telephone network, or another wireless or wired network.
- the memory 11 includes at least one type of readable storage medium
- The readable storage medium includes flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc.
- In some embodiments, the memory 11 may be an internal storage unit of the server 1, for example, a hard disk or memory of the server 1.
- In other embodiments, the memory 11 may also be an external storage device of the server 1, such as a plug-in hard disk equipped on the server 1, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
- the memory 11 may also include both an internal storage unit of the server 1 and an external storage device thereof.
- the memory 11 is generally used to store an operating system and various application software installed on the server 1, for example, the program code of the artificial intelligence-based voice enhancement program 10, etc.
- the memory 11 can also be used to temporarily store various types of data that have been output or will be output.
- the processor 12 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips.
- the processor 12 is generally used to control the overall operation of the server 1, such as performing data interaction or communication-related control and processing.
- the processor 12 is configured to run the program code or process data stored in the memory 11, for example, to run the program code of the artificial intelligence-based speech enhancement program 10, etc.
- the display 13 may be referred to as a display screen or a display unit.
- the display 13 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an organic light-emitting diode (OLED) touch device, and the like.
- the display 13 is used for displaying the information processed in the server 1 and for displaying a visualized work interface, for example, displaying the results of data statistics.
- the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
- the network interface 14 is generally used to establish a communication connection between the server 1 and other electronic devices.
- FIG. 2 only shows the server 1 with components 11-14 and the artificial intelligence-based speech enhancement program 10; however, it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
- the server 1 may also include a user interface.
- the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
- the optional user interface may also include a standard wired interface and a wireless interface.
- the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an organic light-emitting diode (OLED) touch device, and the like.
- the display can also be called a display screen or a display unit as appropriate, and is used to display the information processed in the server 1 and to display a visualized user interface.
- the server 1 may also include a radio frequency (RF) circuit, a sensor, an audio circuit, etc., which are not described here.
- In the above embodiment, when the processor 12 executes the artificial intelligence-based speech enhancement program 10 stored in the memory 11, the following steps may be implemented:
- Obtaining step: obtaining a preset number of noisy speech samples and the denoised speech corresponding to each noisy speech sample as training samples, and dividing the training samples into a first data set, a second data set, and a third data set;
- Construction step: constructing a generative adversarial network, the generative adversarial network including at least one generator and one discriminator;
- First training step: inputting the first data set into the discriminator, adjusting the parameters of the discriminator with the objective of minimizing the loss function value of the discriminator, updating the parameters of the discriminator when the loss function value of the discriminator is less than a first preset threshold to obtain a first discriminator, then inputting the noisy speech of the second data set into the generator, inputting the output speech together with that noisy speech into the first discriminator, and updating the parameters of the first discriminator using the back-propagation algorithm;
- Second training step: inputting the noisy speech of the third data set into the generator, inputting the output speech together with that noisy speech into the first discriminator with updated parameters, obtaining the loss function of the generator according to the output of the first discriminator with updated parameters, adjusting the parameters of the generator with the objective of minimizing the loss function value of the generator, updating the parameters of the generator when the loss function value of the generator is less than a second preset threshold, and using the generator with updated parameters as the speech enhancement model; and
- Feedback step: receiving the speech data to be enhanced sent by the user, inputting the speech data to be enhanced into the speech enhancement model, and generating the enhanced speech data and feeding it back to the user.
- In other embodiments, the artificial intelligence-based speech enhancement program 10 may be divided into multiple modules, and the multiple modules are stored in the memory 11 and executed by the processor 12 to complete this application.
- the module referred to in this application refers to a series of computer program instruction segments that can complete specific functions.
- Referring to FIG. 2, it is a program module diagram of an embodiment of the artificial intelligence-based speech enhancement program 10 in FIG. 1.
- the artificial intelligence-based speech enhancement program 10 can be divided into: an acquisition module 110, a construction module 120, a first training module 130, a second training module 140, and a feedback module 150.
- The obtaining module 110 is used to obtain a preset number of noisy speech samples and the denoised speech corresponding to each noisy speech sample as training samples, and to divide the training samples into a first data set, a second data set, and a third data set.
- In this embodiment, a preset number of noisy speech data and the denoised speech data corresponding to each noisy speech data item can be obtained from a preset third-party speech library as training samples.
- The denoised speech data and the noisy speech data are sampled at 16 kHz, the speech frame length is set to 16 ms, and the speech frame shift is set to 8 ms. It is understandable that the present application does not limit the frame length, the frame shift, or the acoustic features contained in the acquired speech spectrum.
- The noisy speech and denoised speech obtained from the preset speech library are unprocessed speech data, and the unprocessed speech data may contain some invalid or redundant speech data.
- For example, speech whose duration does not reach the required length or whose quality does not meet the requirements is invalid or redundant speech data. In addition, the unprocessed speech data may contain some invalid or redundant speech periods, where a speech period is a part of the unprocessed speech data; the presence of these redundant or invalid speech periods adversely affects subsequent speech data processing, so they need to be removed.
- The original speech data can therefore be cleaned and filtered to improve the efficiency of subsequent speech data processing.
- The construction module 120 is used to construct a generative adversarial network, the generative adversarial network including at least one generator and one discriminator.
- In this embodiment, the constructed generative adversarial network includes one generator and one discriminator; the output of the generator is connected to the input of the discriminator, and the discrimination result of the discriminator is fed back to the generator.
- The generator can be composed of a two-layer convolutional network and a two-layer fully connected neural network, where the activation function of the convolutional network and of the first fully connected layer is the ReLU function, and the activation function of the second fully connected layer is the sigmoid function.
- The generator inputs the generated speech and the denoised speech into the discriminator to train the discriminator neural network: the discriminator judges the predicted speech produced by the generator as fake data and gives it a low score (close to 0), and judges the real denoised speech as real data and gives it a high score (close to 1), thereby learning the distributions of the denoised speech and of the speech data generated by the generator.
- The discriminator can be composed of an eight-layer convolutional network, a one-layer long short-term memory (LSTM) recurrent network, and a two-layer fully connected neural network, where the activation function of the convolutional network, the LSTM recurrent network, and the first fully connected layer is the ReLU function, and the activation function of the second fully connected layer is the sigmoid function.
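- As a concrete illustration only, a minimal PyTorch-style sketch of a generator and discriminator with the layer structure described above might look as follows; the channel widths, kernel sizes, spectrogram dimensions, and LSTM hidden size are assumptions, since the application does not specify them.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Two convolutional layers + two fully connected layers (ReLU, then sigmoid)."""
    def __init__(self, n_freq=129, n_frames=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * n_freq * n_frames, 512), nn.ReLU(),
            nn.Linear(512, n_freq * n_frames), nn.Sigmoid(),  # enhanced magnitude in [0, 1]
        )
        self.out_shape = (n_freq, n_frames)

    def forward(self, noisy):                      # noisy: (batch, 1, n_freq, n_frames)
        h = self.conv(noisy).flatten(1)
        return self.fc(h).view(-1, 1, *self.out_shape)

class Discriminator(nn.Module):
    """Eight convolutional layers + one LSTM layer + two fully connected layers;
    ReLU activations, sigmoid on the final layer (authenticity score in [0, 1])."""
    def __init__(self, n_freq=129, hidden=128):
        super().__init__()
        layers, ch = [], 2                         # channel 0: candidate speech, channel 1: noisy speech
        for out_ch in (16, 16, 32, 32, 64, 64, 128, 128):
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1), nn.ReLU()]
            ch = out_ch
        self.conv = nn.Sequential(*layers)
        self.lstm = nn.LSTM(input_size=128 * n_freq, hidden_size=hidden, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, speech, noisy):
        x = torch.cat([speech, noisy], dim=1)      # (batch, 2, n_freq, n_frames)
        h = self.conv(x)                           # (batch, 128, n_freq, n_frames)
        h = h.permute(0, 3, 1, 2).flatten(2)       # (batch, n_frames, 128 * n_freq)
        _, (h_n, _) = self.lstm(h)
        return self.fc(h_n[-1])                    # (batch, 1) authenticity score
```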
- The first training module 130 is configured to input the first data set into the discriminator, adjust the parameters of the discriminator with the objective of minimizing the loss function value of the discriminator, update the parameters of the discriminator when the loss function value of the discriminator is less than the first preset threshold to obtain the first discriminator, then input the noisy speech of the second data set into the generator, input the output speech and that noisy speech into the first discriminator, and update the parameters of the first discriminator using the back-propagation algorithm.
- At the beginning of iterative training, the speech of the first data set is first input into the discriminator.
- The output value of the discriminator is the authenticity score of the input noisy speech, and the loss function of the discriminator is obtained from this authenticity score.
- The parameters of the discriminator are then updated using the back-propagation algorithm according to the loss function of the discriminator, yielding the first discriminator.
- Next, the noisy speech of the second data set is input into the generator of the generative adversarial network, the speech output by the generator and that noisy speech are input into the first discriminator, and the parameters of the first discriminator are updated through the back-propagation algorithm based on the output of the first discriminator.
- In this embodiment, for any input sample X, the discriminator outputs a real number in [0, 1] that indicates the authenticity of the input X: the closer to 0, the lower the authenticity, and the closer to 1, the higher the authenticity.
- The generative adversarial network is optimized according to a target formula.
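- Based on the variable definitions that follow, this target formula corresponds to the standard generative adversarial objective:

$$\min_G \max_D V(D,G)=\mathbb{E}_{X\sim P_{data}(X)}\big[\log D(X)\big]+\mathbb{E}_{Z\sim P_z(Z)}\big[\log\big(1-D(G(Z))\big)\big]$$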
- where V represents the loss value, G represents the generator, D represents the discriminator, and log is the logarithmic function;
- X is the denoised speech data, and X ~ P_data(X) represents the distribution of the denoised speech X;
- Z represents the noisy speech, and Z ~ P_z(Z) represents the distribution of the noisy speech Z;
- D(X) represents the discriminator's authenticity score for the denoised speech X;
- G(Z) represents the generated speech output by the generator given the noisy speech input;
- D(G(Z)) represents the discriminator's authenticity score for the generated speech output by the generator; and
- E represents the expectation over sample X or sample Z.
- When optimizing the discriminator, it is necessary to maximize the sum of the expectations over the noisy speech Z and the denoised speech X; according to the above target formula, the loss function of the discriminator is obtained.
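- Written out from the definitions that follow, a consistent (conditional) form of the discriminator loss would be:

$$L_D=-\,\mathbb{E}_{X,X_c\sim P_{data}(X,X_c)}\big[\log D(X,X_c)\big]-\mathbb{E}_{Z\sim P_z(Z),\,X_c\sim P_{data}(X_c)}\big[\log\big(1-D(G(Z,X_c),X_c)\big)\big]$$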
- where D represents the discriminator, X represents the denoised speech data, and X_c represents the speech output by the generator given the noisy speech input;
- P_data represents the training samples, and X, X_c ~ P_data(X, X_c) represents the distribution of the training sample features X and X_c;
- D(X, X_c) represents the discriminator's authenticity score for X and X_c;
- Z ~ P_z(Z) represents the distribution of the noisy speech sample Z;
- X_c ~ P_data(X_c) represents the distribution of the generated speech X_c output by the generator;
- E represents the expectation over samples X and X_c, or over samples Z and X_c; and
- D(G(Z, X_c), X_c) represents the discriminator's authenticity score for X_c and the synthetic data G(Z, X_c) generated by the generator, where G(Z, X_c) indicates that the generator converts the sample Z and the sample X_c into synthetic data.
- The authenticity scores of training sample Z and training samples X and X_c are substituted into the loss function of the discriminator; by continuously minimizing the loss function value of the discriminator, the weights between nodes in different layers of the discriminator can be optimized, and when the loss function value of the discriminator is less than the first preset threshold, the parameters of the discriminator are updated.
- The second training module 140 is configured to input the noisy speech of the third data set into the generator, input the output speech and that noisy speech into the first discriminator with updated parameters, obtain the loss function of the generator according to the output of the first discriminator with updated parameters, and adjust the parameters of the generator with the objective of minimizing the loss function value of the generator.
- When the loss function value of the generator is less than the second preset threshold, the parameters of the generator are updated, and the generator with updated parameters is used as the speech enhancement model.
- In this embodiment, when optimizing the generator G, it is necessary to minimize the authenticity-score term for the generated samples; according to the above target formula, the loss function of the generator is obtained.
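- Written out from the definitions that follow, a consistent form of the generator loss would be:

$$L_G=\mathbb{E}_{Z\sim P_z(Z),\,X_c\sim P_{data}(X_c)}\big[\log\big(1-D(G(Z,X_c),X_c)\big)\big]$$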
- where G represents the generator, D represents the discriminator, Z represents the noisy speech, and Z ~ P_z(Z) represents the distribution of the noisy speech sample Z;
- E represents the expectation over samples X_c and Z;
- X_c represents the generated speech output by the generator given the noisy speech input, and X_c ~ P_data(X_c) represents the distribution of sample X_c;
- G(Z, X_c) represents that the generator converts sample Z and sample X_c into synthetic data; and
- D(G(Z, X_c), X_c) represents the discriminator's authenticity score for X_c and the synthetic data G(Z, X_c) generated by the generator.
- The authenticity scores of training sample Z and training sample X_c are substituted into the loss function of the generator; by continuously minimizing the loss function value of the generator, the weights between nodes in different layers of the generator can be optimized, and when the loss function value of the generator is less than the second preset threshold, the parameters of the generator are updated.
- In this embodiment, a total of 86 epochs are trained, the learning rate is 0.0002, and the batch size is 400.
- An epoch refers to the process of sending all the data through the network to complete one forward calculation and one back propagation. Because an epoch is too large for the computer to process at once, it is divided into several smaller batches: a batch is the portion of the data sent into the network for training at one time, and the batch size is the number of training samples in each batch.
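- For illustration only, a simplified training-loop sketch using the stated hyperparameters (86 epochs, learning rate 0.0002, batch size 400) is given below; it compresses the staged procedure into alternating discriminator and generator updates, and the optimizer choice, data loaders, and device handling are assumptions.

```python
import torch
import torch.nn as nn
import torch.optim as optim

EPOCHS, LR, BATCH_SIZE = 86, 2e-4, 400   # hyperparameters stated in the application

def train_gan(generator, discriminator, loader_d, loader_g, device="cpu"):
    """loader_d / loader_g yield (noisy, clean) spectrogram batches used for the
    discriminator and generator updates respectively (e.g. the second and third
    data sets described above)."""
    bce = nn.BCELoss()
    opt_d = optim.Adam(discriminator.parameters(), lr=LR)
    opt_g = optim.Adam(generator.parameters(), lr=LR)

    for epoch in range(EPOCHS):
        # First training step: update the (first) discriminator.
        for noisy, clean in loader_d:
            noisy, clean = noisy.to(device), clean.to(device)
            fake = generator(noisy).detach()           # generated speech, no gradient to G
            score_real = discriminator(clean, noisy)   # should approach 1
            score_fake = discriminator(fake, noisy)    # should approach 0
            loss_d = (bce(score_real, torch.ones_like(score_real)) +
                      bce(score_fake, torch.zeros_like(score_fake)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Second training step: update the generator with the discriminator fixed.
        for noisy, _ in loader_g:
            noisy = noisy.to(device)
            fake = generator(noisy)
            score_fake = discriminator(fake, noisy)
            loss_g = bce(score_fake, torch.ones_like(score_fake))  # push scores toward 1
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```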
- The feedback module 150 is configured to receive the speech data to be enhanced sent by the user, input the speech data to be enhanced into the speech enhancement model, and generate the enhanced speech data and feed it back to the user.
- In this embodiment, the speech to be enhanced sent by the user can be received through a microphone and converted into a spectrogram by the short-time Fourier transform, and the spectrogram is fed into the trained speech enhancement model to generate predicted denoised speech data; the denoised speech data is then converted into a speech analog signal through the inverse short-time Fourier transform, and the speech analog signal is fed back to the user and played out through a device such as a loudspeaker, so that the enhanced speech is obtained and fed back to the user.
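- As a sketch of this feedback pipeline (assuming the model operates on magnitude spectrograms and the noisy phase is reused for reconstruction, which the application does not specify), the 16 kHz sampling rate with a 16 ms frame length and 8 ms frame shift corresponds to n_fft=256 and hop_length=128:

```python
import numpy as np
import librosa
import soundfile as sf
import torch

def enhance(wav_path, generator, out_path="enhanced.wav",
            sr=16000, n_fft=256, hop_length=128):
    """STFT -> speech enhancement model -> inverse STFT -> output audio file."""
    audio, _ = librosa.load(wav_path, sr=sr)
    spec = librosa.stft(audio, n_fft=n_fft, hop_length=hop_length)
    mag, phase = np.abs(spec), np.angle(spec)

    # Run the magnitude spectrogram through the trained generator
    # (its expected input shape must match how it was trained).
    x = torch.from_numpy(mag).float().unsqueeze(0).unsqueeze(0)  # (1, 1, n_freq, n_frames)
    with torch.no_grad():
        enhanced_mag = generator(x).squeeze().numpy()

    # Reuse the noisy phase and convert back to a time-domain signal.
    enhanced = librosa.istft(enhanced_mag * np.exp(1j * phase), hop_length=hop_length)
    sf.write(out_path, enhanced, sr)
    return enhanced
```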
- In addition, this application also provides an artificial intelligence-based speech enhancement method.
- Referring to FIG. 3, it is a schematic flowchart of an embodiment of the artificial intelligence-based speech enhancement method of this application.
- When the processor 12 of the server 1 executes the artificial intelligence-based speech enhancement program 10 stored in the memory 11, the following steps of the artificial intelligence-based speech enhancement method are implemented:
- Step S10: Obtain a preset number of noisy speech samples and the denoised speech corresponding to each noisy speech sample as training samples, and divide the training samples into a first data set, a second data set, and a third data set.
- In this embodiment, a preset number of noisy speech data and the denoised speech data corresponding to each noisy speech data item can be obtained from a preset third-party speech library as training samples.
- In one embodiment, the denoised speech data and the noisy speech data are sampled at 16 kHz, the speech frame length is set to 16 ms, and the speech frame shift is set to 8 ms. It is understandable that the present application does not limit the frame length, the frame shift, or the acoustic features contained in the acquired speech spectrum.
- The noisy speech and denoised speech obtained from the preset speech library are unprocessed speech data, and the unprocessed speech data may contain some invalid or redundant speech data.
- For example, speech whose duration does not reach the required length or whose quality does not meet the requirements is invalid or redundant speech data. In addition, the unprocessed speech data may contain some invalid or redundant speech periods, where a speech period is a part of the unprocessed speech data; the presence of these redundant or invalid speech periods adversely affects subsequent speech data processing, so they need to be removed.
- The original speech data can therefore be cleaned and filtered to improve the efficiency of subsequent speech data processing.
- Step S20: Construct a generative adversarial network, which includes at least one generator and one discriminator.
- In this embodiment, the constructed generative adversarial network includes one generator and one discriminator; the output of the generator is connected to the input of the discriminator, and the discrimination result of the discriminator is fed back to the generator.
- The generator can be composed of a two-layer convolutional network and a two-layer fully connected neural network, where the activation function of the convolutional network and of the first fully connected layer is the ReLU function, and the activation function of the second fully connected layer is the sigmoid function.
- The generator inputs the generated speech and the denoised speech into the discriminator to train the discriminator neural network: the discriminator judges the predicted speech produced by the generator as fake data and gives it a low score (close to 0), and judges the real denoised speech as real data and gives it a high score (close to 1), thereby learning the distributions of the denoised speech and of the speech data generated by the generator.
- The discriminator can be composed of an eight-layer convolutional network, a one-layer long short-term memory (LSTM) recurrent network, and a two-layer fully connected neural network, where the activation function of the convolutional network, the LSTM recurrent network, and the first fully connected layer is the ReLU function, and the activation function of the second fully connected layer is the sigmoid function.
- Step S30: Input the first data set into the discriminator, adjust the parameters of the discriminator with the objective of minimizing the loss function value of the discriminator, update the parameters of the discriminator when the loss function value of the discriminator is less than the first preset threshold to obtain the first discriminator, then input the noisy speech of the second data set into the generator, input the output speech and that noisy speech into the first discriminator, and update the parameters of the first discriminator using the back-propagation algorithm.
- At the beginning of iterative training, the speech of the first data set is first input into the discriminator.
- The output value of the discriminator is the authenticity score of the input noisy speech, and the loss function of the discriminator is obtained from this authenticity score.
- The parameters of the discriminator are then updated using the back-propagation algorithm according to the loss function of the discriminator, yielding the first discriminator.
- Next, the noisy speech of the second data set is input into the generator of the generative adversarial network, the speech output by the generator and that noisy speech are input into the first discriminator, and the parameters of the first discriminator are updated through the back-propagation algorithm based on the output of the first discriminator.
- In this embodiment, for any input sample X, the discriminator outputs a real number in [0, 1] that indicates the authenticity of the input X: the closer to 0, the lower the authenticity, and the closer to 1, the higher the authenticity.
- The generative adversarial network is optimized according to the target formula given above.
- where V represents the loss value, G represents the generator, D represents the discriminator, and log is the logarithmic function;
- X is the denoised speech data, and X ~ P_data(X) represents the distribution of the denoised speech X;
- Z represents the noisy speech, and Z ~ P_z(Z) represents the distribution of the noisy speech Z;
- D(X) represents the discriminator's authenticity score for the denoised speech X;
- G(Z) represents the generated speech output by the generator given the noisy speech input;
- D(G(Z)) represents the discriminator's authenticity score for the generated speech output by the generator; and
- E represents the expectation over sample X or sample Z.
- When optimizing the discriminator, it is necessary to maximize the sum of the expectations over the noisy speech Z and the denoised speech X; according to the above target formula, the loss function of the discriminator is the one given above, where:
- D represents the discriminator, X represents the denoised speech data, and X_c represents the speech output by the generator given the noisy speech input;
- P_data represents the training samples, and X, X_c ~ P_data(X, X_c) represents the distribution of the training sample features X and X_c;
- D(X, X_c) represents the discriminator's authenticity score for X and X_c;
- Z ~ P_z(Z) represents the distribution of the noisy speech sample Z;
- X_c ~ P_data(X_c) represents the distribution of the generated speech X_c output by the generator;
- E represents the expectation over samples X and X_c, or over samples Z and X_c; and
- D(G(Z, X_c), X_c) represents the discriminator's authenticity score for X_c and the synthetic data G(Z, X_c) generated by the generator, where G(Z, X_c) indicates that the generator converts the sample Z and the sample X_c into synthetic data.
- The authenticity scores of training sample Z and training samples X and X_c are substituted into the loss function of the discriminator; by continuously minimizing the loss function value of the discriminator, the weights between nodes in different layers of the discriminator can be optimized, and when the loss function value of the discriminator is less than the first preset threshold, the parameters of the discriminator are updated.
- Step S40: Input the noisy speech of the third data set into the generator, input the output speech and that noisy speech into the first discriminator with updated parameters, obtain the loss function of the generator according to the output of the first discriminator with updated parameters, adjust the parameters of the generator with the objective of minimizing the loss function value of the generator, update the parameters of the generator when the loss function value of the generator is less than the second preset threshold, and use the generator with updated parameters as the speech enhancement model.
- In this embodiment, when optimizing the generator G, it is necessary to minimize the authenticity-score term for the generated samples; according to the above target formula, the loss function of the generator is the one given above, where:
- G represents the generator, D represents the discriminator, Z represents the noisy speech, and Z ~ P_z(Z) represents the distribution of the noisy speech sample Z;
- E represents the expectation over samples X_c and Z;
- X_c represents the generated speech output by the generator given the noisy speech input, and X_c ~ P_data(X_c) represents the distribution of sample X_c;
- G(Z, X_c) represents that the generator converts sample Z and sample X_c into synthetic data; and
- D(G(Z, X_c), X_c) represents the discriminator's authenticity score for X_c and the synthetic data G(Z, X_c) generated by the generator.
- The authenticity scores of training sample Z and training sample X_c are substituted into the loss function of the generator; by continuously minimizing the loss function value of the generator, the weights between nodes in different layers of the generator can be optimized, and when the loss function value of the generator is less than the second preset threshold, the parameters of the generator are updated.
- In this embodiment, a total of 86 epochs are trained, the learning rate is 0.0002, and the batch size is 400.
- An epoch refers to the process of sending all the data through the network to complete one forward calculation and one back propagation. Because an epoch is too large for the computer to process at once, it is divided into several smaller batches: a batch is the portion of the data sent into the network for training at one time, and the batch size is the number of training samples in each batch.
- Step S50: Receive the speech data to be enhanced sent by the user, input the speech data to be enhanced into the speech enhancement model, and generate the enhanced speech data and feed it back to the user.
- In this embodiment, the speech to be enhanced sent by the user can be received through a microphone and converted into a spectrogram by the short-time Fourier transform, and the spectrogram is fed into the trained speech enhancement model to generate predicted denoised speech data; the denoised speech data is then converted into a speech analog signal through the inverse short-time Fourier transform, and the speech analog signal is fed back to the user and played out through a device such as a loudspeaker, so that the enhanced speech is obtained and fed back to the user.
- In addition, an embodiment of the present application also proposes a computer-readable storage medium.
- The computer-readable storage medium may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like.
- the computer-readable storage medium includes an artificial intelligence-based speech enhancement program 10, and the artificial intelligence-based speech enhancement program 10 implements the following operations when executed by a processor:
- Obtaining step: obtaining a preset number of noisy speech samples and the denoised speech corresponding to each noisy speech sample as training samples, and dividing the training samples into a first data set, a second data set, and a third data set;
- Construction step: constructing a generative adversarial network, the generative adversarial network including at least one generator and one discriminator;
- First training step: inputting the first data set into the discriminator, adjusting the parameters of the discriminator with the objective of minimizing the loss function value of the discriminator, updating the parameters of the discriminator when the loss function value of the discriminator is less than a first preset threshold to obtain a first discriminator, then inputting the noisy speech of the second data set into the generator, inputting the output speech together with that noisy speech into the first discriminator, and updating the parameters of the first discriminator using the back-propagation algorithm;
- Second training step: inputting the noisy speech of the third data set into the generator, inputting the output speech together with that noisy speech into the first discriminator with updated parameters, obtaining the loss function of the generator according to the output of the first discriminator with updated parameters, adjusting the parameters of the generator with the objective of minimizing the loss function value of the generator, updating the parameters of the generator when the loss function value of the generator is less than a second preset threshold, and using the generator with updated parameters as the speech enhancement model; and
- Feedback step: receiving the speech data to be enhanced sent by the user, inputting the speech data to be enhanced into the speech enhancement model, and generating the enhanced speech data and feeding it back to the user.
- the specific implementation of the computer-readable storage medium of the present application is substantially the same as the specific implementation of the artificial intelligence-based speech enhancement method, and will not be repeated here.
Abstract
An artificial intelligence-based speech enhancement method, server, and storage medium. The method first obtains speech data as training samples and constructs a generative adversarial network; noisy speech and its corresponding denoised speech are input into the discriminator, and the discriminator parameters are updated via its loss function; the noisy speech is then input into the generator, the output speech and the noisy speech are input into the discriminator together, and the loss is computed to update the discriminator parameters; with the discriminator parameters fixed, the noisy speech is input into the generator, the output speech and the noisy speech are input into the discriminator, and the generator parameters are updated via the generator's loss function; the generator with updated parameters is used as the speech enhancement model, the speech data to be enhanced is input into the speech enhancement model, and enhanced speech data is generated. The method can improve the performance of a GAN-based speech enhancement model and thereby improve the effect of speech enhancement.
Description
Under the Paris Convention, this application claims priority to the Chinese patent application No. CN201910969019.X, filed on October 12, 2019 and entitled "基于人工智能的语音增强方法、服务器及存储介质" (Artificial intelligence-based speech enhancement method, server and storage medium), the entire content of which is incorporated into this application by reference.
本申请涉及人工智能技术领域,尤其涉及一种基于人工智能的语音增强方法、服务器及存储介质。
语音增强的目的主要是从带噪语音中去除复杂的背景噪声,并保证在语音信号不失真的条件下提升语音可懂度。传统的语音增强算法大多是基于噪声估计,且处理的噪声类型单一,并不能很好的处理复杂背景下的语音去噪问题。随着神经网络的迅速发展,越来越多的神经网络模型也被应用到语音增强算法中。
然而,由于语音噪声的分布通常复杂,现有的通过基于深度学习的语音增强方法,模型收敛不稳定,导致语音增强效果差。
发明内容
鉴于以上内容,本申请提供一种基于人工智能的语音增强方法、服务器及存储介质,其目的在于本提升语音增强的效果。
为实现上述目的,本申请提供一种基于人工智能的语音增强方法,该方法包括:
获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;
构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器;
第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的 损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;
第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及
反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
为实现上述目的,本申请还提供一种服务器,该服务器包括:存储器及处理器,其特征在于,所述存储器上存储基于人工智能的语音增强程序,所述基于人工智能的语音增强程序被所述处理器执行,实现如下步骤:
获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;
构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器;
第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;
第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及
反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质中包括基于人工智能的语音增强程序,所述基于人工智能的语音增强程序被处理器执行时,可实现如上所述基于人工智能的语音增强方法中的任意步骤。
相比现有技术的基于人工智能的语音增强方法,本申请提出的基于人工智能的语音增强方法、服务器及存储介质,通过获取带噪语音及其对应的去噪语音作为训练样本,构建包括鉴别器和生成器的生成式对抗网络,并基于带噪语音及生成器输出的语音多次调整、更新鉴别器的参数得到第一鉴别器,再基于第一鉴别器得到生成器的损失函数,最后通过最小化生成器的损失函数值调整生成器的参数得到语音增强模型,应用于语音数据增强。本申请提供的基于人工智能的语音增强方法应用的上述生成式对抗网络,没有RNN中类似的递归操作,相较于神经网络时效性更高、数据处理速度更快,从而实现快速增强语音。此外,上述生成式对抗网络的生成器和鉴别器处理的是原始音频,不需要手动提取特征,还可以从不同说话者和不同类型噪声中学习语音特征并将其结合在一起形成共享参数,使得系统简单且泛化能力较强。
图1为本申请服务器较佳实施例的示意图;
图2为图1中基于人工智能的语音增强程序较佳实施例的模块示意图;
图3为本申请基于人工智能的语音增强方法较佳实施例的流程图;
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
参照图1所示,为本申请服务器1较佳实施例的示意图。
该服务器1包括但不限于:存储器11、处理器12、显示器13及网络接口14。所述服务器1通过网络接口14连接网络,获取原始数据。其中,所述网络可以是企业内部网(Intranet)、互联网(Internet)、全球移动通讯系统(Global System of Mobile communication,GSM)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、4G网络、5G网络、蓝牙(Bluetooth)、Wi-Fi、通话网络等无线或有线网络。
其中,存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,所述存储器11可以是所述服务器1的内部存储单元,例如该服务器1的硬盘或内存。在另一些实施例中,所述存储器11也可以是所述服务器1的外部存储设备,例如该服务器1配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。当然,所述存储器11还可以既包括所述服务器1的内部存储单元也包括其外部存储设备。本实施例中,存储器11通常用于存储安装于所述服务器1的操作系统和各类应用软件,例如基于人工智能的语音增强程序10的程序代码等。此外,存储器11还可以用于暂时地存储已经输出或者将要输出的各类数据。
处理器12在一些实施例中可以是中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器12通常用于控制所述服务器1的总体操作,例如执行数据交互或者通信相关的控制和处理等。本实施例中,所述处理器12用于运行所述存储器11中存储的程序代码或者处理数据,例如运行基于人工智能的语音增强程序10的程序代码等。
显示器13可以称为显示屏或显示单元。在一些实施例中显示器13可以是LED显示器、液晶显示器、触控式液晶显示器以及有机发光二极管(Organic Light-Emitting Diode,OLED)触摸器等。显示器13用于显示在服务器1中处理的信息以及用于显示可视化的工作界面,例如显示数据统计的结果。
网络接口14可选地可以包括标准的有线接口、无线接口(如WI-FI接口),该网络接口14通常用于在所述服务器1与其它电子设备之间建立通信连接。
图2仅示出了具有组件11-14以及基于人工智能的语音增强程序10的服务器1,但是应理解的是,并不要求实施所有示出的组件,可以替代的实施更多或者更少的组件。
可选地,所述服务器1还可以包括用户接口,用户接口可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及有机发光二极管(Organic Light-Emitting Diode,OLED)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在服务器1中处理的信息以及用于显示可视化的用户界面。
该服务器1还可以包括射频(Radio Frequency,RF)电路、传感器和音频电路等等,在此不再赘述。
在上述实施例中,处理器12执行存储器11中存储的基于人工智能的语音增强程序10时可以实现如下步骤:
获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;
构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器;
第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;
第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及
反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
关于上述步骤的详细介绍,请参照下述图2关于基于人工智能的语音增强程序10实施例的程序模块图以及图3关于基于人工智能的语音增强方法实施例的流程图的说明。
在其他实施例中,所述基于人工智能的语音增强程序10可以被分割为多个模块,该多个模块被存储于存储器12中,并由处理器13执行,以完成本申请。本申请所称的模块是指能够完成特定功能的一系列计算机程序指令段。
参照图2所示,为图2中基于人工智能的语音增强程序10一实施例的程序模块图。在本实施例中,所述基于人工智能的语音增强程序10可以被分割为:获取模块110、构建模块120、第一训练模块130、第二训练模块140及反馈模块150。
获取模块110,用于获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集。
在本实施中,可以从预设第三方语音库获取预设数量的带噪语音数据以及与各带噪语音数据对应的去噪语音数据作为训练样本。所述去噪语音数据与带噪语音数据使用16KHz进行采样,语音帧长设置为16ms,语音帧移设置为8ms。可以理解的是,本申请对所获取的语音频谱的帧长、帧移以及语音频谱中所包含的声学特征不进行限定。
从预设语音库获取到的带噪语音和去噪语音是未经处理的语音数据,未经处理的语音数据可能会包含一些无效、冗余的语音数据。例如,语音时长达不到要求,语音质量不符合要求等为无效、冗余的语音数据。或者,在未经处理的语音数据中会存在部分无效或者冗余的语音时段,这部分冗余或无效的语音时段的存在会对后续的语音数据处理过程带来不好的影响,因此需去除这部分冗余或无效的语音时段,其中,语音时段是未经处理的语音数据的一部分。可以对原始语音数据作除杂和滤波处理,以提高后续语音数据的处理效率。
构建模块120,用于构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器。
在本实施例中,构建的生成式对抗网络包括1个生成器和1个鉴别器,生成器的输出与鉴别器的输入相连,鉴别器的判别结果再反馈至生成器。
生成器可以由一个两层的卷积网络和一个两层的全连接神经网络组成,卷积网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接网络的激活函数为sigmoid函数,生成器将生成的语音和去噪语音输入鉴别器,训练鉴别器神经网络,鉴别器对生成器产生的预测语音判定为虚假数据并给予低分(接近0),对真实的去噪语音判定为真实数据并给予高分(接近1),以此学习去噪语音和生成器生成的语音数据的分布。鉴别器可以由一个八层的卷积网络、一个一层的长短期记忆循环网络和一个二层的全连接神经网络组成,卷积网络、长短期记忆循环网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接网络的激活函数为sigmoid函数。
第一训练模块130,用于将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数。
在迭代训练开始时,首先将第一数据集的语音输入鉴别器,鉴别器的输出值为输入的带噪语音的真实度评分,根据该带噪语音的真实度评分得到鉴别器的损失函数,根据鉴别器的损失函数利用反向传播算法更新鉴别器的参数,得到第一鉴别器。再将第二数据集的带噪语音输入生成对抗网络中的生成器,将生成器输出的语音和该带噪语音输入第一鉴别器,第一鉴别器的输出的结果通过反向传播算法,更新第一鉴别器的参数。在本实施例中,对任意的输入样本带噪语音X,鉴别器输出[0,1]的实数,用来表示输入X的真实度,越接近0表示真实度越低,越接近1表示真实度越高。
根据目标公式对生成式对抗网络进行优化,所述目标公式为:
其中，V表示损失值，G表示生成器，D表示鉴别器，log为对数函数，X为去噪语音数据，X~P_data(X)表示关于去噪语音X的分布，Z表示带噪语音，Z~P_z(z)表示关于带噪语音Z的分布，D(x)表示鉴别器对去噪语音X的真实度评分，G(z)表示带噪语音输入生成器后输出的生成语音，D(G(z))表示鉴别器对由生成器输出的生成语音的真实度评分，E表示求样本X或样本Z输出的均值。
在对鉴别器进行优化时，需要最大化带噪语音Z与去噪语音X的均值之和，根据上述目标公式可得知鉴别器的损失函数为：
其中，D表示鉴别器，X表示去噪语音数据，X_c表示带噪语音输入生成器后输出的语音，P_data表示训练样本，X,X_c~P_data(X,X_c)表示关于训练样本特征X和X_c的分布，D(X,X_c)表示利用鉴别器对X和X_c的真实度评分，Z~P_z(z)表示带噪语音样本Z的分布，X_c~P_data(X_c)表示关于生成器输出的生成语音X_c的分布，E表示求样本X、X_c或样本Z、X_c输出的均值，D(G(Z,X_c),X_c)表示鉴别器对由生成器生成的合成数据G(Z,X_c)和X_c的真实度评分，G(Z,X_c)表示该生成器将样本Z和样本X_c转换为合成数据。
将训练样本Z和训练样本X、X_c的真实度评分代入鉴别器的损失函数中，通过不断最小化鉴别器的损失函数值，可以优化鉴别器不同层节点间的权重，当鉴别器的损失函数值小于第一预设阈值时，更新鉴别器的参数。
第二训练模块140,用于将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型。
在本实施例中,在对生成器G进行优化时,需要最小化生成样本的真实度评分值,根据上述目标公式可得知生成器的损失函数:
其中，G表示生成器，D表示鉴别器，Z表示带噪语音，Z~P_z(Z)表示关于带噪语音样本Z的分布，E表示求样本X_c、Z输出的均值，X_c表示带噪语音输入生成器后输出的生成语音，X_c~P_data(X_c)表示样本X_c的分布，G(Z,X_c)表示该生成器将样本Z和样本X_c转换为合成数据，D(G(Z,X_c),X_c)表示该鉴别器对由生成器生成的合成数据G(Z,X_c)和X_c的真实度评分。
将训练样本Z和训练样本X_c的真实度评分代入生成器的损失函数，通过不断最小化生成器的损失函数值，可以优化生成器不同层节点间的权重，当生成器的损失函数值小于第二预设阈值时，更新生成器的参数。
在本实施例中,总共训练86个epoch,学习率为0.0002,Batchsize为400。一个epoch指所有的数据送入网络中完成一次前向计算及反向传播的过程。由于一个epoch太大,计算机难以负荷,因此将它分成几个较小的batches,batch就是每次送入网络中训练的一部分数据,而Batch Size就是每个batch中训练样本的数量。
反馈模块150,用于接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
在本实施例中,可以通过传声器接收用户发送的待增强语音,经短时傅里叶变换转化成语谱图送入已经训练好的语音增强模型中,生成预测去噪语音数据,再通过反短时傅里叶变换转化成语音模拟信号,将该语音模拟信号反馈给用户,经扬声器等装置播放出来,即得到增强的语音,将增强后的语音反馈给所述用户。
此外,本申请还提供一种基于人工智能的语音增强方法。参照图3所示,为本申请基于人工智能的语音增强方法的实施例的方法流程示意图。服务器1的处理器12执行存储器11中存储的基于人工智能的语音增强程序10时实现基于人工智能的语音增强方法的如下步骤:。
步骤S10:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集。
在本实施中,可以从预设第三方语音库获取预设数量的带噪语音数据以及与各带噪语音数据对应的去噪语音数据作为训练样本。在一个实施例中,所述去噪语音数据与带噪语音数据使用16KHz进行采样,语音帧长设置为16ms,语音帧移设置为8ms。可以理解的是,本申请对所获取的语音频谱的帧长、帧移以及语音频谱中所包含的声学特征不进行限定。
从预设语音库获取到的带噪语音和去噪语音是未经处理的语音数据,未经处理的语音数据可能会包含一些无效、冗余的语音数据。例如,语音时长达不到要求,语音质量不符合要求等为无效、冗余的语音数据。或者,在未经处理的语音数据中会存在部分无效或者冗余的语音时段,这部分冗余或无 效的语音时段的存在会对后续的语音数据处理过程带来不好的影响,因此需去除这部分冗余或无效的语音时段,其中,语音时段是未经处理的语音数据的一部分。可以对原始语音数据作除杂和滤波处理,以提高后续语音数据的处理效率。
步骤S20:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器。
在本实施例中,构建的生成式对抗网络包括1个生成器和1个鉴别器,生成器的输出与鉴别器的输入相连,鉴别器的判别结果再反馈至生成器。
生成器可以由一个两层的卷积网络和一个两层的全连接神经网络组成,卷积网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接网络的激活函数为sigmoid函数,生成器将生成的语音和去噪语音输入鉴别器,训练鉴别器神经网络,鉴别器对生成器产生的预测语音判定为虚假数据并给予低分(接近0),对真实的去噪语音判定为真实数据并给予高分(接近1),以此学习去噪语音和生成器生成的语音数据的分布。鉴别器可以由一个八层的卷积网络、一个一层的长短期记忆循环网络和一个二层的全连接神经网络组成,卷积网络、长短期记忆循环网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接网络的激活函数为sigmoid函数。
步骤S30:将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数。
在迭代训练开始时,首先将第一数据集的语音输入鉴别器,鉴别器的输出值为输入的带噪语音的真实度评分,根据该带噪语音的真实度评分得到鉴别器的损失函数,根据鉴别器的损失函数利用反向传播算法更新鉴别器的参数,得到第一鉴别器。再将第二数据集的带噪语音输入生成对抗网络中的生成器,将生成器输出的语音和该带噪语音输入第一鉴别器,第一鉴别器的输出的结果通过反向传播算法,更新第一鉴别器的参数。在本实施例中,对任意的输入样本带噪语音X,鉴别器输出[0,1]的实数,用来表示输入X的真实度,越接近0表示真实度越低,越接近1表示真实度越高。
根据目标公式对生成式对抗网络进行优化,所述目标公式为:
其中，V表示损失值，G表示生成器，D表示鉴别器，log为对数函数，X为去噪语音数据，X~P_data(X)表示关于去噪语音X的分布，Z表示带噪语音，Z~P_z(z)表示关于带噪语音Z的分布，D(x)表示鉴别器对去噪语音X的真实度评分，G(z)表示带噪语音输入生成器后输出的生成语音，D(G(z))表示鉴别器对由生成器输出的生成语音的真实度评分，E表示求样本X或样本Z输出的均值。
在对鉴别器进行优化时，需要最大化带噪语音Z与去噪语音X的均值之和，根据上述目标公式可得知鉴别器的损失函数为：
其中，D表示鉴别器，X表示去噪语音数据，X_c表示带噪语音输入生成器后输出的语音，P_data表示训练样本，X,X_c~P_data(X,X_c)表示关于训练样本特征X和X_c的分布，D(X,X_c)表示利用鉴别器对X和X_c的真实度评分，Z~P_z(z)表示带噪语音样本Z的分布，X_c~P_data(X_c)表示关于生成器输出的生成语音X_c的分布，E表示求样本X、X_c或样本Z、X_c输出的均值，D(G(Z,X_c),X_c)表示鉴别器对由生成器生成的合成数据G(Z,X_c)和X_c的真实度评分，G(Z,X_c)表示该生成器将样本Z和样本X_c转换为合成数据。
将训练样本Z和训练样本X、X_c的真实度评分代入鉴别器的损失函数中，通过不断最小化鉴别器的损失函数值，可以优化鉴别器不同层节点间的权重，当鉴别器的损失函数值小于第一预设阈值时，更新鉴别器的参数。
步骤S40:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型。
在本实施例中,在对生成器G进行优化时,需要最小化生成样本的真实度评分值,根据上述目标公式可得知生成器的损失函数:
其中，G表示生成器，D表示鉴别器，Z表示带噪语音，Z~P_z(Z)表示关于带噪语音样本Z的分布，E表示求样本X_c、Z输出的均值，X_c表示带噪语音输入生成器后输出的生成语音，X_c~P_data(X_c)表示样本X_c的分布，G(Z,X_c)表示该生成器将样本Z和样本X_c转换为合成数据，D(G(Z,X_c),X_c)表示该鉴别器对由生成器生成的合成数据G(Z,X_c)和X_c的真实度评分。
将训练样本Z和训练样本X_c的真实度评分代入生成器的损失函数，通过不断最小化生成器的损失函数值，可以优化生成器不同层节点间的权重，当生成器的损失函数值小于第二预设阈值时，更新生成器的参数。
在本实施例中,总共训练86个epoch,学习率为0.0002,Batchsize为400。一个epoch指所有的数据送入网络中完成一次前向计算及反向传播的过程。由于一个epoch太大,计算机难以负荷,因此将它分成几个较小的batches,batch就是每次送入网络中训练的一部分数据,而Batch Size就是每个batch中训练样本的数量。
步骤S50:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
在本实施例中,可以通过传声器接收用户发送的待增强语音,经短时傅里叶变换转化成语谱图送入已经训练好的语音增强模型中,生成预测去噪语音数据,再通过反短时傅里叶变换转化成语音模拟信号,将该语音模拟信号反馈给用户,经扬声器等装置播放出来,即得到增强的语音,将增强后的语音反馈给所述用户。
此外,本申请实施例还提出一种计算机可读存储介质,该计算机可读存储介质可以是硬盘、多媒体卡、SD卡、闪存卡、SMC、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、USB存储器等等中的任意一种或者几种的任意组合。所述计算机可读存储介质中包括基于人工智能的语音增强程序10,所述基于人工智能的语音增强程序10被处理器执行时实现如下操作:
获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;
构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器;
第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的 损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;
第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及
反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
本申请之计算机可读存储介质的具体实施方式与上述基于人工智能的语音增强方法的具体实施方式大致相同,在此不再赘述。
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质中,包括若干指令用以使得一台终端设备执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。
Claims (20)
- 一种基于人工智能的语音增强方法,应用于服务器,其特征在于,所述方法包括:获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器;第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
- 如权利要求1所述的基于人工智能的语音增强方法,其特征在于,所述生成器由一个两层的卷积网络及一个两层的全连接神经网络组成,所述卷积网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接神经网络的激活函数为sigmoid函数。
- 如权利要求1所述的基于人工智能的语音增强方法,其特征在于,所述鉴别器由一个八层的卷积网络、一个一层的长短期记忆循环网络及一个二层的全连接神经网络组成,所述卷积网络、长短期记忆循环网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接神经网络的激活函数为sigmoid函数。
- 如权利要求1所述的基于人工智能的语音增强方法,其特征在于,所述带噪语音及与各带噪语音对应的去噪语音使用16KHz进行采样,语音帧长设置为16ms,语音帧移设置为8ms。
- 如权利要求1所述的基于人工智能的语音增强方法,其特征在于,所述反馈步骤包括:接收用户发送的待增强语音,待增强语音经短时傅里叶变换转化成语谱图输入所述语音增强模型,生成对应的去噪语音数据,将该去噪语音数据通过反短时傅里叶变换转化成语音模拟信号,将该语音模拟信号反馈至用户。
- 一种服务器,该服务器包括存储器及处理器,其特征在于,所述存储器上存储基于人工智能的语音增强程序,所述基于人工智能的语音增强程序被所述处理器执行,实现如下步骤:获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生 成器和一个鉴别器;第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
- 如权利要求8所述的服务器,其特征在于,所述生成器由一个两层的卷积网络及一个两层的全连接神经网络组成,所述卷积网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接神经网络的激活函数为sigmoid函数。
- 如权利要求8所述的服务器,其特征在于,所述鉴别器由一个八层的卷积网络、一个一层的长短期记忆循环网络及一个二层的全连接神经网络组成,所述卷积网络、长短期记忆循环网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接神经网络的激活函数为sigmoid函数。
- 如权利要求8所述的服务器,其特征在于,所述带噪语音及与各带噪语音对应的去噪语音使用16KHz进行采样,语音帧长设置为16ms,语音帧移设置为8ms。
- 如权利要求8所述的服务器,其特征在于,所述反馈步骤包括:接收用户发送的待增强语音,待增强语音经短时傅里叶变换转化成语谱图输入所述语音增强模型,生成对应的去噪语音数据,将该去噪语音数据通过反短时傅里叶变换转化成语音模拟信号,将该语音模拟信号反馈至用户。
- 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中包括基于人工智能的语音增强程序,所述基于人工智能的语音增强程序被处理器执行时,实现如下步骤:获取步骤:获取预设数量的带噪语音及与各带噪语音对应的去噪语音,作为训练样本,将所述训练样本分为第一数据集、第二数据集及第三数据集;构建步骤:构建生成式对抗网络,所述生成式对抗网络包括至少一个生成器和一个鉴别器;第一训练步骤:将所述第一数据集输入所述鉴别器,以最小化鉴别器的损失函数值为目标调整鉴别器的参数,当鉴别器的损失函数值小于第一预设阈值时更新所述鉴别器的参数,得到第一鉴别器,再将第二数据集的带噪语音输入所述生成器,将输出的语音和该带噪语音输入所述第一鉴别器,利用反向传播算法更新第一鉴别器的参数;第二训练步骤:将所述第三数据集的带噪语音输入所述生成器,将输出的语音及该带噪语音输入更新参数后的第一鉴别器,根据所述更新参数后的第一鉴别器的输出结果得到生成器的损失函数,以最小化生成器的损失函数 值为目标调整生成器的参数,当生成器的损失函数值小于第二预设阈值时,更新所述生成器的参数,将更新参数后的生成器作为语音增强模型;及反馈步骤:接收用户发送的待增强的语音数据,将待增强语音数据输入所述语音增强模型,生成增强后的语音数据并反馈至所述用户。
- 如权利要求15所述的计算机可读存储介质,其特征在于,所述生成器由一个两层的卷积网络及一个两层的全连接神经网络组成,所述卷积网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接神经网络的激活函数为sigmoid函数。
- 如权利要求15所述的计算机可读存储介质,其特征在于,所述鉴别器由一个八层的卷积网络、一个一层的长短期记忆循环网络及一个二层的全连接神经网络组成,所述卷积网络、长短期记忆循环网络及第一层全连接神经网络的激活函数为Relu函数,第二层全连接神经网络的激活函数为sigmoid函数。
- 如权利要求15所述的计算机可读存储介质,其特征在于,所述带噪语音及与各带噪语音对应的去噪语音使用16KHz进行采样,语音帧长设置为16ms,语音帧移设置为8ms。
- 如权利要求15所述的计算机可读存储介质,其特征在于,所述反馈步骤包括:接收用户发送的待增强语音,待增强语音经短时傅里叶变换转化成语谱图输入所述语音增强模型,生成对应的去噪语音数据,将该去噪语音数据通过反短时傅里叶变换转化成语音模拟信号,将该语音模拟信号反馈至用户。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910969019.XA CN110853663B (zh) | 2019-10-12 | 2019-10-12 | 基于人工智能的语音增强方法、服务器及存储介质 |
CN201910969019.X | 2019-10-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021068338A1 true WO2021068338A1 (zh) | 2021-04-15 |
Family
ID=69598020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/118004 WO2021068338A1 (zh) | 2019-10-12 | 2019-11-13 | 基于人工智能的语音增强方法、服务器及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110853663B (zh) |
WO (1) | WO2021068338A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111785288B (zh) * | 2020-06-30 | 2022-03-15 | 北京嘀嘀无限科技发展有限公司 | 语音增强方法、装置、设备及存储介质 |
CN112259068B (zh) * | 2020-10-21 | 2023-04-11 | 上海协格空调工程有限公司 | 一种主动降噪空调系统及其降噪控制方法 |
CN112786003A (zh) * | 2020-12-29 | 2021-05-11 | 平安科技(深圳)有限公司 | 语音合成模型训练方法、装置、终端设备及存储介质 |
CN112802491B (zh) * | 2021-02-07 | 2022-06-14 | 武汉大学 | 一种基于时频域生成对抗网络的语音增强方法 |
CN115662441B (zh) * | 2022-12-29 | 2023-03-28 | 北京远鉴信息技术有限公司 | 一种基于自监督学习的语音鉴伪方法、装置及存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108805188B (zh) * | 2018-05-29 | 2020-08-21 | 徐州工程学院 | 一种基于特征重标定生成对抗网络的图像分类方法 |
CN108922518B (zh) * | 2018-07-18 | 2020-10-23 | 苏州思必驰信息科技有限公司 | 语音数据扩增方法和系统 |
CN109119090A (zh) * | 2018-10-30 | 2019-01-01 | Oppo广东移动通信有限公司 | 语音处理方法、装置、存储介质及电子设备 |
CN109524020B (zh) * | 2018-11-20 | 2023-07-04 | 上海海事大学 | 一种语音增强处理方法 |
2019
- 2019-10-12: CN application CN201910969019.XA filed; granted as CN110853663B (active)
- 2019-11-13: WO application PCT/CN2019/118004 filed; published as WO2021068338A1 (application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190130903A1 (en) * | 2017-10-27 | 2019-05-02 | Baidu Usa Llc | Systems and methods for robust speech recognition using generative adversarial networks |
CN109147810A (zh) * | 2018-09-30 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | 建立语音增强网络的方法、装置、设备和计算机存储介质 |
CN109326302A (zh) * | 2018-11-14 | 2019-02-12 | 桂林电子科技大学 | 一种基于声纹比对和生成对抗网络的语音增强方法 |
CN110136731A (zh) * | 2019-05-13 | 2019-08-16 | 天津大学 | 空洞因果卷积生成对抗网络端到端骨导语音盲增强方法 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114495958A (zh) * | 2022-04-14 | 2022-05-13 | 齐鲁工业大学 | 一种基于时间建模生成对抗网络的语音增强系统 |
CN114495958B (zh) * | 2022-04-14 | 2022-07-05 | 齐鲁工业大学 | 一种基于时间建模生成对抗网络的语音增强系统 |
CN114842863A (zh) * | 2022-04-19 | 2022-08-02 | 电子科技大学 | 一种基于多分支-动态合并网络的信号增强方法 |
CN114842863B (zh) * | 2022-04-19 | 2023-06-02 | 电子科技大学 | 一种基于多分支-动态合并网络的信号增强方法 |
CN117351940A (zh) * | 2023-12-05 | 2024-01-05 | 中国科学院自动化研究所 | 基于语音大模型的合成语音检测方法及装置 |
CN117351940B (zh) * | 2023-12-05 | 2024-03-01 | 中国科学院自动化研究所 | 基于语音大模型的合成语音检测方法及装置 |
CN117877517A (zh) * | 2024-03-08 | 2024-04-12 | 深圳波洛斯科技有限公司 | 基于对抗神经网络的环境音生成方法、装置、设备及介质 |
CN117877517B (zh) * | 2024-03-08 | 2024-05-24 | 深圳波洛斯科技有限公司 | 基于对抗神经网络的环境音生成方法、装置、设备及介质 |
CN118366479A (zh) * | 2024-06-19 | 2024-07-19 | 中国科学院自动化研究所 | 一种基于持续强化学习的语音攻防博弈自反馈方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110853663A (zh) | 2020-02-28 |
CN110853663B (zh) | 2023-04-28 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19948263; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: PCT application non-entry in European phase | Ref document number: 19948263; Country of ref document: EP; Kind code of ref document: A1