US20210343305A1 - Using a predictive model to automatically enhance audio having various audio quality issues - Google Patents

Using a predictive model to automatically enhance audio having various audio quality issues

Info

Publication number
US20210343305A1
Authority
US
United States
Prior art keywords
audio
audios
source
target
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/863,591
Other versions
US11514925B2 (en)
Inventor
Zeyu Jin
Jiaqi Su
Adam Finkelstein
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Adobe Inc
Princeton University
Original Assignee
Adobe Inc
Princeton University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Adobe Inc, Princeton University filed Critical Adobe Inc
Priority to US16/863,591
Assigned to THE TRUSTEES OF PRINCETON UNIVERSITY (assignment of assignors interest). Assignors: FINKELSTEIN, ADAM
Assigned to ADOBE INC. (assignment of assignors interest). Assignors: JIN, ZEYU; SU, JIAQI
Publication of US20210343305A1
Application granted
Publication of US11514925B2
Legal status: Active, adjusted expiration

Classifications

    • G10L21/0205
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0364Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0454
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L2021/02082Noise filtering the noise being echo, reverberation of the speech
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering

Definitions

  • This disclosure generally relates to audio enhancement and, more specifically, to using a predictive model to automatically enhance audio having various audio quality issues.
  • Deep learning is a machine learning technique whereby a neural network is trained to map a waveform representing an audio recording to an improved version of the audio recording in waveform. In existing approaches, however, the neural network is typically trained only to reduce noise, or only to reduce reverberation.
  • a neural network learns to map a set of source audio to a corresponding set of target audio, where each source audio has a quality issue and the corresponding target audio is a version of the source audio without the quality issue.
  • a loss function is used to represent a comparison between predicted audio output by the neural network and target audio that is the desired output based on a source audio, and output from the loss function is used to adjust the neural network.
  • the neural network is able to generate a predicted audio from a provided source audio, where the predicted audio is the neural network's prediction of what target audio for the source audio would look like. For example, if the neural network was trained on noisy source audio along with target audio lacking noise, then the neural network is able to reduce noise in the source audio by generating predicted audio.
  • one or more processing devices perform operations to enhance audio recordings.
  • the operations include training a prediction model to map source audios (e.g., audio recordings) to target audios (e.g., higher-quality versions of the audio recordings), and the operations further include utilizing the prediction model, after training, to convert a new source audio to a new predicted audio, which is a higher-quality version of the new source audio.
  • a training subsystem trains the prediction model as part of a generative adversarial network that includes both the prediction model and a discriminator to be jointly trained.
  • the training subsystem obtains training data having a plurality of tuples including the source audios and the target audios, each tuple including a source audio in waveform representation and a corresponding target audio in waveform representation.
  • the target audios are obtained from an existing dataset of recordings of various speakers.
  • the training subsystem generates a set of source audios corresponding to each target audio by convolving the target audio with various room impulse responses and adding noise to the result of each such convolution, resulting in the source audios being lower-quality versions of the target audio.
  • During training, the prediction model generates predicted audios based on the source audios in the training data.
  • the training subsystem applies a loss function to the predicted audios and the target audios, where that loss function incorporates a combination of a spectrogram loss and an adversarial loss.
  • the training subsystem updates the prediction model to optimize the loss function.
  • the operations performed by the one or more processors include receiving a request to enhance a new source audio.
  • the new source audio is an audio recording that was recorded on a smartphone or other consumer device outside of a professional recording environment.
  • a user seeks to enhance the audio recording to result in a studio-quality version of the audio recording.
  • the operations further include, responsive to the request to enhance the new source audio, inputting the new source audio into the prediction model that was previously trained as described above. Based on the new source audio, the prediction model generates a new predicted audio as an enhanced version of the new source audio.
  • FIG. 1 illustrates an interface of an enhancement system, according to certain embodiments described herein.
  • FIG. 2 is a diagram of an example architecture of the enhancement system, according to certain embodiments described herein.
  • FIG. 3 is a block diagram of a training subsystem of the enhancement system, according to certain embodiments described herein.
  • FIG. 4 is a flow diagram of a process of training a prediction model of the enhancement system, according to certain embodiments described herein.
  • FIG. 5 shows an example architecture of the prediction model as trained in a generative adversarial network, according to certain embodiments described herein.
  • FIG. 6 is a flow diagram of a process of utilizing the prediction model to enhance audio after training, according to certain embodiments described herein.
  • FIG. 7 depicts an example of a computing system that performs certain operations described herein, according to certain embodiments.
  • Existing systems to improve audio recordings come with significant drawbacks. For instance, existing systems tend to reduce the problem to a single quality issue, such as only noise or only reverberation, and therefore address only that quality issue when improving the audio recording.
  • the existence of multiple quality issues in audio recordings creates more complex artifacts that become difficult to identify and address by a neural network or otherwise.
  • addressing each quality issue independently fails to account for the cases where multiple quality issues exist and impact the audio recording in potentially unexpected ways due to the combination.
  • Another existing technique treats the problem of improving audio as a source separation problem.
  • Systems using this technique seek to separate foreground speech from background audio, such as music and environmental noise, so as to isolate the foreground speech and provide a result without the background.
  • An example of such an existing technique separates speech from background noise by using a machine learning (ML) auto-encoder structure.
  • an existing technique uses a loss function that incorporates a combination of a least absolute deviations loss function, typically referred to as an “L1” loss function, and a least squares error, typically referred to as an “L2” loss function.
  • That existing loss function allows training only with simulated data and not with real data; real data could include phase shifts, clock drifts, and misalignment, all of which are unrecognizable given such a loss function.
  • that loss function fails to capture human perception of sound quality. Because humans hear sound in terms of frequencies rather than as a collection of individual samples (i.e., discrete points) in the audio, a loss function such as this, which is based on sampling the audio, fails to enable a neural network to learn to improve audio based on human perceptions.
  • Embodiments of the present disclosure include techniques for enhancing audio.
  • For example, certain embodiments use a prediction model to generate a predicted waveform (i.e., predicted audio in a waveform representation) that is an enhanced version of a source waveform (i.e., source audio in a waveform representation) provided as input.
  • the predicted waveform is high quality (e.g., studio-quality) audio, which is improved over the source audio in terms of a combination of noise, reverberation, distortion, uneven equalization, or other quality issues.
  • certain embodiments train the prediction model based on a loss function that combines spectrogram loss and adversarial loss.
  • This loss function is coupled with generative adversarial training in a generative adversarial network (GAN).
  • audio refers to a set of audio data.
  • audios refers to one or multiple sets of audio data.
  • an audio could be an audio file or portion of an audio file, and multiple audios could be multiple audio files or portions thereof.
  • source audio refers to an audio (e.g., an audio file) having a source quality, or undesirable quality, of audio data.
  • target audio refers to an audio (e.g., an audio file) that has a target quality, such as studio quality, which is a desired quality of audio data.
  • predicted audio refers to an audio (e.g., an audio file) generated by a prediction model as a prediction of target audio for a given source audio.
  • an enhancement system for enhancing audio trains a prediction model to learn a mapping from source audios to target audios, where the target audios are enhanced versions of the source audios, and then utilizes the prediction model to predict predicted audios based on other source audios provided during operation.
  • the predicted audios are enhanced versions of the other source audios.
  • an administrator constructs a GAN, which includes a discriminator and also includes a prediction model acting as a generator in the GAN.
  • the administrator obtains training data to train the GAN. Specifically, for instance, the administrator extracts a set of target audios from the Device and Produced Speech (DAPS) dataset. Each target audio is a studio-quality audio recording. This dataset includes ten minutes worth of sample audio recordings from each of ten female speakers and ten male speakers.
  • the administrator also obtains 270 room impulse responses from the Massachusetts Institute of Technology (MIT) Impulse Response Survey dataset.
  • the administrator causes each target audio to be convolved with each room impulse response, resulting in 270 modified versions of each target audio.
  • the administrator adds noise to each such modified version and saves the resulting versions (i.e., as modified and with added noise) as source audios. Together, each source audio and its corresponding target audio form a tuple included in the training data.
  • the administrator trains the GAN, including both the prediction model and the discriminator, based on the training data.
  • the discriminator learns to distinguish authentic target audios (i.e., those in the training data) from inauthentic target audios (i.e., those generated by the prediction model)
  • the prediction model learns to generate progressively better predicted audios (i.e., closer to the corresponding target audios) based in part on feedback from the discriminator. More specifically, during training, the prediction model generates (i.e., predicts) predicted audios based on source audios provided, and the discriminator guesses whether various audios, including both predicted audios and target audios, are authentic.
  • the prediction model is updated based on a training signal output by a novel loss function, which includes a spectrogram loss component and an adversarial loss component.
  • the spectrogram loss component is based on a spectrogram representation of the predicted audio generated by the prediction model as compared to a spectrogram representation of the target audio corresponding to the source audio in the training data.
  • the adversarial loss component is based on feedback from the discriminator.
  • the prediction model is able to generate a predicted audio based on a source audio, where that predicted audio is meant to predict what target audio for the source audio would look like. More specifically, the predicted audio is a studio-quality version of the source audio, because the prediction model was trained to map source audios to studio-quality versions.
  • a user accesses the enhancement system through a web interface, which the user uses to upload an example source audio.
  • the user selects a submission button on the interface, which the enhancement system interprets as a request to enhance the example source audio.
  • the enhancement system utilizes the prediction model to map the example source audio to a resulting predicted audio, which the enhancement system then provides to the user.
  • the resulting predicted audio is a studio-quality version of the example source audio.
  • Certain embodiments described herein represent improvements in the technical field of audio enhancement. Specifically, some embodiments train a prediction model as part of a GAN, to enhance source audio to achieve studio-quality results even when the source audio includes multiple combined quality issues. Some embodiments of the prediction model are trained using a loss function that includes both a spectrogram loss and an adversarial loss. The combination of these losses enables the prediction model to progressively improve in a way that aligns with human perception, such that the prediction model learns to enhance audio effectively without concerning itself with small misalignments or phase shifts that would not be perceptible. Further, given the wide range of training data used in some embodiments, including audio data with multiple combined quality issues, the prediction model is able to generalize to new speakers in various environments with various combinations of quality issues.
  • FIG. 1 illustrates an interface 100 of an enhancement system, according to certain embodiments described herein.
  • An embodiment of the enhancement system causes an interface 100 , such as the one shown, for example, to be displayed to a user.
  • the interface 100 enables the user to request that the enhancement system generate a predicted audio 130 from a source audio 110 , where the predicted audio 130 is an enhanced (i.e., improved) version of the source audio 110 .
  • the source audio 110 has one or multiple quality issues, such as noise, reverberation, frequency distortion (e.g., poor equalization), or other distortion, and the resulting predicted audio 130 lacks such quality issues or at least has a reduction in such quality issues.
  • an example of the interface 100 enables a user to indicate a source audio 110 .
  • the interface 100 includes a source interface object 105 , such as a link or button, selectable by a user, such that selection of the source interface object 105 causes display of a file browser.
  • the user can use the file browser to indicate a source audio 110 .
  • the user has already used the source interface object 105 to choose a source audio 110 titled source-audio.wav.
  • the source audio 110 may be stored locally on a computing device that displays the interface 100 or may be stored remotely, in which case the user might provide a uniform resource locator (URL) to indicate a location of the source audio 110 .
  • URL uniform resource locator
  • the source audio 110 is represented as a waveform, which may be provided as a .wav file. Additionally or alternatively, however, the source audio 110 is in a compressed format or in some other audio representation. If the source audio 110 is not provided as a waveform, some embodiments of the enhancement system convert the source audio 110 to a waveform, such as by decompressing the data representing the source audio 110. Various techniques are known in the art for converting audio representations to waveform, and the enhancement system can use one or more of such techniques or others.
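  • As a non-limiting illustration (not part of the original disclosure), a compressed source audio could be decoded to a mono waveform with a standard audio library before being provided to the prediction model. The file name and the use of librosa below are assumptions; the 16 kHz rate matches the sampling rate discussed later in this description.
        # Minimal sketch, assuming librosa is available and the source file is an MP3.
        import librosa

        # Decode the compressed file to a 1-D floating-point waveform at 16 kHz.
        source_waveform, sample_rate = librosa.load("source-audio.mp3", sr=16000, mono=True)
        # source_waveform is now a waveform representation suitable as model input.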
  • the interface 100 enables one-click audio enhancement.
  • the interface 100 enables the user to request audio enhancement by selecting a single interface object.
  • the interface 100 provides the user with a submission interface object 120 , such as a link or button, that the user can select to request enhancement of the source audio 110 .
  • the interface 100 includes a button as the submission interface object 120 with the word “Go,” “Enhance,” or “Convert” on it.
  • the interface 100 enables the user to provide additional information, such as parameters, useable by the enhancement system when enhancing the source audio 110 . After providing such additional information, the user can then select the submission interface object 120 to request enhancement according to the additional information provided.
  • the enhancement system receives a request from the user to enhance the source audio 110 .
  • the user's selection of an interface object can be interpreted as such a request.
  • Responsive to receiving that request, an embodiment of the enhancement system generates a predicted audio 130 corresponding to the source audio 110 by applying a prediction model to the source audio 110.
  • the enhancement system has generated a predicted audio 130 with the filename predicted-audio.wav.
  • Based on the source audio 110, the prediction model generates the predicted audio 130, which the interface 100 provides to the user.
  • the interface 100 provides a results interface object 135 , such as a link or button, which the user can select to stream the predicted audio 130 or to store the predicted audio 130 on a local or remote computing device.
  • FIG. 2 is a diagram of an example architecture of the enhancement system 200 according to certain embodiments described herein.
  • an embodiment of the enhancement system 200 includes a training subsystem 210 and a prediction subsystem 220 .
  • the training subsystem 210 trains a prediction model 230 to learn a mapping between source audios 110 and target audios, thereby causing the prediction model 230 to be able to generate predicted audios 130 (i.e., predictions of hypothetical target audios) based on new source audios 110 ;
  • the prediction subsystem 220 receives a new source audio 110 and uses the prediction model 230 , after training, to generate a corresponding predicted audio 130 .
  • the prediction model 230 is implemented as a neural network, but it will be understood that other machine-learning models might be used additionally or alternatively to a neural network.
  • the interface 100 discussed above provides access to the prediction subsystem 220 , enabling a user to request that the prediction subsystem 220 enhance a new source audio 110 by generating a corresponding predicted audio 130 .
  • the training subsystem 210 and the prediction subsystem 220 are maintained on respective servers, specifically a training server 215 and a prediction server 225.
  • the training server 215 could be, for instance, a server utilized for software development or administrative tasks.
  • the training server 215 could be under the control of a company that produces all or a portion of an application embodying all or a portion of the prediction subsystem 220.
  • the prediction server 225 could be, for example, a web server that services a website useable to enhance audio by generating predicted audio 130 .
  • the interface 100 described above could be integrated into a web page of that website.
  • An example of the training server 215 is implemented as one or more computing nodes, and further, an example of the prediction server 225 is implemented as one or more computing nodes that may be distinct from or overlap with the computing nodes of the training server 215 .
  • the training subsystem 210 trains the prediction model 230 prior to use of the prediction model 230 to enhance audio as part of the prediction subsystem 220 .
  • the training subsystem 210 employs deep learning to update the prediction model 230, which is implemented as a neural network in some embodiments, so as to teach the prediction model 230 a mapping from source audios 110 to target audios in a set of training data.
  • the training subsystem 210 will be described in more detail below.
  • After training, the prediction model 230 may be accessible by the prediction subsystem 220. For instance, in the example of FIG. 2, the training server 215 copies (i.e., transmits) the trained prediction model 230 to the prediction server 225 for use by the prediction subsystem 220.
  • the prediction subsystem 220 can utilize the prediction model 230 to enhance audio.
  • a client 240 can access the enhancement system 200 ; specifically, for instance, the client 240 accesses the prediction subsystem 220 of the enhancement system 200 on behalf of a user to enhance source audio 110 provided by the user.
  • the user operates the client 240 .
  • the client 240 is a computing device, such as a notebook computer, a desktop computer, a tablet, or a smartphone, but it will be understood that the client 240 could be one or a combination of various computing devices.
  • the client 240 connects to the prediction subsystem 220 , which provides the interface 100 to the client 240 for display to the user.
  • the client 240 provides the source audio 110 to the prediction subsystem 220 and receives an indication of the predicted audio 130 after generation of the predicted audio 130 .
  • the user can indicate the source audio 110 to the client 240 , such as through the interface 100 displayed at the client 240 .
  • An example of the client 240 accesses the source audio 110 and transmits the source audio 110 to the prediction subsystem 220 , or additionally or alternatively, if the source audio 110 is stored remotely from the client 240 , an example of the client 240 indicates the source audio 110 to the prediction subsystem 220 , which then retrieves the source audio 110 .
  • an embodiment of the prediction subsystem 220 enhances the source audio 110 .
  • the prediction subsystem 220 applies the prediction model 230 to the source audio 110 , causing the prediction model 230 to generate a predicted audio 130 corresponding to the source audio 110 .
  • the predicted audio 130 is an enhanced version (e.g., a studio-quality version) of the source audio 110 .
  • the prediction subsystem 220 provides the predicted audio 130 to the client 240, such as by transmitting the predicted audio 130 to the client 240 or by transmitting to the client 240 a reference (e.g., a URL) to the predicted audio 130.
  • An example of the client 240 then provides the predicted audio 130 to the user as an enhanced version of the source audio 110 .
  • the prediction subsystem 220 including the prediction model 230 is stored locally on the client 240 rather than residing at a prediction server 225 .
  • the enhancement occurs locally at the client 240 .
  • this can enable the user to utilize the enhancement system 200 offline (i.e., without access to a remote server).
  • the prediction server 225 need not be a distinct device from the client 240 as shown in FIG. 2 . Rather, the prediction server 225 is integrated with the client 240 .
  • An example of the prediction subsystem 220 is integrated into, or accessible by, an application that enhances audio.
  • an application is an audio-related or video-related product, such as Adobe® Audition, Adobe Premiere, or Adobe Spark.
  • that application has a client portion that runs on the client 240 and communicates with a cloud service managed by the prediction server 225 .
  • the application is a local application that need not communicate with a remote server to generate predicted audios 130 .
  • the application is configured to enhance audio by applying the trained prediction model 230 to source audio 110 to generate predicted audio 130 that is potentially studio quality.
  • FIG. 3 is a block diagram of the training subsystem 210 according to some embodiments.
  • the training subsystem 210 includes a generative adversarial network 310 , which is used to train the prediction model 230 as well as to train a discriminator 320 .
  • the prediction model 230 itself is a neural network acting as a generator, which is trained jointly with the discriminator 320 in an adversarial manner.
  • the GAN 310 of the training subsystem 210 further includes an evaluation tool 330 , which applies one or more objective functions, also referred to as loss functions, to modify the prediction model 230 , the discriminator 320 , or both.
  • the prediction model 230 acting as a generator, learns to generate predicted audio 130 given source audio 110 , while the discriminator 320 learns to determine whether audio provided to the discriminator 320 is authentic (i.e., is target audio that is an enhanced version of the source audio 110 ).
  • the GAN 310 is thus adversarial in nature because the prediction model 230 and the discriminator 320 compete against each other and thus improve jointly.
  • the discriminator 320 must improve in order to continue identifying fakes (i.e., audios other than target audios of input source audios 110 ), and as the discriminator 320 improves, the generator must improve to continue producing predicted audios 130 capable of fooling the discriminator 320 .
  • the adversarial nature of the GAN 310 and of the training subsystem 210 causes the prediction model 230 to learn how to convert source audios 110 into corresponding predicted audios 130 (i.e., to generate predicted audios 130 based on source audios 110 ) that are close to the corresponding target audios.
  • the prediction model 230 learns how to predict audio (i.e., to generate predicted audio 130 ) based on training data 340 .
  • the training data 340 includes a set of tuples, each tuple including a source audio 110 and a corresponding target audio 345 .
  • the target audio 345 is a high quality (e.g., studio quality) version of the corresponding source audio 110 .
  • the training subsystem 210 seeks to teach the prediction model 230 how to generate predicted audio 130 that matches the target audio 345 for each source audio 110 in the training data 340 .
  • an example of the prediction model 230 learns a mapping of source audios 110 having source quality to target audios 345 having target quality, such as studio quality. As a result, the prediction model 230 is then potentially able to generate a new predicted audio 130 for a new source audio 110 that is not part of the training data 340 , where that predicted audio 130 is of the target quality.
  • the evaluation tool 330 executes a generator loss function 350 and a discriminator loss function 360 .
  • the generator loss function 350 includes both a spectrogram loss 370 and an adversarial loss 380 .
  • the evaluation tool 330 updates the prediction model to better represent the mapping from source audios to target audios 345 in the training data 340 .
  • the evaluation tool 330 updates the discriminator 320 to better identify authentic target audios 345 .
  • FIG. 4 is a flow diagram of a process 400 of training a prediction model 230 to map source audios 110 to corresponding target audios 345 , so as to teach the prediction model 230 to predict audios corresponding to new source audios 110 , according to some embodiments.
  • some embodiments described herein acquire seed data for use in training the prediction model 230 . All or a portion of the seed data is used as training data 340 .
  • An example of seed data includes a set of tuples, each tuple including a source audio 110 and a target audio 345 corresponding to the source audio 110 .
  • each tuple need not include unique target audio 345; rather, as described below, a particular target audio 345 may be included in multiple tuples, paired with a different corresponding source audio 110 in each such tuple.
  • the training subsystem 210 receives a first portion of the seed data (e.g., target audios 345 ) and simulates (e.g., generates by simulation) a second portion of the seed data (e.g., source audios 110 ) based on that first portion.
  • the process 400 of the training subsystem 210 begins at block 405 of the flow diagram shown in FIG. 4 .
  • Prior to block 405, the seed data is empty or includes seed data collected previously. However, additional tuples are added to the seed data as described below, specifically at blocks 405 through 440.
  • block 405 of the process involves obtaining (i.e., receiving) target audios 345 .
  • each such target audio 345 obtained is a studio-quality sample of audio. More generally, each such target audio 345 is an example of acceptable output desired to be generated by the prediction model 230 .
  • the training subsystem 210 can obtain the target audios 345 in various ways.
  • the training subsystem 210 downloads or otherwise receives the target audios 345 from an existing data set, such as the DAPS dataset.
  • the DAPS dataset includes a ten-minute audio clip from each of twenty speakers, including ten male voices and ten female voices, and an example of the training subsystem 210 downloads these audio clips to use each as a target audio 345.
  • the training subsystem 210 can use each ten-minute clip as a target audio 345 , or the training subsystem 210 can divide each ten-minute clip into smaller clips, such that each smaller clip is used as a target audio 345 .
  • the training subsystem 210 trains the GAN 310 with target audios 345 and source audios 110 in waveform representations.
  • the training subsystem 210 receives the target audios 345 as waveforms, or the training subsystem 210 converts each target audio 345 to a corresponding waveform. For instance, if a target audio 345 is received in a compressed format, the training subsystem 210 decompresses the compressed format to extract a version of the target audio 345 that is an uncompressed waveform.
  • some embodiments of the training subsystem 210 generate source audios 110 based on the target audios 345 , where each corresponding source audio 110 of a target audio 345 is a reduced-quality version of the target audio 345 .
  • the source audios 110 generated below are generated as waveforms in some embodiments as well.
  • the process 400 involves obtaining a set of room impulse responses.
  • a room impulse response describes how sound changes between its source (e.g., a speaker) and a microphone that records the resulting audio.
  • the room impulse response associated with a specific environment reflects how environmental factors (e.g., reverberation and background noise) impact sound as recorded in that environment.
  • the room impulse responses are extracted or otherwise received from the MIT Impulse Response Survey dataset, which currently includes 270 room impulse responses.
  • the room impulse responses are used to reduce the quality of the target audios 345 so as to generate source audios 110 . It will be understood, however, that other techniques exist and can be used to generate the source audios 110 based on the target audios 345 .
  • Block 415 of the process 400 begins an iterative loop, where, in each iteration of the loop, the training subsystem 210 generates a corresponding set of source audios 110 , including one or multiple source audios 110 , based on a given target audio 345 .
  • the process 400 involves selecting a target audio 345 to associate with the current iteration of the loop, where source audios 110 have not yet been generated for the selected target audio 345 in prior iterations of the loop. Specifically, for instance, from among the target audios 345 received at block 405 , the training subsystem 210 selects one of such target audios 345 for which source audios 110 have not yet been generated in a prior loop, if any such prior loops exist.
  • the process 400 involves applying each room impulse response to the selected target audio 345 to generate a set of source audios 110 .
  • the training subsystem 210 convolves the selected target audio 345 with each room impulse response separately, resulting in a set of source audios 110 in the same quantity as the set of room impulse responses used.
  • the set of source audios 110 resulting includes a corresponding source audio 110 for each room impulse response applied, and all such source audios 110 are based on and therefore correspond to the selected target audio 345 .
  • these resulting source audios 110 are intermediate and are further modified prior to training the prediction model 230 , as described below.
  • the process 400 involves adding noise to each source audio 110 that was generated at block 420 based on the selected target audio 345 .
  • the training subsystem 210 adds noise to each source audio 110 , such as by adding noise extracted from a Gaussian distribution, resulting in an updated version of the source audio 110 that includes noise and still corresponds to the selected target audio 345 .
  • the selected target audio 345 is now associated with a set of source audios 110 that include additional noise.
  • the training subsystem 210 does not add noise to the source audios 110 , and in that case, block 425 is skipped.
  • the process 400 involves generating a set of tuples corresponding to the selected target audio 345 .
  • the training subsystem 210 generates a respective tuple associated with each source audio 110 generated at block 425 , if noise is being added, or generated at block 420 , if noise is not being added.
  • Such a tuple includes the source audio 110 as well as the selected target audio 345 on which the source audio 110 is based.
  • the training subsystem 210 constructs a set of tuples having a quantity equal to the quantity of source audios 110 , which can be equal to the quantity of room impulse responses used.
  • This set of tuples associated with the selected target audio 345 is a subset of the tuples that make up the seed data, which can include not only the selected target audio 345 , but also other target audios 345 obtained at block 405 .
  • the process involves adding the tuples generated at block 430 to seed data used to train the prediction model 230 .
  • the training subsystem 210 updates the seed data by adding each tuple, including the selected target audio 345 and a corresponding source audio 110 , to the seed data.
  • the training subsystem 210 determines whether all target audios 345 have been selected in block 415 and have had associated source audios 110 generated. If any target audios 345 have not yet been selected, then the process 400 returns to block 415, where another target audio 345 is selected. However, if all target audios 345 have been selected and have had corresponding source audios 110 generated, then the process 400 continues to block 445.
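  • A compact sketch of blocks 415 through 440 appears below. It is an illustration rather than the patent's own code: the function name, the Gaussian noise level, and the truncation of each convolution to the target's length are assumptions, and the augmentation steps described later in this section are omitted.
        # Minimal sketch of the data-simulation loop (blocks 415-440), assuming NumPy arrays.
        import numpy as np
        from scipy.signal import fftconvolve

        def build_seed_data(target_audios, room_impulse_responses, noise_std=0.01):
            """Generate (source audio, target audio) tuples from studio-quality targets."""
            seed_data = []
            for target in target_audios:                   # block 415: select a target audio
                for rir in room_impulse_responses:         # block 420: apply each room impulse response
                    source = fftconvolve(target, rir)[: len(target)]
                    # block 425: add noise (a Gaussian distribution is one option noted above)
                    source = source + np.random.normal(0.0, noise_std, size=source.shape)
                    seed_data.append((source, target))     # blocks 430-440: form tuple, add to seed data
            return seed_data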
  • the process 400 involves selecting at least a portion of the seed data to use as training data 340 .
  • not all of the seed data is used as training data 340 to train the GAN 310 .
  • an example of the training subsystem 210 sets aside a portion of the seed data for validation and testing of the prediction model 230 , and that portion is not included in the training data 340 .
  • the training subsystem 210 identifies target audios 345 corresponding to two minutes of audio from one female voice and two minutes of audio from one male voice.
  • the training subsystem 210 selects a subset of the corresponding tuples that represent seventy environments (i.e., seventy room impulse responses). The training subsystem 210 extracts those selected tuples to be used for validation and testing, rather than as training data 340 . The training subsystem 210 uses the remaining seed data as training data 340 .
  • the process 400 involves training the prediction model 230 within the GAN 310 .
  • the training subsystem 210 utilizes the training data 340 selected at block 445 to train the GAN 310 and, thus, to train the prediction model 230 . Due to being trained based on this diverse set of training data 340 including multiple speakers and multiple environments, the prediction model 230 is able to generalize its learning to recordings of new speakers, new speech content, and new recording environments.
  • Training within the GAN 310 utilizes one or more loss functions, as will be described further below, to update the prediction model 230 and the discriminator 320 .
  • one of skill in the art will understand how to utilize the training data 340 to train the prediction model 230 in the GAN 310 , utilizing one or more loss functions.
  • embodiments described herein can utilize a novel set of loss functions, as described in more detail below.
  • the training subsystem 210 trains the prediction model 230 of the GAN 310 for at least one million iterations with a batch size of ten using an Adam optimization algorithm with an initial learning rate of 0.001, reduced by a factor of ten every three hundred thousand iterations. Given the pre-trained prediction model 230 , the training subsystem 210 trains the discriminator 320 from scratch for five thousand iterations while keeping the generator fixed. The training subsystem 210 then performs joint training on both the prediction model 230 and the discriminator 320 .
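  • A schematic of the first stage of that schedule is sketched below under the assumption of a PyTorch-style training loop. The function signature, the batch iterator, and the spectrogram loss callable (for example, one like the two-resolution loss sketched later in this section) are placeholders supplied by the caller, not elements of the disclosure.
        # Minimal sketch of generator pre-training: Adam, initial learning rate 0.001,
        # reduced by a factor of ten every 300,000 iterations, at least one million steps.
        import torch

        def pretrain_generator(generator, train_batches, spectrogram_loss, max_steps=1_000_000):
            opt = torch.optim.Adam(generator.parameters(), lr=0.001)
            sched = torch.optim.lr_scheduler.StepLR(opt, step_size=300_000, gamma=0.1)
            for step, (source, target) in enumerate(train_batches):   # batch size ten assumed upstream
                loss = spectrogram_loss(generator(source), target)
                opt.zero_grad()
                loss.backward()
                opt.step()
                sched.step()
                if step + 1 >= max_steps:
                    break
            return generator
        # Stage 2 (not shown): train the discriminator from scratch for 5,000 iterations with
        # the generator fixed; Stage 3: joint training of both networks using the losses below.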
  • the training subsystem 210 performs one or more tasks to overcome overfitting of the prediction model 230 .
  • the training subsystem 210 generates augmented versions of the target audios 345 prior to generating the corresponding source audios 110 , and the training subsystem 210 bases the source audios 110 on the augmented versions, although the original versions of the target audios 345 are used in the tuples of the seed data.
  • the room impulse responses are applied to the augmented versions rather than to the original target audios 345 themselves.
  • a target audio 345 is augmented, for instance, by re-scaling its volume, changing its speed via linear interpolation, or both.
  • the training subsystem 210 augments one or more room impulse responses before applying such room impulses to the target audios 345 , regardless of whether the target audios 345 have been augmented themselves; for instance, a room impulse response is augmented by re-scaling the energy of its reverberation while keeping its direct signal the same, by changing reverberation time such as via nearest neighbor interpolation, or by the use of both such techniques.
  • When adding noise, the training subsystem 210 adds noise to various source audios 110 at various signal-to-noise ratios ranging from, for instance, twenty decibels to thirty decibels, resulting in a relatively wide range of noise levels being added to the source audios 110 for a given target audio 345.
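  • One way to realize the augmentations just described (volume re-scaling, speed change via linear interpolation, and noise addition at a chosen signal-to-noise ratio) is sketched below. The specific scaling and speed ranges are illustrative assumptions; the text above does not fix them.
        # Minimal augmentation sketch, assuming NumPy arrays and a numpy.random.Generator `rng`.
        import numpy as np

        def augment_target(waveform, rng):
            """Re-scale volume and change speed via linear interpolation (illustrative ranges)."""
            waveform = waveform * rng.uniform(0.5, 1.0)                    # volume re-scaling
            speed = rng.uniform(0.9, 1.1)                                  # speed change factor
            old_idx = np.arange(len(waveform))
            new_idx = np.linspace(0, len(waveform) - 1, int(len(waveform) / speed))
            return np.interp(new_idx, old_idx, waveform)                   # linear interpolation

        def add_noise_at_snr(source, noise, snr_db):
            """Mix noise into a source audio at a target SNR (e.g., drawn from 20-30 dB)."""
            noise = np.resize(noise, source.shape)
            signal_power = np.mean(source ** 2)
            noise_power = np.mean(noise ** 2) + 1e-12
            scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
            return source + scale * noise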
  • training a GAN 310 requires use of one or more loss functions (i.e., objective functions), for instance including a first loss function for training the generator and a second loss function for training the discriminator 320 .
  • Some embodiments of the training subsystem 210 utilize a novel set of loss functions so as to train the prediction model 230 , acting as the generator, to learn a mapping between source audios 110 and target audios 345 .
  • the training subsystem 210 utilizes a generator loss function 350 to train the prediction model 230 , where the generator loss function 350 is designed in consideration of human perceptions.
  • the generator loss function 350 is perceptually motivated in that it incorporates an aspect of spectrogram loss 370 , which discounts small misalignments and phase shifts that are typically not perceivable.
  • the generator loss function 350 includes a spectrogram loss 370 , or spectrogram loss component, based on a spectrogram representation of the predicted audio 130 as compared to a spectrogram representation of the target audio 345 , and the generator loss function 350 further includes an adversarial loss 380 , or adversarial loss component, that is based on feedback from the discriminator 320 .
  • an example of the training subsystem 210 computes the L2 difference (i.e., the least squares error) of the log spectrogram of a predicted waveform (i.e., the predicted audio 130 in a waveform representation) as compared to a target waveform (i.e., the target audio 345 in a waveform representation).
  • the spectrogram loss 370 is invariant to small misalignments between the source waveform (i.e., the source audio 110 in waveform representation) and the target waveform during training. As such, the spectrogram loss 370 better accords with human perception of speech quality, compared to popular L1 or L2 sample-based (i.e., based on consideration of discrete points, or samples) loss functions.
  • joint training in conjunction with the discriminator 320 removes noise and artifacts in predicted waveforms.
  • the discriminator 320 tries to distinguish between output of the prediction model 230 and authentic clean speech not outputted by the prediction model 230 .
  • the discriminator 320 learns to identify telltale artifacts that occur during generation by the prediction model 230 .
  • the prediction model 230 improves its fidelity as it learns to fool the discriminator 320 in an adversarial manner, and doing so causes such telltale artifacts to be reduced over time.
  • the generator loss function 350 and the discriminator loss function 360 are differentiable, and thus, some embodiments of the training subsystem 210 use standard forward and back propagation to train the prediction model 230 and the discriminator 320 .
  • the prediction model G, acting as the generator, optimizes (e.g., minimizes) a generator loss function L_G, which is the sum of a spectrogram loss 370 and an adversarial loss 380, as follows:
  • the discriminator D optimizes (e.g., minimizes) a discriminator loss function L_D as follows:
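  • The expressions themselves do not survive in this extract. A reconstruction consistent with the term-by-term description below, with x a source audio 110, y the corresponding target audio 345, G the prediction model 230, and D the discriminator 320, is given here; the exact form of L_D is an assumption (a least-squares-style objective), since it is not described in this extract:
        L_G = \alpha \,\bigl\| \mathrm{LogSpec}(G(x)) - \mathrm{LogSpec}(y) \bigr\|_2^2 \;+\; \beta \,\bigl(1 - D(\mathrm{LogMel}(G(x)))\bigr)
        L_D = \bigl(1 - D(\mathrm{LogMel}(y))\bigr)^2 \;+\; D(\mathrm{LogMel}(G(x)))^2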
  • LogSpec represents the log spectrogram
  • LogMel represents the log mel-spectrogram.
  • the values α and β are weighting factors.
  • accordingly, an embodiment of the generator loss function 350 incorporates both a spectrogram loss 370 and an adversarial loss 380.
  • the difference between the log spectrogram of the predicted audio 130, LogSpec(G(x)), and the log spectrogram of the target audio 345, LogSpec(y), represents a spectrogram loss 370.
  • the term 1 − D(LogMel(G(x))) represents an adversarial loss 380 because this difference is the discriminator score of the log mel-spectrogram of the predicted audio 130 subtracted from the desired discriminator score of 1, where a discriminator score of 1 would indicate a discriminator 320 finding of authenticity at the highest likelihood.
  • the above generator loss function 350 includes a combination of spectrogram loss 370 and adversarial loss 380 .
  • the weighting factors α and β for that combination are selected such that the spectrogram loss 370 and the adversarial loss 380 are considered at roughly the same magnitude, in other words, such that the α-weighted spectrogram loss and the β-weighted adversarial loss contribute comparably to the generator loss function 350.
  • the training subsystem 210 computes multiple spectrogram losses
  • the training subsystem 210 uses an equally weighted combination of two spectrogram losses with two sets of STFTs for the sampling rate of 16 kHz: one spectrogram loss with a relatively large fast Fourier transform (FFT) window size of 2048 and a hop size of 512, and another spectrogram loss with a smaller FFT window size of 512 and a hop size of 128.
  • the larger FFT window size gives greater frequency resolution.
  • the smaller FFT window size gives greater temporal resolution.
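  • The equally weighted two-resolution spectrogram loss just described could be computed as sketched below, assuming 16 kHz audio and PyTorch tensors; the Hann window, the small epsilon inside the logarithm, and the mean reduction are illustrative details not specified above.
        # Minimal sketch of an equally weighted, two-resolution log-spectrogram L2 loss.
        import torch

        def log_spectrogram(waveform, n_fft, hop):
            """Magnitude STFT on a log scale; the epsilon avoids log(0)."""
            spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                              window=torch.hann_window(n_fft), return_complex=True)
            return torch.log(spec.abs() + 1e-7)

        def spectrogram_loss(predicted, target):
            """L2 loss averaged over two STFT resolutions: 2048/512 and 512/128."""
            loss = 0.0
            for n_fft, hop in ((2048, 512), (512, 128)):
                loss = loss + torch.mean(
                    (log_spectrogram(predicted, n_fft, hop) -
                     log_spectrogram(target, n_fft, hop)) ** 2)
            return 0.5 * loss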
  • this example training subsystem 210 uses kernel sizes of (3, 9), (3, 8), (3, 8), and (3, 6), stride sizes of (1, 2), (1, 2), (1, 2), and (1, 2), and channel sizes of (1, 32), (32, 32), (32, 32), (32, 32) respectively for the sequence of the network layers.
  • the input to the discriminator 320 is computed as the log mel-spectrogram with 80 mel bandpass filters ranging from 80 Hz to 7600 Hz, using the STFT parameters of the larger FFT window.
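  • For illustration only, that discriminator input (80 mel bands spanning 80 Hz to 7600 Hz, computed with the larger FFT window) could be produced with torchaudio as sketched below; the sample rate of 16 kHz and the epsilon are assumptions carried over from the surrounding description.
        # Minimal sketch of the log mel-spectrogram used as discriminator input.
        import torch
        import torchaudio

        mel_transform = torchaudio.transforms.MelSpectrogram(
            sample_rate=16000, n_fft=2048, hop_length=512,
            f_min=80.0, f_max=7600.0, n_mels=80)

        def log_mel(waveform):
            return torch.log(mel_transform(waveform) + 1e-7)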
  • FIG. 5 shows an example architecture of the prediction model 230 within the GAN 310 , according to some embodiments.
  • an example of the prediction model 230 is the same as, or similar to, the feed-forward WaveNet architecture as presented in the work of Rethage, Pons, and Serra (Dario Rethage, “A Wavenet for Speech Denoising,” IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, pp. 5039-5043).
  • the training subsystem 210 trains the prediction model 230 and the discriminator 320 , specifically, based on training data 340 that includes source audios 110 and target audios 345 .
  • the training subsystem 210 trains the prediction model 230 using a training signal output by the generator loss function 350 , which includes both a spectrogram loss 370 and an adversarial loss 380 as described above.
  • An example of the prediction model 230 generates predicted audios 130 in waveforms (i.e., predicted waveforms 510 ). Although training data 340 is not shown in FIG. 5 , the generation of such predicted audios 130 is based on corresponding source audios 110 in the training data 340 in some embodiments, as described above.
  • An example of the training subsystem 210 applies an STFT to the predicted waveforms 510 to generate predicted spectrograms 520, which are spectrogram representations of the predicted audios 130.
  • the training subsystem 210 provides to the discriminator 320 the predicted spectrograms 520 generated during training as well as target spectrograms, which are spectrogram representations of target audios 345 in the training data 340 .
  • the discriminator 320 predicts which of the various spectrograms received are authentic (i.e., are target spectrograms); specifically, the discriminator 320 generates a score associated with each spectrogram to indicate the likelihood that the spectrogram is authentic.
  • an evaluation tool 330 is configured to apply the generator loss function 350 and the discriminator loss function 360 .
  • the evaluation tool 330 applies the generator loss function 350 to the corresponding target audio 345 , predicted audio 130 , and discriminator score for the predicted audio 130 (e.g., according to the formula for the generator loss function 350 described above) to determine a generator loss value.
  • Various such generator loss values together form a generator training signal, where that generator training signal is used by the training subsystem 210 to update the weights of nodes within the neural network of the prediction model 230 so as to train the prediction model 230 through backpropagation.
  • the evaluation tool 330 applies the discriminator loss function 360 to the corresponding discriminator score of the predicted audio 130 and the discriminator score of the target audio 345 (e.g., according to the formula for the discriminator loss function 360 described above) to determine a discriminator loss value.
  • Various such discriminator loss values together form a discriminator training signal, where that discriminator training signal is used by the training subsystem 210 to update the weights of nodes within the neural network of the discriminator 320 so as to train the discriminator 320 through backpropagation.
  • an example of the prediction model 230 is implemented as a neural network including stacks of one-dimensional (1D) dilated convolutions. The dilation rates exponentially increase for each layer up to a limit and then repeat.
  • the prediction model 230 further includes one or more Gated Activation Units (GAU) for nonlinearity.
  • GAU Gated Activation Units
  • a unit block of the prediction model 230 performs the following computations, where z_k, z_{r,k}, and z_{s,k} are intermediate values used to compute x_{k+1}:
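  • The computations do not survive in this extract. A reconstruction consistent with the gated activation, residual, and skip structure described in the surrounding bullets is given below, where * denotes convolution with learned filters W, ⊙ element-wise multiplication, and σ the sigmoid function; the exact filter arrangement is an assumption based on the feed-forward WaveNet architecture cited above:
        z_k = \tanh(W_{f,k} * x_k) \odot \sigma(W_{g,k} * x_k)
        z_{r,k} = W_{r,k} * z_k, \qquad z_{s,k} = W_{s,k} * z_k
        x_{k+1} = x_k + z_{r,k}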
  • the prediction model 230 uses both a residual connection and a skip connection.
  • the value of x r,k is computed for the residual connection and is added to the input of the current block to construct input to the next block.
  • the prediction model 230 sums outputs z s,k , for all k, from the various skip connections.
  • the prediction model 230 applies Rectified Linear Unit (ReLU) followed by three 1D non-dilated convolutions in sequence during post-processing. The last of the three 1D non-dilated convolutions uses a filter width of 1 and projects the channel dimension to 1.
  • ReLU Rectified Linear Unit
  • the channel size is 128 across the entire neural network because, for instance, the source waveform is projected to 128 channel dimensions using 1D non-dilated convolution during preprocessing.
  • Some embodiments use a filter width of 3 for the dilated convolutions and post-processing, and a filter width of 1 for skip connections and residual connections.
  • the dilation rates exponentially increase for each layer up to a limit and then repeat. Specifically, for instance, the sequence of dilation rates is, for example, (1, 2, 4, 8, 16, 32, 64, 128, 256, 512).
  • the prediction model 230 includes two such stacks of dilated convolutions, resulting in a total of 20 layers.
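  • A compact PyTorch sketch of one such unit block, with the gated activation, residual, and skip connections described above (128 channels, filter width 3 for the dilated convolutions, filter width 1 for the residual and skip projections), is shown below; it is an illustration rather than the patent's own implementation, and padding choices are assumptions.
        # Minimal sketch of a dilated-convolution unit block with gated activation.
        import torch
        import torch.nn as nn

        class UnitBlock(nn.Module):
            def __init__(self, channels=128, dilation=1):
                super().__init__()
                self.filter_conv = nn.Conv1d(channels, channels, kernel_size=3,
                                             dilation=dilation, padding=dilation)
                self.gate_conv = nn.Conv1d(channels, channels, kernel_size=3,
                                           dilation=dilation, padding=dilation)
                self.residual_conv = nn.Conv1d(channels, channels, kernel_size=1)
                self.skip_conv = nn.Conv1d(channels, channels, kernel_size=1)

            def forward(self, x):
                # Gated activation unit: tanh branch modulated by a sigmoid gate.
                z = torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))
                return x + self.residual_conv(z), self.skip_conv(z)  # (next input, skip output)

        # Two stacks of ten blocks with dilation rates (1, 2, 4, ..., 512) give 20 layers;
        # the skip outputs are summed and post-processed with ReLU and three non-dilated convolutions.
        blocks = nn.ModuleList(UnitBlock(dilation=2 ** i) for _ in range(2) for i in range(10))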
  • the discriminator 320 takes in the log mel-spectrogram of target audios 345 and predicted audios 130 and, based on the log mel-spectrogram, outputs a prediction for each. For instance, that prediction is a score indicating a believed likelihood that the target audio 345 or predicted audio 130 is authentic (i.e., is a target audio 345 ).
  • An example discriminator 320 takes the structure described in StarGAN-VC (Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo, “StarGAN-VC: Non-Parallel Many-to-Many Voice Conversion with Star Generative Adversarial Networks,” arXiv preprint arXiv:1806.02169, 2018).
  • the example of the discriminator 320 is a gated convolutional neural network (CNN) with several stacks of convolutional layers, a batch normalization layer, and a Gated Linear Unit (GLU).
  • the discriminator 320 is fully convolutional, thus allowing inputs of arbitrary temporal length (i.e., audios of various lengths).
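  • A sketch of a gated CNN discriminator consistent with that description appears below. The kernel, stride, and channel sizes follow the layer description given earlier in this section; the doubled channels feeding each Gated Linear Unit, the final 1×1 projection, and the averaging into a single score are assumptions for illustration.
        # Minimal sketch of a fully convolutional gated CNN discriminator over log mel-spectrograms.
        import torch
        import torch.nn as nn

        class GatedDiscriminator(nn.Module):
            def __init__(self):
                super().__init__()
                specs = [  # (in_ch, out_ch, kernel, stride) per the preceding description
                    (1, 32, (3, 9), (1, 2)),
                    (32, 32, (3, 8), (1, 2)),
                    (32, 32, (3, 8), (1, 2)),
                    (32, 32, (3, 6), (1, 2)),
                ]
                layers = []
                for in_ch, out_ch, kernel, stride in specs:
                    layers += [nn.Conv2d(in_ch, 2 * out_ch, kernel, stride),  # doubled for the GLU gate
                               nn.BatchNorm2d(2 * out_ch),
                               nn.GLU(dim=1)]                                 # gated linear unit
                self.body = nn.Sequential(*layers)
                self.head = nn.Conv2d(32, 1, kernel_size=1)

            def forward(self, log_mel):  # shape: (batch, 1, mel_bins, frames), any frame count
                # Per-patch authenticity scores averaged into one score per example (illustrative choice).
                return torch.sigmoid(self.head(self.body(log_mel))).mean(dim=(1, 2, 3))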
  • FIG. 6 is a flow diagram of a process 600 of utilizing the prediction model 230 after training, according to some embodiments.
  • the prediction subsystem 220 performs part or all of the process 600 described below.
  • the training subsystem 210 performs the process 400 of FIG. 4 prior to the process 600 of FIG. 6 being performed, thus causing the prediction model 230 to have previously been trained as part of the GAN 310 before being placed into operation.
  • the process 600 involves receiving a source audio 110 .
  • the source audio 110 is received from a user via an interface 100 , where that user can be human or automated.
  • that interface 100 enables a user to indicate the source audio 110 , thereby enabling the prediction subsystem 220 to access the source audio 110 .
  • the source audio 110 is received as a waveform.
  • an embodiment of the prediction subsystem 220 converts the source audio 110 to a waveform.
  • the process 600 involves receiving a request to enhance the source audio 110 .
  • an example of the interface 100 includes a button or link enabling the user to request enhancement of the source audio 110 .
  • the request can be made through a single click or selection made by the user, such as by selecting a button labeled “Go,” “Enhance,” or “Convert” in the interface 100 . If the request is received through the interface 100 and the interface 100 is displayed on a first computing device other than a second computing device running the prediction subsystem 220 , as shown in FIG. 2 , then the first computing device transmits the request to the prediction subsystem 220 at the second computing device, where the request is received and processed.
  • the process 600 includes providing the source audio 110 to the prediction model 230 .
  • an embodiment of the prediction subsystem 220 inputs the source audio 110 into the prediction model 230.
  • the process 600 includes generating a predicted audio 130 corresponding to the source audio 110 .
  • Based on the source audio 110, the prediction model 230 generates and outputs the predicted audio 130.
  • the prediction model 230 learned to map a new source audio 110 to an enhanced version (e.g., a studio-quality version) of the new source audio 110 .
  • the predicted audio 130 generated at this block 620 is an enhanced version of the source audio 110 .
  • the process 600 involves outputting the predicted audio 130 generated at block 620 , such as by providing the predicted audio 130 to the user. For instance, if the prediction subsystem 220 runs on a second computing device that differs from a first computing device displaying the interface 100 to the user, then the prediction subsystem 220 causes the second computing device to transmit the predicted audio 130 to the first computing device, which provides the predicted audio 130 to the user. For instance, the first computing device enables the user to download or stream the predicted audio 130 through the interface 100 .
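  • As a hypothetical end-to-end illustration of process 600 (Python, using the soundfile library for reading and writing waveforms; the file names, the mono assumption, and the Generator class from the earlier sketch are assumptions):

      import soundfile as sf
      import torch

      def enhance(source_path, output_path, model):
          """Process 600 in miniature: receive a source audio, apply the trained prediction
          model, and output the predicted (enhanced) audio. Assumes a mono waveform."""
          waveform, sample_rate = sf.read(source_path, dtype="float32")
          x = torch.from_numpy(waveform).reshape(1, 1, -1)          # (batch, channel, samples)
          with torch.no_grad():                                     # the model has already been trained
              predicted = model(x)
          sf.write(output_path, predicted.squeeze().numpy(), sample_rate)
          return output_path

      # Hypothetical usage, assuming a trained Generator from the earlier sketch:
      # model = Generator(); model.load_state_dict(torch.load("prediction_model.pt")); model.eval()
      # enhance("source-audio.wav", "predicted-audio.wav", model)
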
  • embodiments described herein provide an end-to-end deep learning solution to enhancing audio, including audio with multiple types of quality issues. Some embodiments fix noise, reverberation, and distortion (e.g., undesirable equalization) to raise a source audio 110 to studio quality sufficient for a professional production.
  • FIG. 7 depicts an example of a computing system 700 that can be used as the prediction server 225 , the training server 215 , or various other computing devices performing operations described herein.
  • the computing system 700 executes all or a portion of the prediction subsystem 220 , the training subsystem 210 , or both.
  • a first computing system having devices similar to those depicted in FIG. 7 (e.g., a processor, a memory, etc.) executes the prediction subsystem 220 while another such computing system executes the training subsystem 210, such as in the example of FIG. 2.
  • the depicted example of a computing system 700 includes a processor 702 communicatively coupled to one or more memory devices 704 .
  • the processor 702 executes computer-executable program code stored in a memory device 704 , accesses information stored in the memory device 704 , or both.
  • Examples of the processor 702 include a microprocessor, an application-specific integrated circuit (“ASIC”), a field-programmable gate array (“FPGA”), or any other suitable processing device.
  • the processor 702 can include any number of processing devices, including a single processing device.
  • the memory device 704 includes any suitable non-transitory computer-readable medium for storing data, program code, or both.
  • a computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code.
  • Non-limiting examples of a computer-readable medium include a magnetic disk, a memory chip, a ROM, a RAM, an ASIC, optical storage, magnetic tape or other magnetic storage, or any other medium from which a processing device can read instructions.
  • the instructions may include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript.
  • the computing system 700 may also include a number of external or internal devices, such as input or output devices.
  • the computing system 700 is shown with one or more input/output (“I/O”) interfaces 708 .
  • I/O interface 708 can receive input from input devices or provide output to output devices.
  • One or more buses 706 are also included in the computing system 700 .
  • the bus 706 communicatively couples one or more components of the computing system 700.
  • the computing system 700 executes program code that configures the processor 702 to perform one or more of the operations described herein.
  • the program code includes, for example, the prediction subsystem 220 , the prediction model 230 , the training subsystem 210 , or other suitable applications that perform one or more operations described herein.
  • the program code may be resident in the memory device 704 or any suitable computer-readable medium and may be executed by the processor 702 or any other suitable processor.
  • both the prediction subsystem 220 and the training subsystem 210 are stored in the memory device 704 , as depicted in FIG. 7 .
  • one or more of the prediction subsystem 220 and the training subsystem 210 are stored in different memory devices of different computing systems.
  • the program code described above is stored in one or more other memory devices accessible via a data network.
  • the computing system 700 can access the prediction model 230 or other models, datasets, or functions in any suitable manner.
  • some or all of one or more of these models, datasets, and functions are stored in the memory device 704 of a common computer system 700 , as in the example depicted in FIG. 7 .
  • a separate computing system that executes the training subsystem 210 can provide access to the trained prediction model 230 to enable the prediction subsystem 220 to run the prediction model 230.
  • one or more programs, models, datasets, and functions described herein are stored in one or more other memory devices accessible via a data network.
  • the computing system 700 also includes a network interface device 710 .
  • the network interface device 710 includes any device or group of devices suitable for establishing a wired or wireless data connection to one or more data networks.
  • Non-limiting examples of the network interface device 710 include an Ethernet network adapter, a modem, and the like.
  • the computing system 700 is able to communicate with one or more other computing devices (e.g., a computing device acting as a client 240 ) via a data network using the network interface device 710 .
  • a computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs.
  • Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
  • Embodiments of the methods disclosed herein may be performed in the operation of such computing devices.
  • the order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.

Abstract

Operations of a method include receiving a request to enhance a new source audio. Responsive to the request, the new source audio is input into a prediction model that was previously trained. Training the prediction model includes providing a generative adversarial network including the prediction model and a discriminator. Training data is obtained including tuples of source audios and target audios, each tuple including a source audio and a corresponding target audio. During training, the prediction model generates predicted audios based on the source audios. Training further includes applying a loss function to the predicted audios and the target audios, where the loss function incorporates a combination of a spectrogram loss and an adversarial loss. The prediction model is updated to optimize that loss function. After training, based on the new source audio, the prediction model generates a new predicted audio as an enhanced version of the new source audio.

Description

    TECHNICAL FIELD
  • This disclosure generally relates to audio enhancement and, more specifically, to using a predictive model to automatically enhance audio having various audio quality issues.
  • BACKGROUND
  • People make various types of audio recordings in their daily lives. For instance, recording telephonic conferences, video blogs (vlogs), voice messages, audiobooks, and others are among the recordings a person might make once or on a regular basis. Although, on some consumer devices, such as webcams or smartphones, recording is as easy as pressing a button and speaking or providing other live audio, the recordings themselves do not always have desirable quality. Environmental factors, such as background noise or poor microphone quality, can impact the resulting recordings, as can corruption of data making up the recording.
  • Some limited techniques exist to convert an audio recording into an improved version of the audio recording. For instance, deep learning is a machine learning technique whereby a neural network is trained to map a waveform representing an audio recording to an improved version of the audio recording in waveform. Specifically, the neural network is trained only to reduce noise, or the neural network is trained only to reduce reverberation.
  • In an existing system, for instance, a neural network learns to map a set of source audio to a corresponding set of target audio, where each source audio has a quality issue and the corresponding target audio is a version of the source audio without the quality issue. During training, a loss function is used to represent a comparison between predicted audio output by the neural network and target audio that is the desired output based on a source audio, and output from the loss function is used to adjust the neural network. After training, the neural network is able to generate a predicted audio from a provided source audio, where the predicted audio is the neural network's prediction of what target audio for the source audio would look like. For example, if the neural network was trained on noisy source audio along with target audio lacking noise, then the neural network is able to reduce noise in the source audio by generating predicted audio.
  • SUMMARY
  • In one embodiment, one or more processing devices perform operations to enhance audio recordings. The operations include training a prediction model to map source audios (e.g., audio recordings) to target audios (e.g., higher-quality versions of the audio recordings), and the operations further include utilizing the prediction model, after training, to convert a new source audio to a new predicted audio, which is a higher-quality version of the new source audio.
  • For instance, a training subsystem trains the prediction model as part of a generative adversarial network that includes both the prediction model and a discriminator to be jointly trained. The training subsystem obtains training data having a plurality of tuples including the source audios and the target audios, each tuple including a source audio in waveform representation and a corresponding target audio in waveform representation. In one example, the target audios are obtained from an existing dataset of recordings of various speakers. The training subsystem generates a set of source audios corresponding to each target audio by convolving the target audio with various room impulse responses and adding noise to the result of each such convolution, resulting in the source audios being lower-quality versions of the target audio. During training, the prediction model generates predicted audios based on the source audios in the training data. The training subsystem applies a loss function to the predicted audios and the target audios, where that loss function incorporates a combination of a spectrogram loss and an adversarial loss. The training subsystem updates the prediction model to optimize the loss function.
  • After training, the operations performed by the one or more processors include receiving a request to enhance a new source audio. For example, the new source audio is an audio recording that was recorded on a smartphone or other consumer device outside of a professional recording environment. A user seeks to enhance the audio recording to result in a studio-quality version of the audio recording. The operations further include, responsive to the request to enhance the new source audio, inputting the new source audio into the prediction model that was previously trained as described above. Based on the new source audio, the prediction model generates a new predicted audio as an enhanced version of the new source audio.
  • These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Features, embodiments, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings.
  • FIG. 1 illustrates an interface of an enhancement system, according to certain embodiments described herein.
  • FIG. 2 is a diagram of an example architecture of the enhancement system, according to certain embodiments described herein.
  • FIG. 3 is a block diagram of a training subsystem of the enhancement system, according to certain embodiments described herein.
  • FIG. 4 is a flow diagram of a process of training a prediction model of the enhancement system, according to certain embodiments described herein.
  • FIG. 5 shows an example architecture of the prediction model as trained in a generative adversarial network, according to certain embodiments described herein.
  • FIG. 6 is a flow diagram of a process of utilizing the prediction model to enhance audio after training, according to certain embodiments described herein.
  • FIG. 7 depicts an example of a computing system that performs certain operations described herein, according to certain embodiments.
  • DETAILED DESCRIPTION
  • Existing systems to improve audio recordings come with significant drawbacks. For instance, existing systems tend to reduce the problem to a single quality issue, such as only noise or only reverberation, and therefore address only that quality issue when improving the audio recording. The existence of multiple quality issues in audio recordings creates more complex artifacts that become difficult to identify and address by a neural network or otherwise. However, addressing each quality issue independently fails to account for the cases where multiple quality issues exist and impact the audio recording in potentially unexpected ways due to the combination.
  • Another existing technique treats the problem of improving audio as a source separation problem. Systems using this technique seek to separate foreground speech from background audio, such as music and environmental noise, so as to isolate the foreground speech and provide a result without the background. An example of such an existing technique separates speech from background noise by using a machine learning (ML) auto-encoder structure. However, such systems are not appropriate when the goal is to maintain both foreground and background audio in a cleaner form.
  • The assumptions on which existing techniques are based do not necessarily apply to real audio recordings, where both foreground speech and background audio are sought to be retained and where multiple quality issues can be mixed together. Additionally, existing techniques suffer further flaws based on the type of data they manipulate. Techniques that work in the time-frequency domain can usually match a target spectrogram (i.e., the spectrogram of target audio desired to be predicted) well because a time-frequency representation retains some time information as well as some spectral information; however, after predicting a spectrogram, such techniques have trouble recovering a waveform from the predicted spectrogram. This is because an inverse short-term Fourier transform (STFT), which is used to convert from spectrogram to waveform, requires phase information, which is typically unknown. Inaccurate phase information leads to noise and artifacts in the resulting audio, potentially adding new quality issues that did not exist in the original audio recording. Also, because such techniques must choose STFT parameters, they make a tradeoff between temporal resolution and frequency resolution. Although waveform-based techniques avoid these specific issues, they typically rely on simulation for training the neural network being used. This is because popular sample-based loss functions used during training require perfect alignment between the waveform of input audio and the waveform of target audio. Such loss functions neither align with human perception of sound quality nor capture the temporal structure of the samples used during training.
  • In one example, an existing technique uses a loss function that incorporates a combination of a least absolute deviations loss function, typically referred to as an “L1” loss function, and a least squares error, typically referred to as an “L2” loss function. That existing loss function allows training only with simulated data and not with real data; real data could include phase shifts, clock drifts, and misalignment, none of which such a loss function can account for. Further, that loss function fails to capture human perception of sound quality. Because humans hear sound in terms of frequencies rather than as a collection of individual samples (i.e., discrete points) in the audio, a loss function such as this, which is based on sampling the audio, fails to enable a neural network to learn to improve audio based on human perception. Because each audio sample is treated individually, the temporal structure of audio is not enforced when improving a source audio provided as input during operation post-training, potentially causing a resulting predicted audio to lose the temporal structure of the source audio. This can make the predicted audio unrecognizable as a version of the source audio.
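  • To make the misalignment point concrete, the following small, purely illustrative computation (Python with NumPy; not drawn from any cited system) shifts a tone by one millisecond, an imperceptible delay, and shows that the sample-based L1/L2 error is large relative to the signal while a log-spectrogram comparison changes far less:

      import numpy as np

      sr = 16000
      t = np.arange(sr) / sr
      clean = np.sin(2 * np.pi * 440 * t)          # a one-second 440 Hz tone
      shifted = np.roll(clean, 16)                 # the same tone delayed by 1 ms

      # Sample-based losses register the imperceptible delay as a substantial error.
      l1 = np.mean(np.abs(clean - shifted))
      l2 = np.mean((clean - shifted) ** 2)

      # A log magnitude-spectrogram comparison changes comparatively little,
      # which is closer to how the two signals are actually perceived.
      def log_spec(x, n_fft=512, hop=128):
          frames = np.stack([x[i:i + n_fft] * np.hanning(n_fft)
                             for i in range(0, len(x) - n_fft, hop)])
          return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-6)

      spec_diff = np.mean(np.abs(log_spec(clean) - log_spec(shifted)))
      print(f"L1={l1:.3f}  L2={l2:.3f}  mean log-spectrogram difference={spec_diff:.4f}")
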
  • Embodiments of the present disclosure include techniques for enhancing audio. Specifically, embodiments described herein utilize a prediction model to generate a predicted waveform (i.e., predicted audio in a waveform representation) that is an enhanced version of a source waveform (i.e., source audio in a waveform representation) provided as input. The predicted waveform is high-quality (e.g., studio-quality) audio, which is improved over the source audio in terms of a combination of noise, reverberation, distortion, uneven equalization, or other quality issues. Prior to operation of the prediction model, certain embodiments train the prediction model based on a loss function that combines spectrogram loss and adversarial loss. This loss function is coupled with generative adversarial training in a generative adversarial network (GAN). By training on a diverse set of speech recordings obtained from simulation and data augmentation for various types of environments, the prediction model is able to generalize to recordings of new speakers, new speech content, and new recording conditions. As a result, the prediction model can effectively enhance a wide range of audio into high-quality audio.
  • As used herein, the term “audio” refers to a set of audio data. Analogously, the term “audios” refers to one or multiple sets of audio data. For instance, an audio could be an audio file or portion of an audio file, and multiple audios could be multiple audio files or portions thereof. Further, as used herein, the term “source audio” refers to an audio (e.g., an audio file) having a source quality, or undesirable quality, of audio data. The term “target audio” refers to an audio (e.g., an audio file) that has a target quality, such as studio quality, which is a desired quality of audio data. The term “predicted audio” refers to an audio (e.g., an audio file) generated by a prediction model as a prediction of target audio for a given source audio.
  • The following non-limiting example is provided to introduce certain embodiments. In this example, an enhancement system for enhancing audio trains a prediction model to learn a mapping from source audios to target audios, where the target audios are enhanced versions of the source audios, and then utilizes the prediction model to predict predicted audios based on other source audios provided during operation. The predicted audios are enhanced versions of the other source audios.
  • In this example, an administrator constructs a GAN, which includes a discriminator and also includes a prediction model acting as a generator in the GAN. The administrator obtains training data to train the GAN. Specifically, for instance, the administrator extracts a set of target audios from the Device and Produced Speech (DAPS) dataset. Each target audio is a studio-quality audio recording. This dataset includes ten minutes worth of sample audio recordings from each of ten female speakers and ten male speakers. The administrator also obtains 270 room impulse responses from the Massachusetts Institute of Technology (MIT) Impulse Response Survey dataset. The administrator causes each target audio to be convolved with each room impulse response, resulting in 270 modified versions of each target audio. The administrator adds noise to each such modified version and saves the resulting versions (i.e., as modified and with added noise) as source audios. Together, each source audio and its corresponding target audio form a tuple included in the training data.
  • The administrator trains the GAN, including both the prediction model and the discriminator, based on the training data. During training, while the discriminator learns to distinguish authentic target audios (i.e., those in the training data) from inauthentic target audios (i.e., those generated by the prediction model), the prediction model learns to generate progressively better predicted audios (i.e., closer to the corresponding target audios) based in part on feedback from the discriminator. More specifically, during training, the prediction model generates (i.e., predicts) predicted audios based on source audios provided, and the discriminator guesses whether various audios, including both predicted audios and target audios, are authentic. During training, the prediction model is updated based on a training signal output by a novel loss function, which includes a spectrogram loss component and an adversarial loss component. For a given source audio, the spectrogram loss component is based on a spectrogram representation of the predicted audio generated by the prediction model as compared to a spectrogram representation of the target audio corresponding to the source audio in the training data. The adversarial loss component is based on feedback from the discriminator.
  • After training, the prediction model is able to generate a predicted audio based on a source audio, where that predicted audio is meant to predict what target audio for the source audio would look like. More specifically, the predicted audio is a studio-quality version of the source audio, because the prediction model was trained to map source audios to studio-quality versions.
  • In this example, a user accesses the enhancement system through a web interface, which the user uses to upload an example source audio. The user then selects a submission button on the interface, which the enhancement system interprets as a request to enhance the example source audio. The enhancement system utilizes the prediction model to map the example source audio to a resulting predicted audio, which the enhancement system then provides to the user. The resulting predicted audio is a studio-quality version of the example source audio.
  • Certain embodiments described herein represent improvements in the technical field of audio enhancement. Specifically, some embodiments train a prediction model as part of a GAN, to enhance source audio to achieve studio-quality results even when the source audio includes multiple combined quality issues. Some embodiments of the prediction model are trained using a loss function that includes both a spectrogram loss and an adversarial loss. The combination of these losses enables the prediction model to progressively improve in a way that aligns with human perception, such that the prediction model learns to enhance audio effectively without concerning itself with small misalignments or phase shifts that would not be perceptible. Further, given the wide range of training data used in some embodiments, including audio data with multiple combined quality issues, the prediction model is able to generalize to new speakers in various environments with various combinations of quality issues.
  • Referring now to the drawings, FIG. 1 illustrates an interface 100 of an enhancement system, according to certain embodiments described herein. An embodiment of the enhancement system causes an interface 100, such as the one shown, for example, to be displayed to a user. The interface 100 enables the user to request that the enhancement system generate a predicted audio 130 from a source audio 110, where the predicted audio 130 is an enhanced (i.e., improved) version of the source audio 110. For instance, the source audio 110 has one or multiple quality issues, such as noise, reverberation, frequency distortion (e.g., poor equalization), or other distortion, and the resulting predicted audio 130 lacks such quality issues or at least has a reduction in such quality issues.
  • As shown in FIG. 1, an example of the interface 100 enables a user to indicate a source audio 110. For instance, the interface 100 includes a source interface object 105, such as a link or button, selectable by a user, such that selection of the source interface object 105 causes display of a file browser. The user can use the file browser to indicate a source audio 110. In FIG. 1, the user has already used the source interface object 105 to choose a source audio 110 titled source-audio.wav. The source audio 110 may be stored locally on a computing device that displays the interface 100 or may be stored remotely, in which case the user might provide a uniform resource locator (URL) to indicate a location of the source audio 110. It will be understood that many techniques exist for enabling the user to select the source audio 110, and an embodiment can utilize one or more of such techniques.
  • In some embodiments, the source audio 110 is represented as a waveform, which may be provided as a .wav file. Additionally or alternatively, however, the source audio 110 is in a compressed format or in some other audio representation. If the source audio 110 is not provided as a waveform, some embodiments of the enhancement system convert the source audio 110 to a waveform, such as by decompressing the data representing the source audio 110. Various techniques are known in the art for converting audio representations to waveform, and the enhancement system can use one or more of such techniques or others.
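  • For instance, a decompression step might look like the following sketch (Python, using the librosa library, which can decode common compressed formats; the file names are hypothetical):

      import librosa
      import soundfile as sf

      # Decode a compressed source audio (e.g., an MP3) to an uncompressed waveform at its native rate.
      waveform, sample_rate = librosa.load("source-audio.mp3", sr=None, mono=True)
      sf.write("source-audio.wav", waveform, sample_rate)
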
  • An example of the interface 100 enables one-click audio enhancement. In other words, the interface 100 enables the user to request audio enhancement by selecting a single interface object. For instance, the interface 100 provides the user with a submission interface object 120, such as a link or button, that the user can select to request enhancement of the source audio 110. For instance, as shown in FIG. 1, the interface 100 includes a button as the submission interface object 120 with the word “Go,” “Enhance,” or “Convert” on it. In another example, however, the interface 100 enables the user to provide additional information, such as parameters, useable by the enhancement system when enhancing the source audio 110. After providing such additional information, the user can then select the submission interface object 120 to request enhancement according to the additional information provided.
  • The enhancement system receives a request from the user to enhance the source audio 110. For instance, as described above, the user's selection of an interface object can be interpreted as such a request. Responsive to receiving that request, an embodiment of the enhancement system generates a predicted audio 130 corresponding to the source audio 110 by applying a prediction model to the source audio 110. As shown in FIG. 1, the enhancement system has generated a predicted audio 130 with the filename predicted-audio.wav. Based on the source audio 110, the prediction model generates the predicted audio 130, which the interface 100 provides to the user. For instance, the interface 100 provides a results interface object 135, such as a link or button, which the user can select to stream the predicted audio 130 or to store the predicted audio 130 on a local or remote computing device.
  • FIG. 2 is a diagram of an example architecture of the enhancement system 200 according to certain embodiments described herein. As shown in FIG. 2, an embodiment of the enhancement system 200 includes a training subsystem 210 and a prediction subsystem 220. Generally, the training subsystem 210 trains a prediction model 230 to learn a mapping between source audios 110 and target audios, thereby causing the prediction model 230 to be able to generate predicted audios 130 (i.e., predictions of hypothetical target audios) based on new source audios 110; the prediction subsystem 220 receives a new source audio 110 and uses the prediction model 230, after training, to generate a corresponding predicted audio 130. In certain embodiments, the prediction model 230 is implemented as a neural network, but it will be understood that other machine-learning models might be used additionally or alternatively to a neural network. The interface 100 discussed above provides access to the prediction subsystem 220, enabling a user to request that the prediction subsystem 220 enhance a new source audio 110 by generating a corresponding predicted audio 130.
  • In the example of FIG. 2, the training subsystem 210 and the prediction subsystem 220 are maintained on respective servers, specifically a training server 215 and a prediction server 225. The training server 215 could be, for instance, a server utilized for software development or administrative tasks. For instance, the training server 215 could be under the control of a company that produces all or a portion of an application embodying part or all of the prediction subsystem 220. The prediction server 225 could be, for example, a web server that services a website useable to enhance audio by generating predicted audio 130. In that case, the interface 100 described above could be integrated into a web page of that website. An example of the training server 215 is implemented as one or more computing nodes, and further, an example of the prediction server 225 is implemented as one or more computing nodes that may be distinct from or overlap with the computing nodes of the training server 215.
  • Prior to use of the prediction model 230 to enhance audio as part of the prediction subsystem 220, the training subsystem 210 trains the prediction model 230. To this end, an example of the training subsystem 210 employs deep learning to update the prediction model 230, which is implemented as a neural network in some embodiments, so as to teach the prediction model 230 a mapping from source audios 110 to target audios in a set of training data. The training subsystem 210 will be described in more detail below. However, after training, the prediction model 230 may be accessible by the prediction subsystem 220. For instance, in the example of FIG. 2, in which the training subsystem 210 and the prediction subsystem 220 run on distinct servers, the training server 215 copies (i.e., transmits) the trained prediction model 230 to the prediction server 225 for use by the prediction subsystem 220. As such, the prediction subsystem 220 can utilize the prediction model 230 to enhance audio.
  • As also shown in FIG. 2, in some embodiments, a client 240 can access the enhancement system 200; specifically, for instance, the client 240 accesses the prediction subsystem 220 of the enhancement system 200 on behalf of a user to enhance source audio 110 provided by the user. For instance, the user operates the client 240. In one example, the client 240 is a computing device, such as a notebook computer, a desktop computer, a tablet, or a smartphone, but it will be understood that the client 240 could be one or a combination of various computing devices. In some embodiments, the client 240 connects to the prediction subsystem 220, which provides the interface 100 to the client 240 for display to the user. The client 240 provides the source audio 110 to the prediction subsystem 220 and receives an indication of the predicted audio 130 after generation of the predicted audio 130. For instance, the user can indicate the source audio 110 to the client 240, such as through the interface 100 displayed at the client 240. An example of the client 240 accesses the source audio 110 and transmits the source audio 110 to the prediction subsystem 220, or additionally or alternatively, if the source audio 110 is stored remotely from the client 240, an example of the client 240 indicates the source audio 110 to the prediction subsystem 220, which then retrieves the source audio 110.
  • In the example of FIG. 2, upon receiving source audio 110 and a request to enhance the source audio 110, both of which can be provided through the interface 100 or otherwise, an embodiment of the prediction subsystem 220 enhances the source audio 110. To this end, the prediction subsystem 220 applies the prediction model 230 to the source audio 110, causing the prediction model 230 to generate a predicted audio 130 corresponding to the source audio 110. The predicted audio 130 is an enhanced version (e.g., a studio-quality version) of the source audio 110. The prediction subsystem 220 provides the predicted audio 130 to the client 240, such as by transmitting the predicted audio 130 to the client 240 or by transmitting to the client 240 a reference (e.g., a URL) to the predicted audio 130. An example of the client 240 then provides the predicted audio 130 to the user as an enhanced version of the source audio 110.
  • Although not shown in FIG. 2, in an alternative architecture, the prediction subsystem 220 including the prediction model 230 is stored locally on the client 240 rather than residing at a prediction server 225. In that case, rather than enhancement of the source audio 110 occurring remotely (e.g., as a cloud service), the enhancement occurs locally at the client 240. In some cases, this can enable the user to utilize the enhancement system 200 offline (i.e., without access to a remote server). In such an embodiment, the prediction server 225 need not be a distinct device from the client 240 as shown in FIG. 2. Rather, the prediction server 225 is integrated with the client 240.
  • An example of the prediction subsystem 220 is integrated into, or accessible by, an application that enhances audio. For example, such an application is an audio-related or video-related product, such as Adobe® Audition, Adobe Premiere, or Adobe Spark. In the example of FIG. 2, for instance, that application has a client portion that runs on the client 240 and communicates with a cloud service managed by the prediction server 225. In the case where the prediction subsystem 220 operates on the client 240, however, the application is a local application that need not communicate with a remote server to generate predicted audios 130. Through access to the prediction subsystem 220, the application is configured to enhance audio by applying the trained prediction model 230 to source audio 110 to generate predicted audio 130 that is potentially of studio quality.
  • Prior to being put into operation in the prediction subsystem 220, an embodiment of the training subsystem 210 trains the prediction model 230. FIG. 3 is a block diagram of the training subsystem 210 according to some embodiments. As shown in FIG. 3, the training subsystem 210 includes a generative adversarial network 310, which is used to train the prediction model 230 as well as to train a discriminator 320. In this GAN 310, the prediction model 230 itself is a neural network acting as a generator, which is trained jointly with the discriminator 320 in an adversarial manner. The GAN 310 of the training subsystem 210 further includes an evaluation tool 330, which applies one or more objective functions, also referred to as loss functions, to modify the prediction model 230, the discriminator 320, or both. One of skill in the art will understand how to construct a GAN 310 in general and, thus, given this disclosure, how to construct a GAN 310 as described herein.
  • Within the GAN 310, the prediction model 230, acting as a generator, learns to generate predicted audio 130 given source audio 110, while the discriminator 320 learns to determine whether audio provided to the discriminator 320 is authentic (i.e., is target audio that is an enhanced version of the source audio 110). The GAN 310 is thus adversarial in nature because the prediction model 230 and the discriminator 320 compete against each other and thus improve jointly. As the prediction model 230 improves, the discriminator 320 must improve in order to continue identifying fakes (i.e., audios other than target audios of input source audios 110), and as the discriminator 320 improves, the generator must improve to continue producing predicted audios 130 capable of fooling the discriminator 320. Thus, the adversarial nature of the GAN 310 and of the training subsystem 210 causes the prediction model 230 to learn how to convert source audios 110 into corresponding predicted audios 130 (i.e., to generate predicted audios 130 based on source audios 110) that are close to the corresponding target audios.
  • In the training subsystem 210, the prediction model 230 learns how to predict audio (i.e., to generate predicted audio 130) based on training data 340. Generally, the training data 340 includes a set of tuples, each tuple including a source audio 110 and a corresponding target audio 345. In some embodiments, for each such tuple, the target audio 345 is a high quality (e.g., studio quality) version of the corresponding source audio 110. Generally, the training subsystem 210 seeks to teach the prediction model 230 how to generate predicted audio 130 that matches the target audio 345 for each source audio 110 in the training data 340. In other words, an example of the prediction model 230 learns a mapping of source audios 110 having source quality to target audios 345 having target quality, such as studio quality. As a result, the prediction model 230 is then potentially able to generate a new predicted audio 130 for a new source audio 110 that is not part of the training data 340, where that predicted audio 130 is of the target quality.
  • To facilitate teaching the prediction model 230 and the discriminator 320, an embodiment of the evaluation tool 330 executes a generator loss function 350 and a discriminator loss function 360. As described in more detail below, the generator loss function 350 includes both a spectrogram loss 370 and an adversarial loss 380. Based on application of the generator loss function 350 to target audios 345 in the training data 340, predicted audios generated by the prediction model during training, and scores computed by the discriminator 320 during training, the evaluation tool 330 updates the prediction model to better represent the mapping from source audios to target audios 345 in the training data 340. Additionally, based on application of the discriminator loss function 360 to scores computed by the discriminator 320 during training, the evaluation tool 330 updates the discriminator 320 to better identify authentic target audios 345.
  • FIG. 4 is a flow diagram of a process 400 of training a prediction model 230 to map source audios 110 to corresponding target audios 345, so as to teach the prediction model 230 to predict audios corresponding to new source audios 110, according to some embodiments. Generally, some embodiments described herein acquire seed data for use in training the prediction model 230. All or a portion of the seed data is used as training data 340. An example of seed data includes a set of tuples, each tuple including a source audio 110 and a target audio 345 corresponding to the source audio 110. In the seed data, each tuple need not include unique target audio 345; rather, as described below, a particular target audio 345 may be included in multiple tuples, paired with a different corresponding source audio 110 in each such tuple. As described below, in some embodiments, the training subsystem 210 receives a first portion of the seed data (e.g., target audios 345) and simulates (e.g., generates by simulation) a second portion of the seed data (e.g., source audios 110) based on that first portion.
  • The process 400 of the training subsystem 210 begins at block 405 of the flow diagram shown in FIG. 4. Prior to block 405, the seed data is either empty or includes seed data collected previously. Additional tuples are added to the seed data as described below, specifically at blocks 405 through 440.
  • As shown in FIG. 4, block 405 of the process involves obtaining (i.e., receiving) target audios 345. In some embodiments, each such target audio 345 obtained is a studio-quality sample of audio. More generally, each such target audio 345 is an example of acceptable output desired to be generated by the prediction model 230.
  • The training subsystem 210 can obtain the target audios 345 in various ways. In one example, the training subsystem 210 downloads or otherwise receives the target audios 345 from an existing data set, such as the DAPS dataset. At this time, the DAPS dataset includes a ten-minute audio clip from each of twenty speakers, including ten male voices and ten female voices, and an example of the training subsystem 210 downloads these audio clips to use each as a target audio 345. For instance, the training subsystem 210 can use each ten-minute clip as a target audio 345, or the training subsystem 210 can divide each ten-minute clip into smaller clips, such that each smaller clip is used as a target audio 345.
  • In some embodiments, the training subsystem 210 trains the GAN 310 with target audios 345 and source audios 110 in waveform representations. Thus, the training subsystem 210 receives the target audios 345 as waveforms, or the training subsystem 210 converts each target audio 345 to a corresponding waveform. For instance, if a target audio 345 is received in a compressed format, the training subsystem 210 decompresses the compressed format to extract a version of the target audio 345 that is an uncompressed waveform. As described below, some embodiments of the training subsystem 210 generate source audios 110 based on the target audios 345, where each corresponding source audio 110 of a target audio 345 is a reduced-quality version of the target audio 345. Thus, given the target audios 345 are in waveform representations, the source audios 110 generated below are generated as waveforms in some embodiments as well.
  • At block 410, the process 400 involves obtaining a set of room impulse responses. A room impulse response describes how sound changes between its source (e.g., a speaker) and a microphone that records the resulting audio. Thus, the room impulse response associated with a specific environment reflects how environmental factors (e.g., reverberation and background noise) impact sound as recorded in that environment. In one example, the room impulse responses are extracted or otherwise received from the MIT Impulse Response Survey dataset, which currently includes 270 room impulse responses. As described below, in this example process 400, the room impulse responses are used to reduce the quality of the target audios 345 so as to generate source audios 110. It will be understood, however, that other techniques exist and can be used to generate the source audios 110 based on the target audios 345.
  • Block 415 of the process 400 begins an iterative loop, where, in each iteration of the loop, the training subsystem 210 generates a corresponding set of source audios 110, including one or multiple source audios 110, based on a given target audio 345. At block 415, the process 400 involves selecting a target audio 345 to associate with the current iteration of the loop, where source audios 110 have not yet been generated for the selected target audio 345 in prior iterations of the loop. Specifically, for instance, from among the target audios 345 received at block 405, the training subsystem 210 selects one of such target audios 345 for which source audios 110 have not yet been generated in a prior loop, if any such prior loops exist.
  • At block 420, the process 400 involves applying each room impulse response to the selected target audio 345 to generate a set of source audios 110. For instance, the training subsystem 210 convolves the selected target audio 345 with each room impulse response separately, resulting in a set of source audios 110 in the same quantity as the set of room impulse responses used. In this case, the set of source audios 110 resulting includes a corresponding source audio 110 for each room impulse response applied, and all such source audios 110 are based on and therefore correspond to the selected target audio 345. In some embodiments, these resulting source audios 110 are intermediate and are further modified prior to training the prediction model 230, as described below.
  • At block 425, the process 400 involves adding noise to each source audio 110 that was generated at block 420 based on the selected target audio 345. For instance, the training subsystem 210 adds noise to each source audio 110, such as by adding noise extracted from a Gaussian distribution, resulting in an updated version of the source audio 110 that includes noise and still corresponds to the selected target audio 345. Thus, after adding noise to each source audio 110, the selected target audio 345 is now associated with a set of source audios 110 that include additional noise. In some embodiments, however, the training subsystem 210 does not add noise to the source audios 110, and in that case, block 425 is skipped.
  • At block 430, the process 400 involves generating a set of tuples corresponding to the selected target audio 345. For instance, the training subsystem 210 generates a respective tuple associated with each source audio 110 generated at block 425, if noise is being added, or generated at block 420, if noise is not being added. Such a tuple includes the source audio 110 as well as the selected target audio 345 on which the source audio 110 is based. Thus, in some embodiments, for the selected target audio 345, the training subsystem 210 constructs a set of tuples having a quantity equal to the quantity of source audios 110, which can be equal to the quantity of room impulse responses used. This set of tuples associated with the selected target audio 345 is a subset of the tuples that make up the seed data, which can include not only the selected target audio 345, but also other target audios 345 obtained at block 405.
  • At block 435, the process involves adding the tuples generated at block 430 to seed data used to train the prediction model 230. For instance, the training subsystem 210 updates the seed data by adding each tuple, including the selected target audio 345 and a corresponding source audio 110, to the seed data.
  • At decision block 440, the training subsystem 210 determines whether all target audios 345 have been selected in block 415 and have had associated source audios 110 generated. If any target audios 345 have not yet been selected, then the process 400 returns to block 415, where another target audio 345 is selected. However, if all target audios 345 have been selected and have had corresponding source audios 110 generated, then the process 400 continues to block 445.
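  • A minimal sketch of the simulation loop in blocks 415 through 440 might look like the following (Python with NumPy and SciPy; the Gaussian noise model, the signal-to-noise handling, and all names are illustrative assumptions):

      import numpy as np
      from scipy.signal import fftconvolve

      def simulate_source(target, room_impulse_response, snr_db):
          """Convolve a target audio with a room impulse response, then add noise at the given SNR."""
          reverberant = fftconvolve(target, room_impulse_response, mode="full")[:len(target)]
          noise = np.random.normal(0.0, 1.0, size=len(reverberant))
          signal_power = np.mean(reverberant ** 2)
          noise_power = signal_power / (10.0 ** (snr_db / 10.0))
          noise *= np.sqrt(noise_power / (np.mean(noise ** 2) + 1e-12))
          return reverberant + noise

      def build_seed_data(target_audios, room_impulse_responses):
          """Blocks 415-440 in miniature: one (source, target) tuple per target/impulse-response pair."""
          seed_data = []
          for target in target_audios:                       # block 415: select a target audio
              for rir in room_impulse_responses:             # block 420: apply each room impulse response
                  snr_db = np.random.uniform(20.0, 30.0)     # block 425: add noise (a 20-30 dB range
                  source = simulate_source(target, rir, snr_db)  # is mentioned later in this description)
                  seed_data.append((source, target))         # blocks 430-435: add the tuple to the seed data
          return seed_data
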
  • At block 445, the process 400 involves selecting at least a portion of the seed data to use as training data 340. In some embodiments, not all of the seed data is used as training data 340 to train the GAN 310. Rather, an example of the training subsystem 210 sets aside a portion of the seed data for validation and testing of the prediction model 230, and that portion is not included in the training data 340. For instance, in the above example, in which the target audios 345 include ten minutes of audio for each of twenty speakers, including ten male voices and ten female voices, the training subsystem 210 identifies target audios 345 corresponding to two minutes of audio from one female voice and two minutes of audio from one male voice. From among those identified target audios 345, the training subsystem 210 selects a subset of the corresponding tuples that represent seventy environments (i.e., seventy room impulse responses). The training subsystem 210 extracts those selected tuples to be used for validation and testing, rather than as training data 340. The training subsystem 210 uses the remaining seed data as training data 340.
  • At block 450, the process 400 involves training the prediction model 230 within the GAN 310. For instance, the training subsystem 210 utilizes the training data 340 selected at block 445 to train the GAN 310 and, thus, to train the prediction model 230. Due to being trained based on this diverse set of training data 340 including multiple speakers and multiple environments, the prediction model 230 is able to generalize its learning to recordings of new speakers, new speech content, and new recording environments.
  • Training within the GAN 310 utilizes one or more loss functions, as will be described further below, to update the prediction model 230 and the discriminator 320. Given this disclosure, one of skill in the art will understand how to utilize the training data 340 to train the prediction model 230 in the GAN 310, utilizing one or more loss functions. However, embodiments described herein can utilize a novel set of loss functions, as described in more detail below.
  • In some embodiments, the training subsystem 210 trains the prediction model 230 of the GAN 310 for at least one million iterations with a batch size of ten using an Adam optimization algorithm with an initial learning rate of 0.001, reduced by a factor of ten every three hundred thousand iterations. Given the pre-trained prediction model 230, the training subsystem 210 trains the discriminator 320 from scratch for five thousand iterations while keeping the generator fixed. The training subsystem 210 then performs joint training on both the prediction model 230 and the discriminator 320.
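  • One way to express that schedule, as a rough sketch only (Python with PyTorch; the data loader, the loss callables, and the use of a spectrogram-only loss during pre-training are assumptions, with the loss terms themselves sketched later in this description):

      import itertools
      import torch

      def train(G, D, loader, g_loss, d_loss, spec_loss,
                pretrain_iters=1_000_000, d_warmup=5_000, joint_iters=1_000_000):
          """Three phases: generator pre-training, discriminator warm-up with the generator
          fixed, then joint adversarial training. The loss callables are assumed helpers."""
          g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
          d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
          # Reduce the generator learning rate by a factor of ten every 300,000 iterations.
          g_sched = torch.optim.lr_scheduler.StepLR(g_opt, step_size=300_000, gamma=0.1)

          # loader is assumed to yield batches of (source, target) waveform tuples (batch size ten).
          for step, (x, x_prime) in enumerate(itertools.cycle(loader)):
              if step < pretrain_iters:
                  # Phase 1: pre-train the prediction model (assumed here to use only a spectrogram loss).
                  g_opt.zero_grad()
                  spec_loss(G, x, x_prime).backward()
                  g_opt.step()
                  g_sched.step()
              elif step < pretrain_iters + d_warmup:
                  # Phase 2: train the discriminator from scratch while keeping the generator fixed.
                  d_opt.zero_grad()
                  d_loss(G, D, x, x_prime).backward()
                  d_opt.step()
              elif step < pretrain_iters + d_warmup + joint_iters:
                  # Phase 3: joint training of both the prediction model and the discriminator.
                  d_opt.zero_grad()
                  d_loss(G, D, x, x_prime).backward()
                  d_opt.step()
                  g_opt.zero_grad()
                  g_loss(G, D, x, x_prime).backward()
                  g_opt.step()
              else:
                  break
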
  • It will be understood that the process 400 described above is provided for illustrative purposes only and, further, that the process 400 could vary while remaining within the scope of this disclosure. For instance, operations of the iterative loop as described above form one example implementation, and one of skill in the art will understand that various implementations are possible to utilize simulation to generate source audios 110 based on target audios 345. For instance, although the above describes the tuples as being constructed inside the loop, an alternative embodiment might construct the tuples after all iterations of the loop have been performed and source audios 110 have already been generated for all target audios 345. In another example, block 425 is skipped and no noise is added to the source audios 110. One of skill in the art will understand that other implementations are also possible for the process 400.
  • In some embodiments, the training subsystem 210 performs one or more tasks to overcome overfitting of the prediction model 230. For example, the training subsystem 210 generates augmented versions of the target audios 345 prior to generating the corresponding source audios 110, and the training subsystem 210 bases the source audios 110 on the augmented versions, although the original versions of the target audios 345 are used in the tuples of the seed data. In other words, at block 420, the room impulse responses are applied to the augmented versions rather than to the original target audios 345 themselves. A target audio 345 is augmented, for instance, by re-scaling its volume, changing its speed via linear interpolation, or both. For another example, the training subsystem 210 augments one or more room impulse responses before applying such room impulse responses to the target audios 345, regardless of whether the target audios 345 have been augmented themselves; for instance, a room impulse response is augmented by re-scaling the energy of its reverberation while keeping its direct signal the same, by changing reverberation time such as via nearest neighbor interpolation, or by the use of both such techniques. For another example, when the noise is added at block 425, the training subsystem 210 adds that noise to various source audios 110 at various signal-to-noise ratios ranging from, for instance, twenty decibels to thirty decibels, resulting in a relatively wide range of noise being added to the source audios 110 for a given target audio 345.
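  • A hedged sketch of two such augmentations (Python with NumPy; the specific scaling values and the boundary chosen between the direct signal and the reverberant tail are illustrative assumptions):

      import numpy as np

      def augment_target(target, volume_scale=0.8, speed=1.1):
          """Re-scale volume and change speed via linear interpolation, before simulating sources."""
          scaled = volume_scale * target
          old_idx = np.arange(len(scaled))
          new_idx = np.linspace(0, len(scaled) - 1, int(len(scaled) / speed))
          return np.interp(new_idx, old_idx, scaled)

      def augment_room_impulse_response(rir, direct_samples=64, reverb_scale=0.7):
          """Re-scale the energy of the reverberant tail while keeping the direct signal the same."""
          augmented = rir.copy()
          augmented[direct_samples:] *= reverb_scale   # the early (direct-path) samples are left untouched
          return augmented
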
  • In general, training a GAN 310 requires use of one or more loss functions (i.e., objective functions), for instance including a first loss function for training the generator and a second loss function for training the discriminator 320. Some embodiments of the training subsystem 210 utilize a novel set of loss functions so as to train the prediction model 230, acting as the generator, to learn a mapping between source audios 110 and target audios 345.
  • In some embodiments, to train the prediction model 230 in the GAN 310, the training subsystem 210 utilizes a generator loss function 350 to train the prediction model 230, where the generator loss function 350 is designed in consideration of human perceptions. In other words, the generator loss function 350 is perceptually motivated in that it incorporates an aspect of spectrogram loss 370, which discounts small misalignments and phase shifts that are typically not perceivable. For instance, the generator loss function 350 includes a spectrogram loss 370, or spectrogram loss component, based on a spectrogram representation of the predicted audio 130 as compared to a spectrogram representation of the target audio 345, and the generator loss function 350 further includes an adversarial loss 380, or adversarial loss component, that is based on feedback from the discriminator 320.
• More specifically, to determine the spectrogram loss 370, an example of the training subsystem 210 computes the L2 difference (i.e., the least squares error) of the log spectrogram of a predicted waveform (i.e., the predicted audio 130 in a waveform representation) as compared to a target waveform (i.e., the target audio 345 in a waveform representation). In some embodiments, the spectrogram loss 370 is invariant to small misalignments between the source waveform (i.e., the source audio 110 in a waveform representation) and the target waveform during training. As such, the spectrogram loss 370 better accords with human perception of speech quality than popular L1 or L2 sample-based (i.e., based on consideration of discrete points, or samples) loss functions.
• In some embodiments, joint training in conjunction with the discriminator 320 removes noise and artifacts in predicted waveforms. The discriminator 320 tries to distinguish between output of the prediction model 230 and authentic clean speech not outputted by the prediction model 230. Through training, the discriminator 320 learns to identify telltale artifacts that occur during generation by the prediction model 230. Meanwhile, the prediction model 230 improves its fidelity as it learns to fool the discriminator 320 in an adversarial manner, and doing so causes such telltale artifacts to be reduced over time. Further, in some embodiments such as the example below, the generator loss function 350 and the discriminator loss function 360 are differentiable, and thus some embodiments of the training subsystem 210 use standard forward and back propagation to train the prediction model 230 and the discriminator 320.
• Specifically, in one example GAN 310, in which x represents a source audio 110 from the training data 340, x′ represents a target audio 345 from the training data 340, and the tuple (x, x′) includes the source audio x and the target audio x′, the prediction model G acting as the generator optimizes (e.g., minimizes) a generator loss function L_G, which is the sum of a spectrogram loss 370 and an adversarial loss 380 as follows:

• L_G(x, x′) = α|LogSpec(G(x)) − LogSpec(x′)| + β(1 − D(LogMel(G(x))))
• Additionally, in the example GAN 310, the discriminator D optimizes (e.g., minimizes) a discriminator loss function L_D as follows:

• L_D(x, x′) = D(LogMel(G(x))) + 1 − D(LogMel(x′))
  • In the above, LogSpec represents the log spectrogram, and LogMel represents the log mel-spectrogram. The values for α and β are weighting factors.
• As mentioned above, an embodiment of the generator loss function 350 incorporates both a spectrogram loss 370 and an adversarial loss 380. Specifically, in the above example formula for the generator loss function 350, |LogSpec(G(x))−LogSpec(x′)| represents a spectrogram loss 370 because it is the difference between the log spectrogram of the predicted audio 130 and the log spectrogram of the target audio 345, and 1−D(LogMel(G(x))) represents an adversarial loss 380 because it is the discriminator score of the log mel-spectrogram of the predicted audio 130 subtracted from the desired discriminator score of 1, where a score of 1 indicates that the discriminator 320 finds the input authentic with the highest likelihood. Thus, the above generator loss function 350 includes a combination of spectrogram loss 370 and adversarial loss 380. Further, in some embodiments, the weighting factors α and β for that combination are selected such that the spectrogram loss 370 and the adversarial loss 380 are considered at roughly the same magnitude or, in other words, such that α|LogSpec(G(x))−LogSpec(x′)| and β(1−D(LogMel(G(x)))) are equal or nearly equal.
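• As a non-limiting illustration, the generator loss function 350 and the discriminator loss function 360 given above might be expressed in PyTorch roughly as follows. The helper names (log_spec, log_mel), the default α = β = 1.0, and the squared-error reduction are assumptions made for this sketch; the text above only requires the two generator-loss terms to be weighted to roughly equal magnitude.

```python
# Hedged sketch of the loss formulas above; not the only possible realization.
import torch

def log_spec(wave, n_fft=2048, hop=512, eps=1e-5):
    """Log magnitude spectrogram of a waveform tensor via STFT."""
    window = torch.hann_window(n_fft, device=wave.device)
    spec = torch.stft(wave, n_fft=n_fft, hop_length=hop,
                      window=window, return_complex=True)
    return torch.log(spec.abs() + eps)

def generator_loss(G, D, x, x_target, log_mel, alpha=1.0, beta=1.0):
    """L_G(x, x') = alpha * spectrogram loss + beta * adversarial loss."""
    g_x = G(x)                                                       # predicted waveform
    spec_loss = (log_spec(g_x) - log_spec(x_target)).pow(2).mean()   # L2 of log spectrograms
    adv_loss = (1.0 - D(log_mel(g_x))).mean()                        # push D's score toward 1
    return alpha * spec_loss + beta * adv_loss

def discriminator_loss(G, D, x, x_target, log_mel):
    """L_D(x, x') = D(LogMel(G(x))) + 1 - D(LogMel(x'))."""
    with torch.no_grad():
        g_x = G(x)                         # do not update the generator on this pass
    return D(log_mel(g_x)).mean() + (1.0 - D(log_mel(x_target))).mean()
```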
• In some embodiments, the training subsystem 210 computes multiple spectrogram losses |LogSpec(G(x))−LogSpec(x′)|, each with a different set of STFT parameters (e.g., a different LogSpec function), and sums the multiple spectrogram losses together. For instance, the multiple spectrogram losses are added together with equal weighting (i.e., without scaling). In one example, the training subsystem 210 uses an equally weighted combination of two spectrogram losses with two sets of STFT parameters for a sampling rate of 16 kHz: one spectrogram loss with a relatively large fast Fourier transform (FFT) window size of 2048 and a hop size of 512, and another spectrogram loss with a smaller FFT window size of 512 and a hop size of 128. The larger FFT window size gives more frequency resolution, and the smaller FFT window size gives more temporal resolution. For the discriminator 320, this example training subsystem 210 uses kernel sizes of (3, 9), (3, 8), (3, 8), and (3, 6), stride sizes of (1, 2), (1, 2), (1, 2), and (1, 2), and channel sizes of (1, 32), (32, 32), (32, 32), and (32, 32), respectively, for the sequence of the network layers. The input to the discriminator 320 is computed as the log mel-spectrogram with 80 mel bandpass filters ranging from 80 Hz to 7600 Hz, using the STFT parameters of the larger FFT window.
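• A sketch of the equally weighted multi-resolution spectrogram loss and of the 80-band log mel-spectrogram discriminator input described above follows; it reuses the hypothetical log_spec helper from the previous sketch, and the use of torchaudio here is an assumption.

```python
# Illustrative multi-resolution spectrogram loss and discriminator input.
import torch
import torchaudio

STFT_SETTINGS = [(2048, 512), (512, 128)]   # (FFT window size, hop size) at 16 kHz

def multi_resolution_spec_loss(predicted, target):
    total = 0.0
    for n_fft, hop in STFT_SETTINGS:
        diff = log_spec(predicted, n_fft=n_fft, hop=hop) \
             - log_spec(target, n_fft=n_fft, hop=hop)
        total = total + diff.pow(2).mean()   # equal weighting, no scaling
    return total

# Log mel-spectrogram: 80 mel bandpass filters from 80 Hz to 7600 Hz, using
# the STFT parameters of the larger FFT window.
_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=2048, hop_length=512,
    f_min=80.0, f_max=7600.0, n_mels=80)

def log_mel(wave):
    return torch.log(_mel(wave) + 1e-5)
```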
• FIG. 5 shows an example architecture of the prediction model 230 within the GAN 310, according to some embodiments. Specifically, an example of the prediction model 230 is the same as, or similar to, the feed-forward WaveNet architecture presented in the work of Rethage, Pons, and Serra (Dario Rethage, Jordi Pons, and Xavier Serra, “A Wavenet for Speech Denoising,” IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, pp. 5039-5043). As shown in FIG. 5, in some embodiments, the training subsystem 210 trains the prediction model 230 and the discriminator 320, specifically, based on training data 340 that includes source audios 110 and target audios 345. As is further shown, the training subsystem 210 trains the prediction model 230 using a training signal output by the generator loss function 350, which includes both a spectrogram loss 370 and an adversarial loss 380 as described above.
• An example of the prediction model 230 generates predicted audios 130 in waveforms (i.e., predicted waveforms 510). Although training data 340 is not shown in FIG. 5, the generation of such predicted audios 130 is based on corresponding source audios 110 in the training data 340 in some embodiments, as described above. An example of the training subsystem 210 applies an STFT to the predicted waveforms 510 to generate predicted spectrograms 520, which are spectrogram representations of the predicted audios 130. The training subsystem 210 provides to the discriminator 320 the predicted spectrograms 520 generated during training as well as target spectrograms, which are spectrogram representations of target audios 345 in the training data 340. The discriminator 320 predicts which of the various spectrograms received are authentic (i.e., are target spectrograms); specifically, the discriminator 320 generates a score associated with each spectrogram to indicate the likelihood that the spectrogram is authentic.
  • As also shown in FIG. 5, an evaluation tool 330 is configured to apply the generator loss function 350 and the discriminator loss function 360. In some embodiments, for a given source audio 110 in the training data 340, the evaluation tool 330 applies the generator loss function 350 to the corresponding target audio 345, predicted audio 130, and discriminator score for the predicted audio 130 (e.g., according to the formula for the generator loss function 350 described above) to determine a generator loss value. Various such generator loss values together form a generator training signal, where that generator training signal is used by the training subsystem 210 to update the weights of nodes within the neural network of the prediction model 230 so as to train the prediction model 230 through backpropagation. Analogously, the evaluation tool 330 applies the discriminator loss function 360 to the corresponding discriminator score of the predicted audio 130 and the discriminator score of the target audio 345 (e.g., according to the formula for the discriminator loss function 360 described above) to determine a discriminator loss value. Various such discriminator loss values together form a discriminator training signal, where that discriminator training signal is used by the training subsystem 210 to update the weights of nodes within the neural network of the discriminator 320 so as to train the discriminator 320 through backpropagation.
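• For illustration only, the alternating use of the two training signals described above might look like the following training step. It reuses the hypothetical generator_loss and discriminator_loss helpers from the earlier sketch, and the optimizer objects (e.g., torch.optim.Adam over the parameters of the prediction model and of the discriminator) are assumptions that this passage does not specify.

```python
# Illustrative alternating GAN training step under the stated assumptions.
import torch

def train_step(G, D, source, target, log_mel, g_opt, d_opt):
    # Discriminator update: learn to tell predicted spectrograms from targets.
    d_opt.zero_grad()
    d_loss = discriminator_loss(G, D, source, target, log_mel)
    d_loss.backward()                      # backpropagation through the discriminator
    d_opt.step()

    # Generator update: spectrogram loss plus adversarial loss (the training
    # signal from the generator loss function).
    g_opt.zero_grad()
    g_loss = generator_loss(G, D, source, target, log_mel)
    g_loss.backward()                      # backpropagation through the prediction model
    g_opt.step()
    return g_loss.item(), d_loss.item()
```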
• As is also shown in FIG. 5, an example of the prediction model 230 is implemented as a neural network including stacks of one-dimensional (1D) dilated convolutions. The dilation rates increase exponentially for each layer up to a limit and then repeat. The prediction model 230 further includes one or more Gated Activation Units (GAUs) for nonlinearity. Where x_k is the input from previous layers into the kth block, W_f,k and W_g,k are weights for dilated convolutions in the kth block, and W_r,k and W_s,k are weights for non-dilated convolutions in the kth block, a unit block of the prediction model 230 performs the following computations, in which z_k, z_r,k, and z_s,k are intermediate values used to compute x_k+1:

• z_k = tanh(W_f,k ⊗ x_k) ⊙ σ(W_g,k ⊗ x_k)
• z_r,k = W_r,k ⊗ z_k
• z_s,k = W_s,k ⊗ z_k
• x_k+1 = x_k + z_r,k
• In some embodiments, in each unit block, the prediction model 230 uses both a residual connection and a skip connection. The value z_r,k is computed for the residual connection and is added to the input of the current block to construct the input to the next block. To generate the prediction, as shown in FIG. 5, the prediction model 230 sums the outputs z_s,k, for all k, from the various skip connections. To the resulting sum, the prediction model 230 applies a Rectified Linear Unit (ReLU) followed by three 1D non-dilated convolutions in sequence during post-processing. The last of the three 1D non-dilated convolutions uses a filter width of 1 and projects the channel dimension to 1, giving a real-valued sample sequence as the predicted waveform 510. In some embodiments, the channel size is 128 across the entire neural network because, for instance, the source waveform is projected to 128 channel dimensions using a 1D non-dilated convolution during preprocessing. Some embodiments use a filter width of 3 for the dilated convolutions and post-processing, and a filter width of 1 for the skip connections and residual connections. As mentioned above, the dilation rates increase exponentially for each layer up to a limit and then repeat; for instance, the sequence of dilation rates is (1, 2, 4, 8, 16, 32, 64, 128, 256, 512). The prediction model 230 includes two such stacks of dilated convolutions, resulting in a total of 20 layers.
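• The unit block computations above can be sketched in PyTorch as follows. The class name WaveNetBlock, the "same" padding choice, and the module layout are assumptions; the channel size of 128, the filter widths, and the dilation schedule follow the example values given above, while the preprocessing and post-processing convolutions are omitted for brevity.

```python
# Hedged sketch of one unit block of the feed-forward WaveNet-style generator.
import torch
import torch.nn as nn

class WaveNetBlock(nn.Module):
    def __init__(self, channels=128, kernel=3, dilation=1):
        super().__init__()
        pad = (kernel - 1) // 2 * dilation                  # keep sequence length fixed
        self.filter = nn.Conv1d(channels, channels, kernel, dilation=dilation, padding=pad)
        self.gate = nn.Conv1d(channels, channels, kernel, dilation=dilation, padding=pad)
        self.residual = nn.Conv1d(channels, channels, 1)    # non-dilated, filter width 1
        self.skip = nn.Conv1d(channels, channels, 1)        # non-dilated, filter width 1

    def forward(self, x_k):
        # z_k = tanh(W_f,k ⊗ x_k) ⊙ σ(W_g,k ⊗ x_k)
        z_k = torch.tanh(self.filter(x_k)) * torch.sigmoid(self.gate(x_k))
        z_r = self.residual(z_k)       # residual branch: x_{k+1} = x_k + z_r,k
        z_s = self.skip(z_k)           # skip branch, summed across blocks later
        return x_k + z_r, z_s

# Two stacks of dilation rates (1, 2, 4, ..., 512) give 20 layers in total.
dilations = [2 ** i for i in range(10)] * 2
blocks = nn.ModuleList([WaveNetBlock(dilation=d) for d in dilations])
```

In a full model, the skip outputs from all 20 blocks would be summed and passed through the ReLU and the three post-processing convolutions described above to produce the predicted waveform 510.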
• Generally, the discriminator 320 takes in the log mel-spectrograms of target audios 345 and predicted audios 130 and, based on each log mel-spectrogram, outputs a prediction. For instance, that prediction is a score indicating a believed likelihood that the target audio 345 or predicted audio 130 is authentic (i.e., is a target audio 345). An example discriminator 320 adopts the structure described in StarGAN-VC (Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo, “StarGAN-VC: Non-Parallel Many-to-Many Voice Conversion with Star Generative Adversarial Networks,” arXiv preprint arXiv:1806.02169, 2018). This example of the discriminator 320 is a gated convolutional neural network (CNN) with several stacks of convolutional layers, a batch normalization layer, and a Gated Linear Unit (GLU). In some embodiments, the discriminator 320 is fully convolutional, thus allowing inputs of arbitrary temporal length (i.e., audios of various lengths).
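• A rough sketch of a gated convolutional discriminator of this kind follows, using the example kernel, stride, and channel sizes listed earlier; the final 1×1 scoring convolution, the sigmoid, and the padding are assumptions rather than details taken from StarGAN-VC or from this disclosure.

```python
# Illustrative gated CNN discriminator over log mel-spectrograms.
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Conv2d + batch norm + gated linear unit (GLU) over the channel axis."""
    def __init__(self, c_in, c_out, kernel, stride):
        super().__init__()
        # Produce 2 * c_out channels so the GLU can split them into value and gate.
        self.conv = nn.Conv2d(c_in, 2 * c_out, kernel, stride=stride, padding=(1, 1))
        self.norm = nn.BatchNorm2d(2 * c_out)

    def forward(self, x):
        return nn.functional.glu(self.norm(self.conv(x)), dim=1)

class MelDiscriminator(nn.Module):
    """Fully convolutional, so log mel-spectrograms of any length are accepted."""
    def __init__(self):
        super().__init__()
        # (kernel, stride, in channels, out channels) per the example values above.
        specs = [((3, 9), (1, 2), 1, 32), ((3, 8), (1, 2), 32, 32),
                 ((3, 8), (1, 2), 32, 32), ((3, 6), (1, 2), 32, 32)]
        self.blocks = nn.Sequential(
            *[GatedConvBlock(ci, co, k, s) for k, s, ci, co in specs])
        self.out = nn.Conv2d(32, 1, kernel_size=1)   # assumed scoring head

    def forward(self, log_mel):
        # Expected input shape: (batch, 1, mel bands, frames).
        scores = torch.sigmoid(self.out(self.blocks(log_mel)))
        return scores.mean(dim=(1, 2, 3))            # one authenticity score per input
```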
  • FIG. 6 is a flow diagram of a process 600 of utilizing the prediction model 230 after training, according to some embodiments. In some embodiments, the prediction subsystem 220 performs part or all of the process 600 described below. Further, the training subsystem 210 performs the process 400 of FIG. 4 prior to the process 600 of FIG. 6 being performed, thus causing the prediction model 230 to have previously been trained as part of the GAN 310 before being placed into operation.
  • As shown in FIG. 6, at block 605, the process 600 involves receiving a source audio 110. As described above with respect to FIG. 1, in some embodiments, the source audio 110 is received from a user via an interface 100, where that user can be human or automated. For example, that interface 100 enables a user to indicate the source audio 110, thereby enabling the prediction subsystem 220 to access the source audio 110. In one example, the source audio 110 is received as a waveform. However, if the source audio 110 is not stored as a waveform, then an embodiment of the prediction subsystem 220 converts the source audio 110 to a waveform.
  • At block 610, the process 600 involves receiving a request to enhance the source audio 110. For instance, as described with respect to FIG. 1, an example of the interface 100 includes a button or link enabling the user to request enhancement of the source audio 110. In some embodiments, the request can be made through a single click or selection made by the user, such as by selecting a button labeled “Go,” “Enhance,” or “Convert” in the interface 100. If the request is received through the interface 100 and the interface 100 is displayed on a first computing device other than a second computing device running the prediction subsystem 220, as shown in FIG. 2, then the first computing device transmits the request to the prediction subsystem 220 at the second computing device, where the request is received and processed.
• At block 615, the process 600 includes providing the source audio 110 to the prediction model 230. For instance, responsive to receiving the source audio 110, an embodiment of the prediction subsystem 220 inputs the source audio 110 into the prediction model 230.
  • At block 620, the process 600 includes generating a predicted audio 130 corresponding to the source audio 110. For instance, based on the source audio 110, the prediction model 230 generates and outputs the predicted audio 130. Given the training described above, the prediction model 230 learned to map a new source audio 110 to an enhanced version (e.g., a studio-quality version) of the new source audio 110. As such, the predicted audio 130 generated at this block 620 is an enhanced version of the source audio 110.
  • At block 625, the process 600 involves outputting the predicted audio 130 generated at block 620, such as by providing the predicted audio 130 to the user. For instance, if the prediction subsystem 220 runs on a second computing device that differs from a first computing device displaying the interface 100 to the user, then the prediction subsystem 220 causes the second computing device to transmit the predicted audio 130 to the first computing device, which provides the predicted audio 130 to the user. For instance, the first computing device enables the user to download or stream the predicted audio 130 through the interface 100.
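• For illustration, blocks 605 through 625 of the process 600 might be exercised end to end roughly as follows; the use of torchaudio for audio input and output and the assumption that the trained prediction model 230 maps a waveform tensor directly to an enhanced waveform tensor of the same shape are simplifications made for this sketch.

```python
# Illustrative inference flow under the stated assumptions.
import torch
import torchaudio

def enhance(model, source_path, output_path, sample_rate=16000):
    waveform, sr = torchaudio.load(source_path)           # block 605: receive the source audio
    if sr != sample_rate:                                  # convert to the expected waveform format
        waveform = torchaudio.functional.resample(waveform, sr, sample_rate)
    with torch.no_grad():
        predicted = model(waveform)                        # blocks 615-620: run the prediction model
    torchaudio.save(output_path, predicted.cpu(), sample_rate)  # block 625: output the predicted audio
    return predicted
```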
  • Thus, as described above, embodiments described herein provide an end-to-end deep learning solution to enhancing audio, including audio with multiple types of quality issues. Some embodiments fix noise, reverberation, and distortion (e.g., undesirable equalization) to raise a source audio 110 to studio quality sufficient for a professional production.
  • Any suitable computing system or group of computing systems can be used for performing the operations described herein. For example, FIG. 7 depicts an example of a computing system 700 that can be used as the prediction server 225, the training server 215, or various other computing devices performing operations described herein. In some embodiments, for instance, the computing system 700 executes all or a portion of the prediction subsystem 220, the training subsystem 210, or both. In other embodiments, a first computing system having devices similar to those depicted in FIG. 7 (e.g., a processor, a memory, etc.) executes the prediction subsystem 220 while another such computing system executes the training subsystem 210, such as in the example of FIG. 1.
  • The depicted example of a computing system 700 includes a processor 702 communicatively coupled to one or more memory devices 704. The processor 702 executes computer-executable program code stored in a memory device 704, accesses information stored in the memory device 704, or both. Examples of the processor 702 include a microprocessor, an application-specific integrated circuit (“ASIC”), a field-programmable gate array (“FPGA”), or any other suitable processing device. The processor 702 can include any number of processing devices, including a single processing device.
  • The memory device 704 includes any suitable non-transitory computer-readable medium for storing data, program code, or both. A computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include a magnetic disk, a memory chip, a ROM, a RAM, an ASIC, optical storage, magnetic tape or other magnetic storage, or any other medium from which a processing device can read instructions. The instructions may include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript.
• The computing system 700 may also include a number of external or internal devices, such as input or output devices. For example, the computing system 700 is shown with one or more input/output (“I/O”) interfaces 708. An I/O interface 708 can receive input from input devices or provide output to output devices. One or more buses 706 are also included in the computing system 700. The bus 706 communicatively couples one or more components of the computing system 700.
  • The computing system 700 executes program code that configures the processor 702 to perform one or more of the operations described herein. The program code includes, for example, the prediction subsystem 220, the prediction model 230, the training subsystem 210, or other suitable applications that perform one or more operations described herein. The program code may be resident in the memory device 704 or any suitable computer-readable medium and may be executed by the processor 702 or any other suitable processor. In some embodiments, both the prediction subsystem 220 and the training subsystem 210 are stored in the memory device 704, as depicted in FIG. 7. In additional or alternative embodiments, one or more of the prediction subsystem 220 and the training subsystem 210 are stored in different memory devices of different computing systems. In additional or alternative embodiments, the program code described above is stored in one or more other memory devices accessible via a data network.
  • The computing system 700 can access the prediction model 230 or other models, datasets, or functions in any suitable manner. In some embodiments, some or all of one or more of these models, datasets, and functions are stored in the memory device 704 of a common computer system 700, as in the example depicted in FIG. 7. In other embodiments, such as those in which the training subsystem 210 is executed on a separate computing system, that separate computing system that executes the training subsystem 210 can provide access to the trained prediction model 230 to enable running of the prediction model 230 by the prediction subsystem 220. In additional or alternative embodiments, one or more programs, models, datasets, and functions described herein are stored in one or more other memory devices accessible via a data network.
  • The computing system 700 also includes a network interface device 710. The network interface device 710 includes any device or group of devices suitable for establishing a wired or wireless data connection to one or more data networks. Non-limiting examples of the network interface device 710 include an Ethernet network adapter, a modem, and the like. The computing system 700 is able to communicate with one or more other computing devices (e.g., a computing device acting as a client 240) via a data network using the network interface device 710.
  • Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.
  • Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
  • The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
  • Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
  • The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
  • While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude the inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.

Claims (20)

1. A method in which one or more processing devices perform operations comprising:
providing a generative adversarial network (GAN) comprising a prediction model and a discriminator to be jointly trained;
accessing training data comprising a plurality of tuples, the plurality of tuples comprising source audios and target audios, each tuple comprising a source audio and a corresponding target audio, wherein the corresponding target audio is a target version of the source audio;
causing the prediction model to generate predicted audios based on the source audios;
applying a loss function to the predicted audios and the target audios, the loss function comprising a combination of a spectrogram loss and an adversarial loss; and
updating the prediction model to optimize the loss function comprising the combination of the spectrogram loss and the adversarial loss.
2. The method of claim 1, the operations further comprising:
receiving a request to enhance a new source audio;
responsive to the request, inputting the new source audio into the prediction model trained to map the source audios to the target audios; and
generating, by the prediction model, based on the new source audio, a new predicted audio as an enhanced version of the new source audio.
3. The method of claim 2, wherein the new source audio comprises a quality issue being two or more of noise, reverberation, or distortion, and wherein the prediction model reduces the quality issue in generating the new predicted audio.
4. The method of claim 1, wherein accessing the training data comprises:
obtaining the target audios; and
generating the source audios based on the target audios by simulating a plurality of room environments as applied to the target audios.
5. The method of claim 4, wherein simulating the plurality of room environments as applied to the target audios comprises convolving a plurality of room impulse responses with each target audio of the target audios.
6. The method of claim 4, wherein generating the plurality of source audios based on the target audios further comprises adding noise to a result of simulating the plurality of room environments as applied to the target audios.
7. The method of claim 4, wherein accessing the training data further comprises:
generating an augmented version of a first target audio, from among the target audios, by at least one of rescaling a volume of the first target audio or modifying a speed of the first target audio,
wherein generating the source audios based on the target audios comprises generating a first source audio based on the augmented version of the first target audio.
8. The method of claim 1, wherein the prediction model comprises a neural network having a feed-forward architecture comprising two or more one-dimensional dilated convolutions with varying dilation rates that increase exponentially to a limit.
9. The method of claim 8, wherein:
the source audios and the target audios are in waveform representation; and
the discriminator maps a log mel-spectrogram of a waveform to a score representing a likelihood of authenticity of the waveform, and wherein the discriminator comprises a gated convolutional neural network comprising two or more stacks of convolutional layers, a batch normalization layer, and a gated linear unit.
10. The method of claim 1, wherein the loss function comprises LG(x,x′)=α|LogSpec(G(x))−LogSpec(x′)|+β(1−D(LogMel(G(x)))), where x is a source audio of the source audios, x′ is a target audio corresponding to the source audio, α and β are weighting factors, G (x) is a predicted audio generated by the prediction model based on the source audio, D( ) is output from the discriminator, LogSpec( ) is a log spectrogram, and LogMel( ) is a log mel-spectrogram.
11. A system for enhancing audio, the system comprising:
a training subsystem configured to train a prediction model to enhance source audios, the training subsystem configured to:
access training data comprising a plurality of tuples, the plurality of tuples comprising source audios and target audios, each tuple comprising a source audio and a corresponding target audio, wherein the corresponding target audio is a target version of the source audio;
cause the prediction model to generate predicted audios based on the source audios;
apply a loss function to the predicted audios and the target audios, the loss function comprising a combination of (i) a spectrogram loss based on spectrogram versions of the predicted audios and the target audios and (ii) an adversarial loss based on feedback from a discriminator of a generative adversarial network (GAN); and
update the prediction model to optimize the loss function comprising the combination of the spectrogram loss and the adversarial loss.
12. The system of claim 11, wherein:
the prediction model inputs the source audios as source waveforms and outputs the predicted audios as predicted waveforms; and
the discriminator inputs the predicted audios as predicted spectrograms and outputs scores indicating likelihood of authenticity of the predicted audios.
13. The system of claim 11, further comprising a prediction subsystem comprising:
an interface configured to receive a new source audio; and
a prediction model configured to map the new source audio to a new predicted audio, wherein the new predicted audio is an enhanced version of the new source audio.
14. The system of claim 13, wherein the training subsystem is further configured to generate the source audios, and wherein generating the source audios comprises:
obtaining the target audios; and
generating the source audios based on the target audios by convolving a plurality of room impulse responses with each target audio of the target audios.
15. The system of claim 14, wherein generating the plurality of source audios based on the target audios further comprises adding noise to a result of simulating the plurality of room environments as applied to the target audios.
16. The system of claim 14, wherein the training subsystem is further configured to:
generate an augmented version of a first target audio, from among the target audios, by at least one of rescaling a volume of the first target audio or modifying a speed of the first target audio,
wherein generating the source audios based on the target audios comprises generating a first source audio based on the augmented version of the first target audio.
17. The system of claim 11, wherein the prediction model comprises a neural network having a feed-forward architecture comprising two or more one-dimensional dilated convolutions with varying dilation rates that increase exponentially to a limit.
18. The system of claim 17, wherein the discriminator maps a log mel-spectrogram of a waveform to a score representing a likelihood of authenticity of the waveform, and wherein the discriminator comprises a gated convolutional neural network comprising two or more stacks of convolutional layers, a batch normalization layer, and a gated linear unit.
19. A non-transitory computer-readable medium embodying program code for enhancing audio, the program code comprising instructions that, when executed by a processor, cause the processor to perform operations comprising:
providing a generative adversarial network (GAN) comprising a prediction model and a discriminator to be jointly trained;
obtaining training data comprising a plurality of tuples, the plurality of tuples comprising source audios and target audios, each tuple comprising a source audio and a corresponding target audio, wherein the corresponding target audio is a target version of the source audio;
training the GAN based on the training data, wherein the training comprises:
causing the prediction model to generate predicted audios based on the source audios;
applying a generator loss function to the predicted audios and the target audios, the generator loss function comprising a combination of a spectrogram loss and an adversarial loss; and
updating the prediction model to optimize the generator loss function comprising the combination of the spectrogram loss and the adversarial loss;
wherein, after the training, the prediction model is configured to generate new predicted audios based on new source audios.
20. The non-transitory computer-readable medium of claim 19, wherein:
the prediction model comprises a neural network having a feed-forward architecture comprising two or more one-dimensional dilated convolutions with varying dilation rates that increase exponentially to a limit;
the generator loss function comprises LG(x,x′)=α|LogSpec(G(x))−LogSpec(x′)|+β(1−D(LogMel(G(x)))), where x is a source audio of the source audios, x′ is a target audio corresponding to the source audio, α and β are weighting factors, G (x) is a predicted audio generated by the prediction model based on the source audio, D( ) is output from the discriminator, LogSpec( ) is a log spectrogram, and LogMel( ) is a log mel-spectrogram; and
the discriminator maps a log mel-spectrogram of a waveform to a score representing a likelihood of authenticity of the waveform, wherein the discriminator comprises a gated convolutional neural network comprising two or more stacks of convolutional layers, a batch normalization layer, and a gated linear unit.
US16/863,591 2020-04-30 2020-04-30 Using a predictive model to automatically enhance audio having various audio quality issues Active 2041-01-01 US11514925B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/863,591 US11514925B2 (en) 2020-04-30 2020-04-30 Using a predictive model to automatically enhance audio having various audio quality issues

Publications (2)

Publication Number Publication Date
US20210343305A1 true US20210343305A1 (en) 2021-11-04
US11514925B2 US11514925B2 (en) 2022-11-29

Family

ID=78293279

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/863,591 Active 2041-01-01 US11514925B2 (en) 2020-04-30 2020-04-30 Using a predictive model to automatically enhance audio having various audio quality issues

Country Status (1)

Country Link
US (1) US11514925B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220165626A1 (en) * 2020-11-20 2022-05-26 Yangtze Memory Technologies Co., Ltd. Feed-forward run-to-run wafer production control system based on real-time virtual metrology
CN115297420A (en) * 2022-06-22 2022-11-04 荣耀终端有限公司 Signal processing method, device and storage medium
US11689601B1 (en) * 2022-06-17 2023-06-27 International Business Machines Corporation Stream quality enhancement
WO2023132018A1 (en) * 2022-01-05 2023-07-13 日本電信電話株式会社 Learning device, signal processing device, learning method, and learning program
WO2023140488A1 (en) * 2022-01-20 2023-07-27 Samsung Electronics Co., Ltd. Bandwidth extension and speech enhancement of audio
CN116778937A (en) * 2023-03-28 2023-09-19 南京工程学院 Speech conversion method based on speaker versus antigen network
CN117219107A (en) * 2023-11-08 2023-12-12 腾讯科技(深圳)有限公司 Training method, device, equipment and storage medium of echo cancellation model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10971142B2 (en) * 2017-10-27 2021-04-06 Baidu Usa Llc Systems and methods for robust speech recognition using generative adversarial networks
CN109326302B (en) * 2018-11-14 2022-11-08 桂林电子科技大学 Voice enhancement method based on voiceprint comparison and generation of confrontation network

Also Published As

Publication number Publication date
US11514925B2 (en) 2022-11-29

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: THE TRUSTEES OF PRINCETON UNIVERSITY, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FINKELSTEIN, ADAM;REEL/FRAME:056111/0342

Effective date: 20210210

Owner name: ADOBE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIN, ZEYU;SU, JIAQI;SIGNING DATES FROM 20200310 TO 20200311;REEL/FRAME:056111/0242

STPP Information on status: patent application and granting procedure in general

Free format text: PRE-INTERVIEW COMMUNICATION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE