CA2810457C - System and method for applying a convolutional neural network to speech recognition - Google Patents
System and method for applying a convolutional neural network to speech recognition
- Publication number
- CA2810457C
- Authority
- CA
- Canada
- Prior art keywords
- layer
- convolution
- pooling
- acoustic signal
- speech
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
- G10L2015/0631—Creating reference templates; Clustering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
Abstract
A system and method for applying a convolutional neural network (CNN) to speech recognition. The CNN may provide input to a hidden Markov model and has at least one pair of a convolution layer and a pooling layer. The CNN operates along the frequency axis. The CNN has units that operate upon one or more local frequency bands of an acoustic signal. The CNN mitigates acoustic variation.
Description
SYSTEM AND METHOD FOR APPLYING A CONVOLUTIONAL NEURAL NETWORK TO SPEECH RECOGNITION
TECHNICAL FIELD
[0001] The following relates generally to convolutional neural networks and more specifically to applying a convolutional neural network to speech recognition.
[0002] Systems for automatic speech recognition (ASR) are generally challenged with the wide range of speaking, channel, and environmental conditions that humans can generally handle well. The conditions may, for example, include ambient noise, speaker variability, accents, dialects and language differences. Other variations may also be present in a particular speech pattern.
[0003] These types of acoustic variations have been found to be challenging to most ASR systems that use Hidden Markov Models (HMMs) to model the sequential structure of speech signals, where each HMM state uses a Gaussian Mixture Model (GMM) to model the short-time spectral representation of the speech signal. Better acoustic models should be able to model a variety of acoustic variations in speech signals more effectively to achieve robustness against various speaking and environmental conditions.
[0004] More recently, deep neural networks have been proposed to replace GMMs as the basic acoustic models for HMM-based speech recognition systems, and it has been demonstrated that neural network (NN)-based acoustic models can achieve competitive recognition performance in some difficult large vocabulary continuous speech recognition (LVCSR) tasks. One advantage of NNs is the distributed representation of input features (i.e., many neurons are active simultaneously to represent input features), which generally makes them more efficient than GMMs. This property allows NNs to model a diversity of speaking styles and background conditions with typically much less training data, because NNs can share similar portions of the input space to train some hidden units while keeping other units sensitive to a subset of the input features that are significant to recognition. However, these NNs can be computationally expensive to implement.
[0005] It is an object of the following to obviate or mitigate at least one of the foregoing issues.
[0006] In one aspect, a method for applying a convolutional neural network to a speech signal to mitigate acoustic variation in speech is provided, the convolutional neural network comprising at least one processor, the method comprising: (a) obtaining an acoustic signal comprising speech; (b) preprocessing the acoustic signal to: (i) transform the acoustic signal to its frequency domain representation; and (ii) divide the frequency domain representation into a plurality of frequency bands; (c) providing the plurality of frequency bands to a convolution layer of the convolutional neural network, the convolution layer comprising a plurality of convolution units each receiving input from at least one of the frequency bands; and (d) providing the output of the convolution layer to a pooling layer of the convolutional neural network, the pooling layer comprising a plurality of pooling units each receiving input from at least one of the convolution units, the output of the pooling layer being a representation of the acoustic signal mitigating acoustic variation.
[0007] In another aspect, a system for mitigating acoustic variation in speech is provided, the system comprising a convolutional neural network, the convolutional neural network comprising at least one pair of: (a) a convolution layer comprising a plurality of convolution units each receiving input from at least one frequency band of an acoustic signal comprising speech; (b) a pooling layer comprising a plurality of pooling units each receiving input from at least one of the convolutional units, the output of the pooling layer being a representation of the acoustic signal mitigating acoustic variation.
[0008] The features will become more apparent in the following detailed description in which reference is made to the appended drawings wherein:
[0009] FIG. 1 is an architecture diagram of a convolutional neural network applied to speech recognition;
[0010] FIG. 2 is a flowchart of a method for applying a convolutional neural network to speech recognition;
[0011] FIG. 3 is a block diagram of an exemplary convolutional neural network with full weight sharing applied to an acoustic signal; and
[0012] FIG. 4 is a block diagram of an exemplary convolutional neural network with limited weight sharing applied to an acoustic signal.
DETAILED DESCRIPTION
[0013] Embodiments will now be described with reference to the figures. It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0014] It will also be appreciated that any module, engine, unit, application, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application, module or engine herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.
[0015] It has been found that acoustic variation can be mitigated by processing an acoustic signal comprising speech along both time and frequency axes. By applying the CNN convolution and pooling operations along the frequency axis, substantial invariance to small shifts along the frequency axis can be achieved to normalize acoustic variation.
[0016] A CNN as described herein applies local filtering and pooling along the frequency axis to normalize speaker variance and enforce locality of features to enable an increase in speaker independent speech recognition performance. The CNN comprises at least one pair of layers comprising a convolution layer, comprising a plurality of convolution units, and a pooling layer, comprising a plurality of pooling units, to normalize spectral variations of speech signals. The spectral variations may comprise various speaking and environmental conditions, including, for example, channel noise, colored background, speaker variability, accents, dialects and language differences.
[0017] Referring now to Fig. 1, a system for applying a convolutional neural network (CNN) to speech recognition is shown. The CNN (100) may be applied to speech recognition within the framework of a hybrid NN-HMM architecture. That is, the CNN (100) may be applied to an obtained or observed acoustic signal comprising speech (102), with the output of the pooling layer feeding a fully connected hidden NN layer (106), yielding better speech features (110) having increased robustness to speaker and noise variations. The CNN (100) is operable to analyze spectro-temporal patches of the acoustic signal, providing the HMM component with a signal representation that may be characterized by increased robustness to variances in speaker and noise conditions. The HMM component may comprise a decoding unit (decoder) (118), which may be applied to the output of the CNN to output a sequence of labels that were recognized. It will be appreciated that the decoder may alternatively operate relative to another state-based model, rather than an HMM, to output a label sequence.
[0018] A preprocessing unit (preprocessor) (108) computes speech features that are suitable for the CNN (100). These features are computed from the acoustic signal (102) prior to inputting the frequency domain representation of the signal (104) to the CNN (100). The preprocessor (108) may therefore generate, for each of a plurality of signal frames (in the time domain), a frequency domain representation of the obtained acoustic signal (102) and divide this representation into a plurality of bands (shown in Fig. 3) which are input to the CNN (100), where a band refers to a particular frequency range that is represented by a vector of features either in the input or in other CNN layers' units. Alternatively, rather than the preprocessor (108) dividing the frequency domain representation into a plurality of bands, the CNN may comprise a set of filters enabling each convolution unit of the bottom layer to operate on particular bands.
[0019] The CNN comprises at least one pair of layers (112), each pair comprising a convolution layer (114) and a pooling layer (116). The convolution layer (114) applies a set of kernels, each one of the kernels processing one or more bands of the layer input. Each kernel produces a learnable weight vector. The pooling layer (116) comprises one or more pooling layer units, each one of the pooling layer units applying a pooling function to one or more convolution unit kernel outputs computed at different bands. The pooling function may be an average or a maximum function or any other function that aggregates multiple values into a single value. Top fully connected layers may be applied to combine pooling layer units from the topmost pooling layer. A final softmax layer may then be applied to combine the outputs of the fully connected layer using softmax functions.
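As a rough illustration of this layer pairing, the NumPy sketch below runs one context window through a frequency-axis convolution layer, a max-pooling layer, a fully connected layer and a softmax. It is a minimal sketch, not the patented implementation; the band count, kernel size, pooling parameters, unit counts and the tanh/softmax choices are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the patent text).
B, P = 40, 45      # B input bands, each described by a feature vector of size P
J, s = 32, 8       # J kernels, each spanning s neighbouring bands
r, n = 3, 2        # pooling size and sub-sampling factor

v = rng.standard_normal((B, P))            # one context window: B band vectors

# Convolution layer with full weight sharing along the frequency axis.
W = rng.standard_normal((J, s, P)) * 0.01  # W[j, b, p]
a = np.zeros(J)
K = B - s + 1                              # convolution bands (no padding here)
h = np.array([np.tanh(np.einsum('jbp,bp->j', W, v[k:k + s]) + a)
              for k in range(K)])          # shape (K, J)

# Pooling layer: maximum over r neighbouring convolution bands, every n bands.
M = (K - r) // n + 1
p = np.array([h[m * n:m * n + r].max(axis=0) for m in range(M)])  # shape (M, J)

# Fully connected layer and softmax over output labels (e.g. HMM states).
num_labels = 100
Wf = rng.standard_normal((M * J, num_labels)) * 0.01
logits = p.reshape(-1) @ Wf
posteriors = np.exp(logits - logits.max())
posteriors /= posteriors.sum()
```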
[0020] The CNN is applied along the frequency axis of the observed speech signal, while the variability along the time axis of the speech signal may be normalized by application of an HMM component. The dependency between adjacent speech frames may be utilised by the application of a long time context window that feeds as input to the CNN.
[0021] The output of the CNN may be the probability P(s | O_t) that the frame at time t belongs to an HMM state s. Generally, in an example, t may be on the order of tens of milliseconds or some other period suitable for the HMM. A decoder may be applied over the signal in the time domain to match the states to speech based on the probabilities P, where the best fit may be used to output a sequence of labels.
[0022] As previously mentioned, it has been found that speech signals typically exhibit locality characteristics along the frequency axis. Thus, different phonemes may have energy concentrations in different local bands along the frequency axis. For example, voiced phonemes have a number of formants appearing at different frequencies. The preprocessor generates frequency representations of the signal to enable the CNN to distinguish phonemes based upon the local energy patterns. As a result, kernels provided by the CNN that operate on different bands of local frequency regions may represent these local structures, and may represent combinations of these local structures along the whole frequency axis to enable the CNN to recognize output labels. The locality of processing within these kernels further enables robustness against ambient noises, particularly where noises are concentrated in only parts of the spectrum. In this situation, kernels that process bands in relatively cleaner parts of the spectrum can still detect speech features well, in order to compensate for the ambiguity of noisy parts of the spectrum.
[0023] The CNN described herein is capable of modeling these local frequency structures by enabling each dimension of each vector computed by each kernel of the convolution layer to operate upon features representing a limited bandwidth (the receptive field of the respective dimension of the kernel output vector) of the complete speech spectrum. To achieve this, the preprocessor (108) may be operable to represent the observed signal in a frequency scale that can be divided into a number of local bands. The frequency scale may, therefore, comprise any of a linear spectrum, a Mel-scale spectrum, filter-bank features or any other locality preserving features.
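A rough sketch of such locality-preserving preprocessing is shown below: the waveform is framed, a short-time magnitude spectrum is taken, and the bins are grouped into a fixed number of local bands. A real front end would more likely use Mel-scaled filter banks together with delta and acceleration features; the linear band grouping and the helper names here are simplifying assumptions for illustration.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D waveform into overlapping, Hamming-windowed frames."""
    num_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(num_frames)[:, None]
    return x[idx] * np.hamming(frame_len)

def band_features(x, sample_rate=16000, frame_ms=25, hop_ms=10, num_bands=40):
    """Log magnitude spectrum per frame, grouped into num_bands local bands."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    frames = frame_signal(x, frame_len, hop)
    spectrum = np.abs(np.fft.rfft(frames, axis=1))            # (frames, bins)
    bins_per_band = spectrum.shape[1] // num_bands
    bands = spectrum[:, :bins_per_band * num_bands]
    bands = bands.reshape(len(frames), num_bands, bins_per_band).mean(axis=2)
    return np.log(bands + 1e-8)                               # (frames, num_bands)

# Example: one second of noise stands in for an utterance.
features = band_features(np.random.default_rng(0).standard_normal(16000))
print(features.shape)    # (98, 40): 98 frames, 40 local bands
```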
[0024] Referring now to Figs. 2 and 3, the preprocessor (108) obtains the acoustic signal, at block 200, and generates a speech signal v (300) by transforming, in block 202, the observed speech signal for a particular context window (i.e., a plurality of frames) to its frequency domain representation.
[0025] In block 204, the preprocessor then divides v into a plurality of B bands, i.e., v = [v_1 v_2 ... v_B], where v_b is the feature vector representing band b. The feature vector v_b may include speech spectral features (s), delta (Δs) and acceleration (Δ²s) parameters from local band b of all feature frames within the current context window, where the window comprises c frames, as follows:

$$v_b = [s(t), \Delta s(t), \Delta^2 s(t), s(t+1), \Delta s(t+1), \Delta^2 s(t+1), \ldots, s(t+c), \Delta s(t+c), \Delta^2 s(t+c)]$$

[0026] In block 206, the speech signal v, for each particular context window, is individually input to the convolution layer (302), which operates upon each window of the speech signal v. Activations of the convolution layer (302) are divided into K bands where each band contains J different kernel activations.
The number of bands K in the convolution layer output may be equal to the number of input bands by adding extra bands with zero values before and after the actual input bands. Each band activation may be denoted as h_k = [h_{k,1} h_{k,2} ... h_{k,J}]. The convolution layer activations can be computed as a convolution-like operation of each kernel on the lower layer bands followed by a non-linear activation function:

$$h_{k,j} = \theta\!\left( \sum_{p=1}^{P} \sum_{b=1}^{s} w_{j,b,p}\, v_{b+k-1,\,p} + a_j \right)$$

where θ(x) is the activation function, s is the kernel size in the number of input bands, P is the size of v_b, and w_{j,b,p} is the weight element representing the pth component of the bth band of the jth filter kernel. In full weight sharing, all of the K bands share the same set of filter kernels as shown in the previous equation.
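A direct NumPy transcription of this activation under full weight sharing is sketched below; the array sizes and the sigmoid choice for θ are assumptions made only for the example.

```python
import numpy as np

def conv_band_activations(v, W, a):
    """h[k, j] = theta(sum_p sum_b W[j, b, p] * v[k + b, p] + a[j]),
    the same J kernels applied at every band position k (0-based indices)."""
    J, s, P = W.shape
    K = v.shape[0] - s + 1
    h = np.empty((K, J))
    for k in range(K):
        z = np.einsum('jbp,bp->j', W, v[k:k + s]) + a
        h[k] = 1.0 / (1.0 + np.exp(-z))        # theta taken to be a sigmoid here
    return h

rng = np.random.default_rng(1)
v = rng.standard_normal((40, 45))              # B = 40 bands, P = 45 features per band
W = rng.standard_normal((32, 8, 45)) * 0.01    # J = 32 kernels of size s = 8
h = conv_band_activations(v, W, np.zeros(32))
print(h.shape)                                 # (33, 32): K bands of J activations
```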
[0027] This convolution layer comprises K bands where each band comprises J units. The convolution layer may be considered similar to a standard NN layer where all nodes of the hidden layer are grouped into bands and each node receives inputs only from bands of the lower layer. Moreover, weights and biases for the jth node of each band may be shared among different hidden convolution layer bands. Note that in Fig. 3, weights represented by the same line style may be shared among all convolution layer bands.
[0028] As previously mentioned, the speech spectrum includes many local structures, and these local structures are distributed over a range of the frequency axis, where each local structure typically appears to center around one particular frequency that can vary within a limited range. For example, central frequencies of formants for the same phoneme may vary within a limited range and typically differ between different speakers and sometimes between different utterances from the same speaker.
[0029] A pooling layer may be operable to mitigate the foregoing variability. In block 208, a pooling layer is applied to the output of the convolution layer. The pooling layer activations may be divided into M bands. Each band of the pooling layer receives input from r neighbouring convolution layer bands to generate J values corresponding to the J convolution kernels. The jth value represents the result of the pooling function on the corresponding activations of the jth convolution kernel along the r bands of the convolution layer, as shown in Fig. 3. The pooling layer may generate a lower resolution version of the convolution layer by applying this pooling operation every n convolution layer bands, where n is the sub-sampling factor. As a result, a smaller number of bands may be obtained in the pooling layer that provide lower frequency resolution features that may contain more useful information that may be further processed by higher layers in the CNN hierarchy.
[0030] The activations of the mth band of the pooling layer may be denoted as p_m = [p_{m,1} p_{m,2} ... p_{m,J}]^T. Each activation may be computed as:

$$p_{m,j} = \rho_{k=1}^{r}\!\left( h_{(m-1)\times n + k,\; j} \right)$$

where r may be referred to as the pooling size, and n may be smaller than r to have some overlap between adjacent pooling bands. ρ is the pooling function. Examples of this pooling function are the maximum, sum, and average, but it may be any summary function that can compute a single value from an arbitrary set of values, or it may be learned. The example shown in Fig. 3 has a pooling layer with a sub-sampling factor of 2 and a pooling size of 3.
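A minimal sketch of this pooling step, assuming ρ is the maximum function; r and n are the pooling size and sub-sampling factor defined above (indices are 0-based in the code).

```python
import numpy as np

def pool_bands(h, r, n, pool=np.max):
    """p[m, j] = pool of h[m * n : m * n + r, j]; adjacent pooling bands
    overlap whenever n < r."""
    K, _ = h.shape
    M = (K - r) // n + 1
    return np.array([pool(h[m * n:m * n + r], axis=0) for m in range(M)])

h = np.arange(33 * 4, dtype=float).reshape(33, 4)   # toy convolution layer output
p = pool_bands(h, r=3, n=2)                          # as in Fig. 3: size 3, factor 2
print(p.shape)                                       # (16, 4)
```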
[0031] Referring now to Fig. 4, an exemplary CNN with limited weight sharing is shown. In a standard CNN, a full weight sharing scheme is used where the local filter weights are tied and shared for all positions or bands within the whole input space, as in Fig. 3. In this case, computation of all filters' activations may be a convolution of the filter weights and the input signals.
[0032] In speech signals, however, different local patterns appear at different frequencies. Therefore, it may be more effective to have a limited weight sharing scheme. In a limited weight sharing scheme, weight sharing is limited to those local filters that are close to one another and are pooled together in the pooling layer. This weight sharing strategy is depicted in Fig. 4, where one set of kernel weights is used for each pooling band. For example, in Fig. 4, W^(1) represents the weight matrix shared between bands h_1^(1), h_2^(1), and h_3^(1), where h_1^(1) receives input from bands 1-4 in the input layer, h_2^(1) receives input from bands 2-5, and so on.
[0033] As a result, the convolution layer may be divided into a number of convolution sections, where all convolution bands in each section are pooled together into one pooling layer band and are computed by convolving section kernels with a small number of the input layer bands. In this case, the pooling layer activations may be computed as:

$$p_{m,j} = \rho_{k=1}^{r}\!\left( h^{(m)}_{k,j} \right)$$

with

$$h^{(m)}_{k,j} = \theta\!\left( \sum_{p=1}^{P} \sum_{b=1}^{s} w^{(m)}_{j,b,p}\, v_{m\times n + b + k - 1,\; p} + a^{(m)}_j \right)$$

where h^{(m)}_{k,j} is the activation of the jth kernel of the mth section of the convolution layer applied at the kth band position. In this context, n may be referred to as a band shift in the pooling layer.
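The sketch below illustrates the limited weight sharing variant: each pooling band m has its own kernel set, applied only to the r band positions that feed that pooling band. The shapes, the sigmoid activation and the exact 0-based band offset are illustrative assumptions.

```python
import numpy as np

def limited_sharing_pool(v, W, a, r, n):
    """p[m, j] = max_k h_m[k, j], where section m has its own kernels and
    h_m[k, j] = sigmoid(sum_p sum_b W[m, j, b, p] * v[m * n + k + b, p] + a[m, j])."""
    M, J, s, _ = W.shape
    p = np.empty((M, J))
    for m in range(M):
        h = np.empty((r, J))
        for k in range(r):
            patch = v[m * n + k:m * n + k + s]          # s neighbouring input bands
            z = np.einsum('jbp,bp->j', W[m], patch) + a[m]
            h[k] = 1.0 / (1.0 + np.exp(-z))
        p[m] = h.max(axis=0)
    return p

rng = np.random.default_rng(2)
v = rng.standard_normal((40, 45))                  # 40 input bands, 45 features each
M, J, s, r, n = 6, 32, 4, 3, 2                      # illustrative sizes
W = rng.standard_normal((M, J, s, 45)) * 0.01       # one kernel set per section
a = np.zeros((M, J))
print(limited_sharing_pool(v, W, a, r, n).shape)    # (6, 32)
```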
[0034] It should be understood that the full weight sharing implementation as described herein is distinct from the limited weight sharing implementation described above. In the case of full weight sharing, the sets of weights in different bands are configured to be the same. However, this configuration does not constrain the choice of value for M, the number of bands into which the pooling layer activations may be divided. In the case of limited weight sharing as described above, however, the sets of weights in different bands are configured to be the same when the convolution layer consists of only one convolution section and, thus, when there is only a single pooling band for the entire corresponding convolution layer. In a general case, there may be multiple convolution sections, and there may be a different number of bands into which the pooling layer activations of each section may be divided.
[0035] This type of limited weight sharing may be applied only in the topmost convolution layer, because the filters in different bands are not related and cannot be convolved and pooled afterwards.
[0036] In another aspect, the CNN may comprise one or more pairs of convolution and pooling layers, where the lowest layers process a small number of input frequency bands independently to generate a higher level representation with lower frequency resolution. The number of bands may decrease in higher layers. The input to each convolution layer may be padded to ensure that the first and last input bands are processed by a suitable number of kernels in the convolution layer. For example, each input may be padded by adding a number of dummy bands before and after the first and last bands, respectively, so that the number of bands is consistent between the original input and convolution layers.
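A one-line illustration of this padding, assuming zero-valued dummy bands and a kernel of size s bands.

```python
import numpy as np

v = np.random.default_rng(3).standard_normal((40, 45))   # 40 bands, 45 features each
s = 8                                                     # kernel size in bands
# Add dummy zero bands before and after the real bands so that the convolution
# layer produces the same number of bands as its input.
v_padded = np.pad(v, ((s // 2, (s - 1) // 2), (0, 0)))
print(v_padded.shape)       # (47, 45); 47 - 8 + 1 = 40 convolution bands
```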
[0037] In embodiments, the top layers of the CNN are fully connected to combine different local structures extracted in the lower layers for the final recognition.
[0038] In block 210, the output from the pooling layer is fed to a number of fully connected hidden layers. The posterior probabilities of output labels may be computed using a top softmax layer (120). The CNN may process each input speech utterance by generating all output label probabilities for each frame. In block 212, a decoder, such as a Viterbi decoder for example, may then be applied to obtain the sequence of labels corresponding to the input utterance.
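As a compact sketch of the decoding step, the function below runs a Viterbi search over per-frame label log-posteriors with an assumed label transition matrix; in a real hybrid NN-HMM system the posteriors would typically be converted to scaled likelihoods and decoded against the full HMM and language model, so this is only an illustration.

```python
import numpy as np

def viterbi(log_post, log_trans):
    """Most probable label sequence given per-frame log posteriors (T x S)
    and a log transition matrix (S x S)."""
    T, S = log_post.shape
    score = log_post[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans           # (S, S): previous -> current
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_post[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(4)
post = rng.dirichlet(np.ones(5), size=20)            # 20 frames, 5 labels
trans = np.full((5, 5), 0.1) + np.eye(5) * 0.5       # sticky toy transitions
labels = viterbi(np.log(post), np.log(trans))
```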
[0039] In the training stage, the CNN may, for example, be estimated using a back-propagation technique to minimize the cross entropy between the targets and the output layer activations. The training targets may be obtained from forced alignments generated from a trained HMM component.
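For the training objective, the sketch below computes the cross-entropy loss against forced-alignment state targets and its gradient with respect to the softmax-layer inputs, which back-propagation would push down through the fully connected and convolution layers. It is a generic illustration, not the patent's training recipe.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_and_grad(logits, targets):
    """Mean cross-entropy against forced-alignment state targets, plus the
    gradient with respect to the logits (softmax layer inputs)."""
    probs = softmax(logits)
    T = logits.shape[0]
    loss = -np.log(probs[np.arange(T), targets] + 1e-12).mean()
    grad = probs.copy()
    grad[np.arange(T), targets] -= 1.0
    return loss, grad / T

rng = np.random.default_rng(5)
logits = rng.standard_normal((8, 100))        # 8 frames, 100 HMM-state labels
targets = rng.integers(0, 100, size=8)        # labels from a forced alignment
loss, grad = cross_entropy_and_grad(logits, targets)
```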
[0040] In exemplary embodiments, in feature extraction, speech may be analyzed using, for example, a 25-ms frame length multiplied by a Hamming function with a 10-ms fixed frame rate. The speech feature vector may be generated by Fourier-transform-based filter-banks, which may include, for example, 40 coefficients distributed on a Mel-scale and energy, along with their first and second temporal derivatives. All speech data may be normalized by averaging over all training cases so that each coefficient, first derivative, and second derivative has zero mean and unit variance. An n-gram language model may be applied in decoding to generate the output label sequence.
[0041] In exemplary embodiments, for network training, learning rate, annealing and early stopping strategies may be applied. The NN input layer may include a context window of 15 frames, for example. The input of the CNN may be divided into 40 bands, for example. In this example, each band may include one of the 40 filter-bank coefficients along the 15-frame context window, including their first and second derivatives. Moreover, all bands of the first convolution layer may receive the energy as an extra input because it may not be suitable to treat it as a frequency band. Moreover, the inputs of convolution layers may be padded as previously mentioned. Exemplary pooling sizes may be from 1 to 8, for example. Around 80 to 97 filters may be provided per band, for example.
[0042] Although the above has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the claims appended hereto.
Claims (16)
1. A method for applying a convolutional neural network to a speech signal to mitigate acoustic variation in speech, the convolutional neural network comprising at least one processor, the method comprising:
obtaining an acoustic signal comprising speech;
pre-processing the acoustic signal to:
transform the acoustic signal to its frequency domain representation; and divide the frequency domain representation into a plurality of frequency bands;
providing the plurality of frequency bands to a convolution layer of the convolutional neural network, the convolution layer comprising a plurality of convolution units each receiving input from a different subset of neighbouring frequency bands of the plurality of frequency bands, each subset relating to a limited frequency bandwidth; and providing an output of the convolution layer to a pooling layer of the convolutional neural network, the pooling layer comprising a plurality of pooling units each receiving input from at least one of the convolution units, the output of the pooling layer being a representation of the acoustic signal mitigating acoustic variation.
2. The method of claim 1, further comprising applying the pre-processing for frames of the acoustic signal and providing the frequency domain representation for each of the frames to the convolution layer.
3. The method of claim 1, further comprising providing an output of a topmost pooling layer to a fully connected layer.
4. The method of claim 3, wherein an output of the fully connected layer comprises probabilities of the speech belonging to certain output labels.
5. The method of claim 4, further comprising providing the probabilities to a decoder operable to determine a desired sequence of labels.
6. The method of claim 1, further comprising providing an output of the pooling layer to a further convolution layer.
7. The method of claim 1, wherein the convolution units apply a set of kernels that operate on local bands of the acoustic signal.
8. The method of claim 1, wherein the convolution units linked to a common pooling unit in an upper layer share a same weight.
9. A system for mitigating acoustic variation in speech comprising:
a preprocessor operable to:
obtain an acoustic signal comprising speech; and preprocess the acoustic signal to:
transform the acoustic signal to its frequency domain representation; and divide the frequency domain representation into a plurality of frequency bands;
a convolutional neural network, the convolutional neural network comprising at least one pair of:
a convolution layer comprising a plurality of convolution units each receiving input from a different subset of neighbouring frequency bands of the plurality of frequency bands, each subset relating to a limited frequency bandwidth;
a pooling layer comprising a plurality of pooling units each receiving input from at least one of the convolutional units, an output of the pooling layer being a representation of the acoustic signal mitigating acoustic variation.
10. The system of claim 9, further comprising the preprocessor applying the preprocessing for frames of the acoustic signal and operable to provide the frequency domain representation for each of the frames to the convolution layer.
11. The system of claim 9, further comprising at least one fully connected neural network layer operable to obtain an output of a topmost pooling layer for speech recognition.
12. The system of claim 11, wherein a topmost fully connected layer is linked to a softmax layer operable to output probabilities of the speech belonging to certain output labels.
13. The system of claim 12, further comprising a decoder operable to determine a most probable sequence of labels based on the probabilities.
14. The system of claim 9, wherein a higher convolution layer-pooling layer pair is operable to obtain an output of a lower convolution layer-pooling layer pair.
15. The system of claim 9, wherein the convolution units apply a set of kernels that operate on local bands of the acoustic signal.
16. The system of claim 9, wherein the convolution units that are attached to the same pooling unit in an upper layer share the same kernel weights.
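As a further illustration only (again not part of the claims), the sketch below follows the system of claims 9-16 one step higher up the network. It assumes the limited weight sharing of claims 8 and 16, in which convolution units feeding a common pooling unit share one kernel set, and then stacks a fully connected layer, a softmax over output labels, and a per-frame argmax as a stand-in for the claimed decoder (an actual system would decode the most probable label sequence, e.g. with an HMM/Viterbi search). All sizes, the label count, and the random weights are assumptions chosen for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def limited_weight_sharing_layer(bands, kernels, filter_width=8, pool_size=4):
    """kernels[p] is shared only by the convolution units pooled by unit p."""
    n_frames, n_bands = bands.shape
    n_pool, n_maps, _ = kernels.shape
    pooled = np.empty((n_frames, n_pool, n_maps))
    for p in range(n_pool):
        activations = []
        for s in range(pool_size):                   # convolution units under pool unit p
            lo = p * pool_size + s
            local = bands[:, lo:lo + filter_width]   # limited frequency bandwidth
            activations.append(np.maximum(local @ kernels[p].T, 0.0))
        pooled[:, p, :] = np.max(np.stack(activations, axis=0), axis=0)  # max pooling
    return pooled.reshape(n_frames, -1)

rng = np.random.default_rng(0)
bands = rng.standard_normal((98, 40))                # e.g. 98 frames x 40 frequency bands
n_units = 40 - 8 + 1                                 # convolution units across the bands
n_pool = n_units // 4                                # one pooling unit per 4 conv units
kernels = rng.standard_normal((n_pool, 32, 8)) * 0.1 # per-pool-unit shared kernels
hidden = limited_weight_sharing_layer(bands, kernels)            # (98, n_pool * 32)

n_labels = 50                                        # assumed number of output labels
w_fc = rng.standard_normal((hidden.shape[1], n_labels)) * 0.01   # fully connected layer
probs = softmax(hidden @ w_fc)                       # per-frame label probabilities
decoded = probs.argmax(axis=1)                       # stand-in for a sequence decoder
print(decoded[:10])
```

The design point the sharing scheme illustrates: because speech patterns differ across frequency regions, each pooling region can keep its own kernels while still gaining the shift tolerance that pooling provides within that region.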
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3022052A CA3022052C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
CA2810457A CA2810457C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2810457A CA2810457C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3022052A Division CA3022052C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2810457A1 CA2810457A1 (en) | 2014-09-25 |
CA2810457C (en) | 2018-11-20 |
Family
ID=51610613
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3022052A Active CA3022052C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
CA2810457A Active CA2810457C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3022052A Active CA3022052C (en) | 2013-03-25 | 2013-03-25 | System and method for applying a convolutional neural network to speech recognition |
Country Status (1)
Country | Link |
---|---|
CA (2) | CA3022052C (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11869530B2 (en) | 2016-09-06 | 2024-01-09 | Deepmind Technologies Limited | Generating audio using neural networks |
US11948066B2 (en) | 2016-09-06 | 2024-04-02 | Deepmind Technologies Limited | Processing sequences using convolutional neural networks |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3497630B1 (en) * | 2016-09-06 | 2020-11-04 | Deepmind Technologies Limited | Processing sequences using convolutional neural networks |
US10403268B2 (en) | 2016-09-08 | 2019-09-03 | Intel IP Corporation | Method and system of automatic speech recognition using posterior confidence scores |
WO2018081089A1 (en) | 2016-10-26 | 2018-05-03 | Deepmind Technologies Limited | Processing text sequences using neural networks |
US20190365342A1 (en) * | 2018-06-04 | 2019-12-05 | Robert Bosch Gmbh | Method and system for detecting abnormal heart sounds |
CN109147826B (en) * | 2018-08-22 | 2022-12-27 | 平安科技(深圳)有限公司 | Music emotion recognition method and device, computer equipment and computer storage medium |
CN111210815B (en) * | 2019-11-28 | 2023-01-06 | 赵铭 | Deep neural network construction method for voice command word recognition, and recognition method and device |
CN112489687B (en) * | 2020-10-28 | 2024-04-26 | 深兰人工智能芯片研究院(江苏)有限公司 | Voice emotion recognition method and device based on sequence convolution |
CN112225026B (en) * | 2020-10-30 | 2022-05-24 | 江苏蒙哥马利电梯有限公司 | Elevator maintenance method on demand based on acoustic signal |
US20220309328A1 (en) * | 2021-03-29 | 2022-09-29 | Infineon Technologies LLC | Compute-in-memory devices, systems and methods of operation thereof |
CN113921041B (en) * | 2021-10-11 | 2024-10-29 | 山东省计算中心(国家超级计算济南中心) | Recording equipment identification method and system based on grouped convolution attention network |
Also Published As
Publication number | Publication date |
---|---|
CA2810457A1 (en) | 2014-09-25 |
CA3022052C (en) | 2021-05-18 |
CA3022052A1 (en) | 2014-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9734824B2 (en) | System and method for applying a convolutional neural network to speech recognition | |
CA2810457C (en) | System and method for applying a convolutional neural network to speech recognition | |
Abdel-Hamid et al. | Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition | |
Chang et al. | Robust CNN-based speech recognition with Gabor filter kernels. | |
Schwarz et al. | Hierarchical structures of neural networks for phoneme recognition | |
Narayanan et al. | Investigation of speech separation as a front-end for noise robust speech recognition | |
Abdel-Hamid et al. | Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code | |
Abdel-Hamid et al. | Rapid and effective speaker adaptation of convolutional neural network based models for speech recognition. | |
Li et al. | Factorized adaptation for deep neural network | |
Bengio et al. | Global optimization of a neural network-hidden Markov model hybrid | |
Prasad et al. | Improved cepstral mean and variance normalization using Bayesian framework | |
Saon et al. | Unfolded recurrent neural networks for speech recognition. | |
Zhao et al. | Domain and speaker adaptation for cortana speech recognition | |
Tüske et al. | Speaker adaptive joint training of gaussian mixture models and bottleneck features | |
Tsakalidis et al. | Discriminative linear transforms for feature normalization and speaker adaptation in HMM estimation | |
Wang et al. | Unsupervised speaker adaptation of batch normalized acoustic models for robust ASR | |
Alam et al. | Use of multiple front-ends and i-vector-based speaker adaptation for robust speech recognition | |
Paulik | Lattice-based training of bottleneck feature extraction neural networks. | |
Feng et al. | On using heterogeneous data for vehicle-based speech recognition: A DNN-based approach | |
Tomashenko et al. | GMM-derived features for effective unsupervised adaptation of deep neural network acoustic models | |
Zhang et al. | Comparison on Neural Network based acoustic model in Mongolian speech recognition | |
Chen et al. | Deep neural network acoustic modeling for native and non-native Mandarin speech recognition | |
Cui et al. | Stereo hidden Markov modeling for noise robust speech recognition | |
Malekzadeh et al. | Persian vowel recognition with MFCC and ANN on PCVC speech dataset | |
Zhao et al. | Time-frequency kernel-based CNN for speech recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| EEER | Examination request | Effective date: 20170814 |