EP1886303B1 - Method of adapting a neural network of an automatic speech recognition device - Google Patents

Method of adapting a neural network of an automatic speech recognition device

Info

Publication number
EP1886303B1
EP1886303B1 (application EP05747980A)
Authority
EP
European Patent Office
Prior art keywords
stage
lhn
neural network
phoneme
linear stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP05747980A
Other languages
English (en)
French (fr)
Other versions
EP1886303A1 (de)
Inventor
Roberto Gemello (Loquendo S.p.A.)
Franco Mana (Loquendo S.p.A.)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Loquendo SpA
Original Assignee
Loquendo SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Loquendo SpA filed Critical Loquendo SpA
Publication of EP1886303A1
Application granted
Publication of EP1886303B1
Legal status: Active (current)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/065: Adaptation
    • G10L15/08: Speech classification or search
    • G10L15/16: Speech classification or search using artificial neural networks
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025: Phonemes, fenemes or fenones being the recognition units

Definitions

  • the present invention relates to the field of automatic speech recognition. More particularly, the present invention relates to a method of adapting a neural network of an automatic speech recognition device, a corresponding adapted neural network and a corresponding automatic speech recognition device.
  • An automatic speech recognition device is an apparatus which is able to recognise voice signals such as words or sentences uttered in a predefined language.
  • An automatic speech recognition device may be employed for instance in devices for converting voice signals into written text or for detecting a keyword allowing a user to access a service. Further, an automatic speech recognition device may be employed in telephone systems supporting particular services, such as providing a user with the telephone number of a given telephone subscriber.
  • In operation, an automatic speech recognition device performs a number of steps, which will be briefly described hereinafter.
  • First of all, the automatic speech recognition device receives the voice signal to be recognised through a phonic channel.
  • Examples of phonic channels are a channel of a fixed telephone network, a channel of a mobile telephone network, or the microphone of a computer.
  • The voice signal is first converted into a digital signal.
  • The digital signal is periodically sampled with a certain sampling period, typically of a few milliseconds.
  • Each sample is commonly termed a "frame".
  • Subsequently, each frame is associated with a set of spectral parameters describing the voice spectrum of the frame.
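The following sketch illustrates this front-end step: the signal is split into frames and each frame is mapped to a small vector of spectral parameters. The 16 kHz rate, 25 ms window, 10 ms hop, Hamming window and log-spectrum parameters are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

def frames_to_spectral_params(signal, rate=16000, frame_ms=25, hop_ms=10, n_params=13):
    """Split a digitised voice signal into frames and associate each frame
    with a small set of spectral parameters (here: log-magnitude bins)."""
    frame_len = int(rate * frame_ms / 1000)   # samples per frame
    hop_len = int(rate * hop_ms / 1000)       # sampling period between frames
    params = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        params.append(np.log(spectrum[:n_params] + 1e-10))  # crude spectral parameters
    return np.array(params)

# Example: one second of (synthetic) audio yields 98 frames of 13 parameters each
print(frames_to_spectral_params(np.random.randn(16000)).shape)  # (98, 13)
```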
  • For each phoneme of the language, the pattern matching block calculates the probability that the frame associated with the set of spectral parameters corresponds to that phoneme.
  • A phoneme is the smallest portion of a voice signal such that, by replacing a first phoneme with a second phoneme in a voice signal of a certain language, two different signifiers of the language may be obtained.
  • Typically, a voice signal comprises a sequence of phonemes and transitions between successive phonemes.
  • In the following, the term "phoneme" will comprise both phonemes as defined above and transitions between successive phonemes.
  • Ideally, the pattern matching block calculates a high probability for the phoneme corresponding to an input frame, a low probability for phonemes with a voice spectrum similar to the voice spectrum of the input frame, and a zero probability for phonemes with a voice spectrum different from the voice spectrum of the input frame.
  • However, frames corresponding to the same phoneme may be associated with different sets of spectral parameters. This is because the voice spectrum of a phoneme depends on various factors, such as the characteristics of the phonic channel, of the speaker, and of the noise affecting the voice signal.
  • Phoneme probabilities associated with successive frames are employed, together with other language data (such as, for instance, vocabulary, grammar rules, and/or syntax rules), to reconstruct the words or sentences corresponding to the sequence of frames.
  • As mentioned above, the step of calculating the phoneme probabilities of an input frame is performed by a pattern matching block.
  • The pattern matching block may be implemented through a neural network.
  • A neural network is a network comprising at least one computation unit, which is called a "neuron".
  • A neuron is a computation unit adapted to compute an output value as a function of a plurality of input values (also called a "pattern").
  • More particularly, a neuron first computes a linear sum of its input values x1, ..., xn, weighted by the respective input connection weights w1, ..., wn (for simplicity, the bias is assumed to be zero):

    a = w1·x1 + w2·x2 + ... + wn·xn [1]

  • The neuron then transforms the linear sum in [1] according to an activation function g(·).
  • The activation function may be of different types. For instance, it may be either a Heaviside function (threshold function) or a sigmoid function:

    g(a) = 1 / (1 + e^(-a)) [2]

  • This type of sigmoid function is an increasing function limited to [0;1]; thus, it is adapted to represent a probability function.
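A minimal sketch of a single neuron implementing formulas [1] and [2]; the numeric weights and inputs are arbitrary examples.

```python
import numpy as np

def neuron_output(weights, pattern):
    """One neuron: weighted linear sum [1] followed by the sigmoid
    activation function [2]; the bias is assumed to be zero."""
    a = np.dot(weights, pattern)        # a = sum_i w_i * x_i        [1]
    return 1.0 / (1.0 + np.exp(-a))     # g(a) = 1 / (1 + e^(-a))    [2]

print(neuron_output(np.array([0.5, -0.2]), np.array([1.0, 2.0])))  # ~0.525
```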
  • Typically, a neural network employed in an automatic speech recognition device is a multi-layer neural network.
  • A multi-layer neural network comprises a plurality of neurons, which are grouped in two or more cascaded stages. Typically, neurons of a same stage have the same activation function.
  • A multi-layer neural network typically comprises an input stage, comprising a buffer for storing an input pattern.
  • Typically, an input pattern comprises the set of spectral parameters of an input frame, together with the sets of spectral parameters of a few frames preceding and following the input frame.
  • Such a pattern typically comprises the sets of spectral parameters of seven or nine consecutive frames.
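The sketch below builds such an input pattern by stacking a frame with its three preceding and three following frames (seven frames in total); replicating the first and last frames at the signal edges is an assumption made here for illustration.

```python
import numpy as np

def build_pattern(frames, t, context=3):
    """Stack the spectral parameters of frame t with those of `context`
    preceding and `context` following frames (7 frames for context=3)."""
    T = len(frames)
    # clamp indices so edge frames are replicated (illustrative choice)
    idx = [min(max(i, 0), T - 1) for i in range(t - context, t + context + 1)]
    return np.concatenate([frames[i] for i in idx])

frames = np.random.randn(100, 13)       # 100 frames, 13 spectral parameters each
print(build_pattern(frames, 50).shape)  # (91,) = 7 frames x 13 parameters
```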
  • The input stage is typically connected to an intermediate (or "hidden") stage, comprising a plurality of neurons.
  • Each input connection of each intermediate stage neuron is adapted to receive from the input stage a respective spectral parameter.
  • Each intermediate stage neuron computes a respective output value according to formulas [1] and [2].
  • The intermediate stage is typically connected to an output stage, also comprising a plurality of neurons.
  • Each output stage neuron has a number of input connections which is equal to the number of intermediate stage neurons.
  • Each input connection of each output stage neuron is connected to a respective intermediate stage neuron.
  • Each output stage neuron computes a respective output value as a function of the intermediate stage output values.
  • Each output stage neuron is associated with a respective phoneme.
  • Thus, the number of output stage neurons is equal to the number of phonemes.
  • The output value computed by each output stage neuron is the probability that the frame associated with the input pattern corresponds to the phoneme associated with that output stage neuron.
  • For simplicity, a multi-layer network with a single intermediate stage has been described above.
  • However, a multi-layer network may comprise a higher number of cascaded intermediate stages (typically two or three) between the input stage and the output stage.
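A compact sketch of such a network computing phoneme probabilities from an input pattern. The stage sizes loosely follow the patent's test setup described later (273 input parameters, a 300-neuron hidden stage, 683 output phonemes), the weights are random stand-ins, and the softmax at the output mirrors the output activation mentioned in the test description.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(pattern, W, W_out):
    """Input buffer -> intermediate stage (matrix W, sigmoid) ->
    output stage (matrix W_out, softmax over the C phonemes)."""
    hidden = sigmoid(W @ pattern)     # M intermediate-stage outputs
    scores = W_out @ hidden           # C phoneme scores
    e = np.exp(scores - scores.max())
    return e / e.sum()                # p(f1), ..., p(fC), summing to 1

D, M, C = 273, 300, 683               # sizes loosely following the patent's tests
rng = np.random.default_rng(0)
p = forward(rng.standard_normal(D),
            rng.standard_normal((M, D)) * 0.01,
            rng.standard_normal((C, M)) * 0.01)
print(p.shape, round(p.sum(), 6))     # (683,) 1.0
```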
  • Before a neural network acquires the ability to compute, for each input frame, the phoneme probabilities, a training of the neural network is required.
  • Training is typically performed through a training set, i.e. a set of sentences that, once uttered, comprise all the phonemes of the language.
  • Such sentences are usually uttered by different speakers, so that the network is trained in recognising voice signals uttered with different voice tones, accents, or the like.
  • Moreover, different phonic channels are usually employed, such as different fixed or mobile telephones, or the like.
  • Further, the sentences are uttered in different environments (car, street, train, or the like), so that the neural network is trained in recognising voice signals affected by different types of noise.
  • Training a network through such a training set results in a "generalist" neural network, i.e. a neural network whose performance, expressed as a word (or phoneme) recognition percentage, is substantially homogeneous and independent of the speaker, the phonic channel, the environment, or the like.
  • In some cases, however, an "adapted" neural network may be desirable, i.e. a neural network whose performance is improved when recognising a predefined set of voice signals.
  • For instance, a neural network may be adapted to a certain speaker, to a certain vocabulary, to a certain phonic channel, or to a certain noise environment.
  • In the following description, the term "adaptation set" will refer to a predetermined set of voice signals for which a neural network is adapted.
  • An adaptation set comprises voice signals with common features, such as voice signals uttered by a certain speaker, voice signals comprising a certain set of words, or voice signals affected by a certain noise type, or the like.
  • Adaptation techniques are known in the art. For instance, a Linear Input Network (LIN) technique is known, wherein the speaker-dependent input vectors are mapped to the speaker-independent (SI) system by means of an additional trained linear input stage.
  • A further known adaptation technique is the Retrained Speaker-Independent (RSI) adaptation, wherein, starting from an SI system, the full connectionist component is adapted to the new speaker.
  • In a Parallel Hidden Network (PNN) technique, additional hidden units are provided; during speaker adaptation, the weights connecting to/from these units are adapted while keeping all other parameters fixed.
  • Finally, in a GAMMA approach, the speaker-dependent input vectors are mapped to the SI system (as in the LIN technique) using a gamma filter.
  • The Applicant has noticed that the performance of an adapted neural network can be improved over the performance of neural networks adapted according to the above-cited known methods.
  • Thus, the object of the present invention, as claimed in claim 1, is to provide a method of adapting a neural network of an automatic speech recognition device which makes it possible to obtain an adapted neural network with improved performance for a given adaptation set.
  • The present invention provides a method of adapting a multi-layer neural network of an automatic speech recognition device, the method comprising the steps of: providing a neural network comprising an input stage for storing at least one voice signal sample, an intermediate stage having input connections associated with a first weight matrix and an output stage having input connections associated with a second weight matrix, said output stage outputting phoneme probabilities; providing a linear stage in said neural network after said intermediate stage, said linear stage having the same number of nodes as said intermediate stage; and training said linear stage by means of an adaptation set, said first weight matrix and said second weight matrix being kept fixed during said training.
  • The method of the present invention makes it possible to obtain an adapted neural network with improved performance over a neural network adapted according to the prior art, in particular according to the above-cited LIN technique.
  • Adaptation according to the present invention is more effective, thus resulting in an increased word/phoneme recognition percentage.
  • Preferably, the step of training said linear stage comprises training the linear stage so that the phoneme probability of a phoneme belonging to voice signals not comprised in said adaptation set is equal to the phoneme probability of said phoneme as calculated by said neural network before the step of providing the linear stage.
  • Such a conservative adaptation training advantageously prevents a neural network adapted according to the present invention from losing its ability to recognise phonemes which are absent from the adaptation set.
  • Thus, the adapted neural network exhibits good performance also in recognising voice signals which are not fully comprised in the adaptation set.
  • Preferably, the additional linear stage training is carried out by means of an Error Back-Propagation algorithm.
  • Optionally, an equivalent stage could be provided, such an equivalent stage being obtained by combining the additional linear stage and either the following intermediate stage or the output stage.
  • The present invention as claimed in claim 9 provides a multi-layer neural network comprising an input stage for storing at least one voice signal sample, an intermediate stage having input connections associated with a first weight matrix, an output stage having input connections associated with a second weight matrix, and a linear stage which is adapted to be trained by means of an adaptation set, the first weight matrix and the second weight matrix being kept fixed while the linear stage is trained, said output stage being adapted to output phoneme probabilities, wherein said linear stage is provided after said intermediate stage, said linear stage having the same number of nodes as said intermediate stage.
  • The present invention as claimed in claim 17 provides an automatic speech recognition device comprising a pattern matching block comprising a neural network as set forth above.
  • The present invention as claimed in claim 18 provides a computer program comprising computer program code means adapted to perform all the steps of the above method when the program is run on a computer.
  • The present invention as claimed in claim 19 provides a computer-readable medium having a program recorded thereon, the computer-readable medium comprising computer program code means adapted to perform all the steps of the above method when the program is run on a computer.
  • Figure 1 schematically shows an automatic speech recognition device ASR.
  • The automatic speech recognition device ASR comprises a cascade of a front-end block FE, a pattern matching block PM and a decoder DEC.
  • The decoder DEC is further connected to a database G, comprising the vocabulary, grammar rules and/or syntax rules of the language for which the device ASR is intended.
  • The automatic speech recognition device ASR receives from a phonic channel PC a voice signal VS.
  • The front-end block FE digitises and samples the voice signal VS, thus generating a sequence of frames, and it associates to each frame a respective set of n spectral parameters SP1, ... SPi, ... SPn.
  • The spectral parameters SP1, ... SPi, ... SPn are sent to the pattern matching block PM, which in turn outputs the phoneme probabilities p(f1), ... p(fk), ... p(fC).
  • The phoneme probabilities are sent to the decoder DEC which, according to the information stored in the database G, recognises the voice signal.
  • As already mentioned, the pattern matching block PM may comprise a multi-layer neural network.
  • Figure 2 schematically shows a three-stage multi-layer neural network.
  • The neural network NN of Figure 2 comprises an input stage InS, an intermediate (hidden) stage IntS and an output stage OutS.
  • The input stage InS comprises a buffer B, which is adapted to store the pattern SP1, ... SPi, ... SPD of an input frame; as already mentioned, such a pattern comprises the set of spectral parameters associated with the input frame and the sets of spectral parameters associated with a number of frames preceding and following the input frame.
  • The intermediate stage IntS comprises a number M of neurons IN1, ... INj, ... INM.
  • Each input connection of each neuron IN1, ... INj, ... INM is adapted to receive a respective spectral parameter of the pattern SP1, ... SPi, ... SPD.
  • Further, each input connection of each neuron IN1, ... INj, ... INM is associated with a respective weight.
  • In the following, w_ji refers to the weight of the i-th input connection of the j-th intermediate stage neuron. For simplicity, as already mentioned, it is assumed that the bias is zero.
  • The output stage OutS comprises a number C of neurons ON1, ... ONk, ... ONC, wherein C is the number of phonemes.
  • Each neuron ON1, ... ONk, ... ONC has M input connections.
  • Each of the M input connections of each neuron ON1, ... ONk, ... ONC is connected to a respective intermediate stage neuron IN1, ... INj, ... INM.
  • Further, each input connection of each neuron ON1, ... ONk, ... ONC is associated with a respective weight.
  • In the following, w'_kj refers to the weight of the j-th input connection of the k-th output stage neuron. Also in this case, for simplicity, it is assumed that the bias is zero.
  • The output value computed by each output stage neuron ON1, ... ONk, ... ONC is the probability p(f1), ... p(fk), ... p(fC) according to which the frame associated with the pattern SP1, ... SPi, ... SPD corresponds respectively to the phoneme f1, ... fk, ... fC.
  • Figure 3 shows a simplified representation of the three-stage neural network NN of Figure 2.
  • The three stages of the network are represented as rectangles, each rectangle corresponding to a respective stage (InS, IntS, OutS).
  • In this representation, the input connections of each stage are summarised by the corresponding weight matrix; for instance, the weights w'_kj of the output stage form the C×M matrix

    W' = [ w'_11 ... w'_1M ]
         [  ...   w'_kj ... ]
         [ w'_C1 ... w'_CM ]
  • Figure 4 shows a known four-stage neural network.
  • The neural network of Figure 4 comprises an input stage comprising a buffer (not shown), a first intermediate (hidden) stage IntS1 comprising neurons (not shown), a second intermediate (hidden) stage IntS2 comprising neurons (not shown), and an output stage OutS comprising neurons (not shown).
  • The input connections of the first intermediate stage neurons are associated with a weight matrix W.
  • The input connections of the second intermediate stage neurons are associated with a weight matrix W'.
  • The input connections of the output stage neurons are associated with a weight matrix W".
  • Figure 5 shows the three-stage neural network of Figure 3, adapted according to the present invention.
  • The present invention provides for inserting an additional linear stage LHN after an intermediate stage of a neural network.
  • Such an additional linear stage LHN comprises a plurality of linear neurons, i.e. neurons with a linear activation function.
  • The input connections of the additional stage LHN are associated with a weight matrix W_LHN, as will be shown in further detail hereinafter.
  • In the three-stage network of Figure 5, the additional linear stage LHN is placed between the intermediate stage IntS and the output stage OutS.
  • Thus, the spectral parameters SP1, ... SPi, ... SPD are first processed by the weight matrix W and the activation function of the intermediate stage IntS.
  • Then, the additional stage LHN performs a linear transform by means of the weight matrix W_LHN and its linear activation function.
  • Finally, the output values computed by the additional stage LHN are processed by the weight matrix W' and the activation function of the output stage OutS, thus resulting in the phoneme probabilities p(f1), ... p(fk), ... p(fC).
  • Thus, the linear transform performed by the additional linear stage LHN is applied not to the input spectral coefficients, but to the spectral coefficients already processed by the intermediate stage. This increases the impact of the linear transform on the overall neural network operation, thus making it possible to obtain an adapted neural network with improved performance.
  • The additional stage LHN has a number of neurons equal to the number of intermediate stage neurons (M).
  • The weight matrix W_LHN associated with the input connections of the additional linear stage neurons is optimised by performing an adaptation training by means of an adaptation set. During such an adaptation training, the weight matrices W and W' are kept fixed.
  • Preferably, the adaptation training is performed through a so-called Error Back-Propagation algorithm, as disclosed, for instance, in C. M. Bishop, "Neural Networks for Pattern Recognition", Oxford University Press, 1995, pages 140-148.
  • Such an Error Back-Propagation algorithm consists in computing an error function as the difference between the set of computed phoneme probabilities and a set of target phoneme probabilities.
  • Such an error function is "back-propagated" through the neural network, in order to compute correction values to be applied to the weights of the weight matrices. According to the present invention, such correction values are applied only to the weights of the weight matrix W_LHN.
  • As the additional linear stage LHN has as many neurons as the intermediate stage, the weight matrix W_LHN is a square M×M matrix.
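A minimal sketch of this adaptation step, using PyTorch autograd in place of a hand-written Error Back-Propagation implementation. The random stand-in weights, the identity initialisation of W_LHN (so that the network behaviour is unchanged before adaptation) and the SGD learning rate are assumptions made here for illustration.

```python
import torch

D, M, C = 273, 300, 683           # pattern size, hidden/LHN size, phoneme count
W  = torch.randn(M, D) * 0.01     # intermediate-stage matrix, kept fixed
Wp = torch.randn(C, M) * 0.01     # output-stage matrix W', kept fixed

# Additional linear stage: a square MxM matrix; only this is trainable.
W_lhn = torch.eye(M, requires_grad=True)
opt = torch.optim.SGD([W_lhn], lr=0.01)

def forward(x):
    h = torch.sigmoid(W @ x)              # intermediate stage
    h = W_lhn @ h                         # LHN: linear activation function
    return torch.log_softmax(Wp @ h, 0)   # log phoneme probabilities

# One adaptation step on a (pattern, target phoneme) pair from the adaptation set
x, target = torch.randn(D), torch.tensor(42)
loss = -forward(x)[target]    # cross-entropy; gradients reach W_LHN only
loss.backward()
opt.step()
```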
  • Figures 6 and 7 show the four-stage neural network of Figure 4, adapted according to the present invention.
  • In Figure 6, the additional linear stage LHN is inserted between the first intermediate stage IntS1 and the second intermediate stage IntS2.
  • In Figure 7, the additional linear stage LHN is inserted between the second intermediate stage IntS2 and the output stage OutS.
  • The Applicant has verified that the adapted neural network of Figure 7 has better performance in comparison with the adapted neural network of Figure 6, as in the network of Figure 7 the additional linear stage LHN performs a linear transform on data which has already been subjected to a greater number of processing operations.
  • Also in this case, the weights w_pq of the weight matrix W_LHN are optimised by performing an adaptation training by means of an adaptation set. During such an adaptation training, the weight matrices W, W' and W" are kept fixed.
  • Preferably, the adaptation training is performed through an Error Back-Propagation algorithm, as described above with reference to Figure 5.
  • If the adaptation set lacks samples of some phonemes (referred to as "absent class phonemes" in the following), the adaptation training of a neural network induces the neural network to always compute a phoneme probability equal to zero for the absent class phonemes.
  • Thus, if the adapted neural network must recognise a voice signal comprising such a phoneme, it is not able to perform such a task, as the input connection weights optimised through the adaptation training always induce the network to associate a zero probability to that phoneme.
  • M.F. BenZeghiba describes a method for overcoming this problem by adding to the adaptation data some examples of the phonemes that did not appear therein. However, the Applicant has observed that such a method can be improved.
  • According to a preferred embodiment of the present invention, the additional linear stage weight matrix W_LHN is optimised by performing a conservative adaptation training, which preserves the performance of the adapted neural network in recognising absent class phonemes.
  • More particularly, during such a conservative adaptation training, the target phoneme probabilities are chosen as follows.
  • The absent class phonemes are associated with a target probability which is different from zero, even though it is known a priori that none of the adaptation set frames corresponds to any of these absent class phonemes.
  • Further, the target probabilities are preferably chosen so that the target probability of the phoneme corresponding to the frame is substantially higher than the target probabilities of the absent class phonemes, so that the decoder is induced to consider it unlikely that the frame corresponds to an absent class phoneme.
  • Thus, the weights w_pq obtained after the adaptation training are such that the adapted neural network still has the capability of recognising absent class phonemes.
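A sketch of how such a conservative target vector can be built, following the rule set out in claims 2 to 4: absent class phonemes keep the probability computed by the generalist network, the phoneme matching the frame receives 1 minus the summed absent-class probabilities, and the remaining phonemes receive 0. The helper name and the toy numbers are illustrative.

```python
import numpy as np

def conservative_targets(p_generalist, correct_idx, absent_mask):
    """Target phoneme probabilities for conservative adaptation training."""
    targets = np.zeros_like(p_generalist)
    targets[absent_mask] = p_generalist[absent_mask]        # non-zero targets
    targets[correct_idx] = 1.0 - targets[absent_mask].sum()
    return targets

p = np.array([0.6, 0.2, 0.15, 0.05])           # generalist output for a frame
absent = np.array([False, False, True, True])  # phonemes 2 and 3 never occur
print(conservative_targets(p, correct_idx=0, absent_mask=absent))
# [0.8  0.   0.15 0.05]
```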
  • According to embodiments of the present invention, the additional linear stage LHN may be "absorbed" into the successive stage. More particularly, after computing the optimum weights w_pq through an adaptation training, the additional linear stage LHN and the successive stage may optionally be replaced by a single equivalent stage.
  • For instance, in the three-stage network of Figure 5, the additional linear stage LHN and the output stage OutS may be replaced by a single equivalent stage.
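Since the LHN is linear and is applied before the activation function of the successive stage, the two weight matrices collapse into a single product matrix; a quick check of this identity with random stand-in matrices:

```python
import numpy as np

M, C = 300, 683
rng = np.random.default_rng(0)
W_lhn = rng.standard_normal((M, M))   # adapted linear stage
Wp    = rng.standard_normal((C, M))   # output-stage matrix W'

# Equivalent single stage: W'_eq = W' @ W_LHN, so the adapted network
# needs no extra stage (and no extra cost) at recognition time.
Wp_eq = Wp @ W_lhn

h = rng.standard_normal(M)            # an intermediate-stage output
assert np.allclose(Wp @ (W_lhn @ h), Wp_eq @ h)
```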
  • The Applicant has performed a number of comparative tests between a generalist neural network (i.e. before adaptation), the generalist neural network adapted according to the known LIN technique, and the generalist network adapted according to two different embodiments of the present invention.
  • According to a first embodiment (LHN), the generalist neural network has been adapted by inserting an additional linear stage.
  • According to a second embodiment (LHN+CT), the generalist neural network has been adapted by inserting an additional linear stage which has been trained through the conservative adaptation training described above.
  • In all the tests, the generalist neural network was a four-stage neural network of the type shown in Figure 4.
  • The buffer B size was 273.
  • The first intermediate stage comprised 315 neurons, whose activation function g(a) is the sigmoid function defined by equation [2].
  • The second intermediate stage comprised 300 neurons, whose activation function g(a) is the sigmoid function defined by equation [2].
  • The output stage comprised 683 neurons (for the Italian language), whose activation function is a so-called softmax function, a generalisation of the sigmoid function ensuring that the sum of the phoneme probabilities is equal to 1.
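A small sketch showing the two properties just mentioned: for two classes the softmax reduces to the sigmoid of equation [2], and for any number of classes its outputs sum to 1.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(scores):
    e = np.exp(scores - scores.max())  # shifted for numerical stability
    return e / e.sum()

a = 1.7
print(softmax(np.array([a, 0.0]))[0], sigmoid(a))  # identical values
print(softmax(np.random.randn(683)).sum())         # 1.0
```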
  • The generalist neural network has been adapted using different adaptation sets, namely:
  • Each adaptation set is associated with a respective test set.
  • The ensemble of a training set and its respective test set is usually termed a "corpus".
  • The WSJ0 corpus, which has been defined by the DARPA Spoken Language Program, has a vocabulary comprising 5000-20000 English words.
  • In the tests, a 5000-word vocabulary has been used.
  • A Sennheiser HMD414 microphone has been used, both during the adaptation training and during the tests.
  • The WSJ1 Spoke-3 corpus, which has been defined by the DARPA Spoken Language Program, has a vocabulary comprising 5000 English words.
  • The test set comprised 40×8 = 320 test sentences, uttered by the same ten non-native speakers.
  • The Aurora3 corpus, which has been defined by the European Union-funded SpeechDat-Car project, comprises 2200 Italian connected-digit utterances, divided into training utterances and test utterances. These utterances are affected by different types of in-car noise (high speed on a good road, low speed on a rough road, car stopped with the motor running, and town traffic).
  • The adaptation set used by the Applicant comprised 2951 connected-digit utterances, while the test set comprised 1309 connected-digit utterances.
  • The Consi-12 corpus, which has been defined by the Applicant, has a vocabulary comprising 9325 Italian town names.
  • The adaptation set used by the Applicant comprised 53713 adaptation utterances, while the test set comprised 3917 test utterances.
  • The AppWord corpus, which has been defined by the Applicant, has a vocabulary comprising applicative Italian words such as "avanti" ("forward"), "indietro" ("back"), "fine" ("end"), or the like.
  • The adaptation set used by the Applicant comprised 6189 adaptation utterances, while the test set comprised 3094 test utterances.
  • The Digcon corpus, which has been defined by the Applicant, is a subset of the SpeechDat corpora.
  • The adaptation set used by the Applicant comprised 10998 adaptation utterances, while the test set comprised 1041 test utterances.
  • Table 1 reported below shows the results of the tests. Performance is expressed as word recognition percentage. For each adapted network, the performance is evaluated by referring to the test set coherent with the respective adaptation set. For the generalist neural network, performance is evaluated on all the above-reported test sets.

Table 1 (word recognition percentage)

adaptation method | Consi-12 | Appl. Words | Digcon | Aurora3 | WSJ0 | WSJ1 Spoke-3
none              |   85.4   |    96.2     |  98.6  |  87.9   | 82.8 |    49.7
LIN               |   88.8   |    96.6     |  98.5  |  94.2   | 85.2 |    57.4
LHN               |   90.4   |    97.9     |  99.1  |  95.0   | 86.4 |    70.2
LHN+CT            |   89.9   |    97.7     |  99.0  |  94.6   | 87.4 |    71.6
  • The neural network adapted through the known LIN technique has shown improved performance for each adaptation set, except the adaptation set Digcon.
  • Performance has been further improved by adapting the generalist network according to the first embodiment of the present invention (LHN).
  • Thus, a neural network adapted according to the present invention exhibits better word recognition performance in comparison with neural networks adapted according to the prior art.
  • Table 2 shows the results of a further comparative test of Italian continuous speech recognition for some of the above-cited adaptation sets. Performance is expressed as speech recognition accuracy, which is obtained by subtracting from the recognised-word percentage both the word insertion percentage and the word deletion percentage.
Table 2 (speech recognition accuracy)

adaptation method | Consi-12 (4%) | App. Words (48%) | Digcon (86%) | Aurora3 (86%)
none              |     70.7      |       70.7       |     70.7     |     70.7
LIN               |     63.7      |       57.3       |     23.3     |     -8.6
LHN               |     59.4      |       36.3       |    -47.3     |    -52.1
LHN+CT            |     59.3      |       54.7       |     60.6     |     55.8

(The "none" row is the same generalist network in every column.)
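A one-line worked illustration of the accuracy metric defined above; the percentages are hypothetical, not taken from the tests.

```python
def recognition_accuracy(recognised_pct, insertion_pct, deletion_pct):
    """Speech recognition accuracy: recognised-word percentage minus
    word insertion percentage minus word deletion percentage."""
    return recognised_pct - insertion_pct - deletion_pct

print(recognition_accuracy(90.0, 3.0, 2.0))  # 85.0 (hypothetical values)
```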
  • In this test, the voice signal comprises both phonemes comprised in the adaptation sets and absent class phonemes.
  • The generalist neural network exhibits a speech recognition accuracy equal to 70.7%.
  • In such a situation, the conservative adaptation training advantageously improves the performance.
  • For instance, for the adaptation set Digcon, the speech recognition accuracy increases from -47.3% (LHN) to 60.6% (LHN+CT), while for the adaptation set Aurora3 the speech recognition accuracy increases from -52.1% to 55.8%.
  • Thus, the present invention advantageously makes it possible to obtain, for most of the considered adaptation sets, improved performance in word recognition tests performed through test sets coherent with the respective adaptation sets. Besides, an improvement in speech recognition accuracy can be obtained by performing the conservative adaptation training according to a preferred embodiment of the present invention.


Claims (19)

  1. Method of adapting a multi-layer neural network (NN) of an automatic speech recognition device (ASR), the method comprising the following steps:
    - providing a neural network (NN) comprising an input stage (InS) for storing at least one voice signal sample, an intermediate stage (IntS, IntS1, IntS2) having input connections associated with a first weight matrix (W), and an output stage (OutS) having input connections associated with a second weight matrix (W'), the output stage (OutS) outputting phoneme probabilities;
    - providing a linear stage (LHN) in the neural network (NN) after the intermediate stage (IntS, IntS1, IntS2), the linear stage (LHN) having the same number of nodes as the intermediate stage (IntS, IntS1, IntS2); and
    - training the linear stage (LHN) by means of an adaptation set, the first weight matrix (W) and the second weight matrix (W') being kept unchanged during the training.
  2. Method according to claim 1, wherein the step of training the linear stage (LHN) comprises training the linear stage (LHN) in such a way that the phoneme probability of a phoneme belonging to voice signals not contained in the adaptation set is equal to the phoneme probability of the phoneme calculated by the neural network (NN) before the step of providing a linear stage (LHN).
  3. Method according to claim 2, wherein the step of training the linear stage (LHN) comprises training the linear stage (LHN) in such a way that the phoneme probability of the phoneme corresponding to a voice signal sample of the adaptation set is calculated by subtracting from 1 the phoneme probabilities of all phonemes belonging to voice signals not contained in the adaptation set.
  4. Method according to claim 3, wherein the step of training the linear stage (LHN) comprises training the linear stage (LHN) in such a way that the phoneme probability of the remaining phonemes is set equal to zero.
  5. Method according to any of claims 1 to 4, wherein the step of providing the linear stage (LHN) comprises the step of providing the linear stage (LHN) between the intermediate stage (IntS) and the output stage (OutS).
  6. Method according to any of claims 1 to 4, wherein the step of providing the neural network (NN) comprises the step of providing a neural network (NN) comprising two intermediate stages (IntS1, IntS2), and wherein the step of providing the linear stage (LHN) comprises providing the linear stage (LHN) between the two intermediate stages (IntS1, IntS2).
  7. Method according to any of the preceding claims, wherein the step of training the linear stage (LHN) comprises the step of training the linear stage (LHN) by means of an error back-propagation algorithm.
  8. Method according to any of the preceding claims, further comprising a step of providing an equivalent stage obtained by combining the linear stage (LHN) and either the following intermediate stage (IntS2) or the output stage (OutS).
  9. Computation module for multi-layer neural networks (NN), comprising an input stage (InS) for storing at least one voice signal sample, an intermediate stage (IntS, IntS1, IntS2) having input connections associated with a first weight matrix (W), an output stage (OutS) having input connections associated with a second weight matrix (W'), and a linear stage (LHN) which is designed to be trained by means of an adaptation set, the first weight matrix (W) and the second weight matrix (W') being kept unchanged while the linear stage (LHN) is trained, the output stage (OutS) being designed to output phoneme probabilities, wherein the linear stage (LHN) is provided after the intermediate stage (IntS, IntS1, IntS2), the linear stage (LHN) having the same number of nodes as the intermediate stage (IntS, IntS1, IntS2).
  10. Neural network according to claim 9, wherein the linear stage (LHN) is designed to be trained in such a way that the phoneme probability of a phoneme belonging to voice signals not contained in the adaptation set is equal to the phoneme probability of the phoneme calculated by the neural network (NN) before the provision of a linear stage (LHN).
  11. Neural network according to claim 10, wherein the linear stage (LHN) is designed to be trained in such a way that the phoneme probability of the phoneme corresponding to a voice signal sample of the adaptation set is calculated by subtracting from 1 the phoneme probabilities of all phonemes belonging to voice signals not contained in the adaptation set.
  12. Neural network according to claim 11, wherein the linear stage (LHN) is designed to be trained in such a way that the phoneme probability of the remaining phonemes is set equal to zero.
  13. Neural network according to claim 9 or 12, wherein the linear stage (LHN) is provided between the intermediate stage (IntS) and the output stage (OutS).
  14. Neural network according to claim 9 or 12, wherein the neural network (NN) comprises two intermediate stages (IntS1, IntS2) and the linear stage (LHN) is provided between the two intermediate stages (IntS1, IntS2).
  15. Neural network according to any of claims 9 to 14, wherein the linear stage (LHN) is designed to be trained by means of an error back-propagation algorithm.
  16. Neural network according to any of claims 9 to 15, wherein the neural network (NN) comprises an equivalent stage obtained by combining the linear stage (LHN) and either the following intermediate stage (IntS2) or the output stage (OutS).
  17. Automatic speech recognition device (ASR) comprising a pattern matching block (PM) which comprises a neural network (NN) according to any of claims 9 to 16.
  18. Computer program comprising computer program code means designed to perform all the steps of any of claims 1 to 8 when the program is run on a computer.
  19. Computer-readable medium having a program recorded thereon, the computer-readable medium comprising computer program code means designed to perform all the steps of any of claims 1 to 8 when the program is run on a computer.
EP05747980A 2005-06-01 2005-06-01 Method of adapting a neural network of an automatic speech recognition device Active EP1886303B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2005/052510 WO2006128496A1 (en) 2005-06-01 2005-06-01 Method of adapting a neural network of an automatic speech recognition device

Publications (2)

Publication Number Publication Date
EP1886303A1 (de) 2008-02-13
EP1886303B1 (de) 2009-12-23 (granted)

Family

ID=35643212

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05747980A Active EP1886303B1 (de) 2005-06-01 Method of adapting a neural network of an automatic speech recognition device

Country Status (7)

Country Link
US (1) US8126710B2 (de)
EP (1) EP1886303B1 (de)
AT (1) ATE453183T1 (de)
CA (1) CA2610269C (de)
DE (1) DE602005018552D1 (de)
ES (1) ES2339130T3 (de)
WO (1) WO2006128496A1 (de)


Also Published As

Publication number Publication date
EP1886303A1 (de) 2008-02-13
US8126710B2 (en) 2012-02-28
DE602005018552D1 (de) 2010-02-04
ATE453183T1 (de) 2010-01-15
CA2610269C (en) 2016-02-02
ES2339130T3 (es) 2010-05-17
CA2610269A1 (en) 2006-12-07
US20090216528A1 (en) 2009-08-27
WO2006128496A1 (en) 2006-12-07


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20071211

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

RIN1 Information on inventor provided before grant (corrected)

Inventor name: GEMELLO, ROBERTO, LOQUENDO S.P.A.

Inventor name: MANA, FRANCO, LOQUENDO S.P.A.

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: LOQUENDO S.P.A.

DAX Request for extension of the european patent (deleted)

17Q First examination report despatched

Effective date: 20090202

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602005018552

Country of ref document: DE

Date of ref document: 20100204

Kind code of ref document: P

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20091223

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2339130

Country of ref document: ES

Kind code of ref document: T3

LTIE Lt: invalidation of european patent or patent extension

Effective date: 20091223

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100423

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100323

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100423

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100324

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20100924

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100624

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091223

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20170623

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CZ

Payment date: 20170425

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180601

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20190913

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180602

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230516

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230523

Year of fee payment: 19

Ref country code: DE

Payment date: 20230523

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230523

Year of fee payment: 19