US7181388B2 - Method for compressing dictionary data - Google Patents

Method for compressing dictionary data

Info

Publication number
US7181388B2
US7181388B2 (application US10/292,122 / US29212202A)
Authority
US
United States
Prior art keywords
units
sequence
phoneme
character
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/292,122
Other versions
US20030120482A1 (en)
Inventor
Jilei Tian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Original Assignee
Nokia Oyj
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Oyj filed Critical Nokia Oyj
Assigned to NOKIA CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TIAN, JILEI
Publication of US20030120482A1
Application granted
Publication of US7181388B2

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/12 Speech classification or search using dynamic programming techniques, e.g. dynamic time warping [DTW]
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L 2015/025 Phonemes, fenemes or fenones being the recognition units

Definitions

  • the invention relates to speaker-independent speech recognition, and more precisely to the compression of a pronunciation dictionary.
  • although the pronunciation of many words can be represented by rules, or even models, the pronunciation of some words still cannot be correctly generated by these rules or models.
  • the pronunciation cannot be represented by general pronunciation rules, but each word has a specific pronunciation.
  • speech recognition relies on the use of so-called pronunciation dictionaries in which a written form of each word of the language and the phonetic representation of its pronunciation are stored in a list-like structure.
  • the object of the invention is to provide a more efficient compression method for compressing a pronunciation dictionary.
  • the object of the invention is achieved with a method, electronic devices, a system and a computer program product that are characterized by what is disclosed in the independent claims.
  • the preferred embodiments of the invention are set forth in the dependent claims.
  • the pronunciation dictionary is pre-processed before the compression.
  • the pre-processing can be used together with any method for compressing a dictionary.
  • each entry in the pronunciation dictionary is aligned using a statistical algorithm.
  • a sequence of character units and a sequence of phoneme units are modified to have an equal number of units in the sequences.
  • the aligned sequences of character units and phoneme units are then interleaved so that each phoneme unit is inserted at a predetermined location relative to the corresponding character unit.
  • a sequence of character units is typically a text sequence containing letters.
  • the alphabetical set can be extended to include more letters or symbols than the conventional English alphabet.
  • a sequence of phoneme units represents the pronunciation of the word and it usually contains letters and symbols, e.g. ‘@’ and ‘A:’ in SAMPA (Speech Assessment Methods Phonetic Alphabet) notation.
  • the phonetic alphabet can also contain non-printable characters. Because one phoneme can be represented with more than one letter or symbol, the phonemes are separated by a whitespace character.
  • an electronic device is configured to convert a text string input into a sequence of phoneme units.
  • a pre-processed pronunciation dictionary comprising entries, the entries comprising a first set of units comprising character units and a second set of units comprising phoneme units, wherein the units of the first set and the units of the second set are aligned and interleaved by inserting each phoneme unit at a predetermined location relative to the corresponding character unit, is stored into the memory of the device.
  • a matching entry for the text string input is found from the pre-processed pronunciation dictionary by using the units of the first set of units of the entry from the predetermined locations. From the matching entry, units of the second set of units are selected and concatenated into a sequence of phoneme units. Also, the empty spaces are removed from the sequence of phoneme units.
  • an electronic device is configured to convert a speech information input into a sequence of character units.
  • a pre-processed pronunciation dictionary comprising entries, the entries comprising a first set of units comprising character units and a second set of units comprising phoneme units, wherein the units of the first set and the units of the second set are aligned and interleaved by inserting each phoneme unit at a predetermined location relative to the corresponding character unit, is stored into the memory of the device.
  • Pronunciation models for each entry's phonemic representation are either stored into the memory together with the pronunciation dictionary or created during the process.
  • a matching entry for the speech information is found by comparing the speech information to the pronunciation models and selecting the most corresponding entry. From the matching entry, units of the first set of units are selected and concatenated into a sequence of character units. Finally, the empty spaces are removed from the sequence of character units.
  • One advantage of the invention is that with the described pre-processing, the entropy (H) of the dictionary is lowered.
  • a low entropy rate (H) indicates that a more effective compression can be achieved, since the entropy rate determines the lower limit for compression (the compression ratio with the best possible lossless compression). This enables better compression, and the memory requirement is smaller.
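The entropy bound referred to above can be sketched with a zeroth-order Shannon estimate. This is an illustrative aside rather than part of the patent; the function name and the sample string are assumptions.

```python
from collections import Counter
from math import log2

def entropy_per_symbol(text: str) -> float:
    """Zeroth-order Shannon entropy H = -sum(p_i * log2(p_i)), in bits per symbol."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# H * len(text) is a lower bound on the size (in bits) of any lossless
# encoding under this zeroth-order model; pre-processing aims to lower H.
print(round(entropy_per_symbol("father f A: D @"), 3))
```

Interleaving correlated character and phoneme units makes the symbol statistics more predictable, which lowers this estimate and hence the bound.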
  • the pronunciation dictionary is relatively simple and fast to apply for speech recognition.
  • the HMM-Viterbi algorithm is adapted to be used for the alignment.
  • the HMM-Viterbi algorithm ensures that the alignment is performed in an optimal manner in the statistical sense, and therefore minimizes the leftover entropy of the dictionary entry.
  • a mapping step is added to the pre-processing.
  • the mapping can be done either before or after the alignment.
  • each phoneme unit is mapped into one symbol and instead of the phoneme units being represented by multiple characters, a single symbol is used to denote the phoneme units.
  • the whitespace characters can be removed from the entry, and yet decoding of the interleaved sequence is still possible. The removal of whitespace characters further improves the compression ratio. Additionally, an advantage of the mapping is that the method can be adapted to multiple languages, or even a large mapping table for all the languages in the device can be used.
  • FIG. 1 is a block diagram illustrating a data processing device, which supports the pre-processing and compression of the pronunciation dictionary according to one preferred embodiment of the invention.
  • FIG. 2 is a flow chart of a method according to a preferred embodiment of the invention.
  • FIG. 3 illustrates the use of the HMM algorithm for the alignment of the pronunciation dictionary.
  • FIG. 4 shows the pre-processing steps for one dictionary entry.
  • FIG. 5 is a block diagram illustrating an electronic device, which uses the pre-processed pronunciation dictionary.
  • FIG. 6 is a flow chart illustrating the use of the preprocessed pronunciation dictionary when a text string is converted into a pronunciation model according to a preferred embodiment of the invention.
  • FIG. 7 is a flow chart illustrating the use of the preprocessed pronunciation dictionary when speech information is converted into a sequence of text units according to a preferred embodiment of the invention.
  • FIG. 1 illustrates a data processing device (TE) only for the parts relevant to a preferred embodiment of the invention.
  • the data processing device (TE) can be, for example, a personal computer (PC) or a mobile terminal.
  • the data processing unit (TE) comprises I/O means (I/O), a central processing unit (CPU) and memory (MEM).
  • the memory (MEM) comprises a read-only memory ROM portion and a rewriteable portion, such as a random access memory RAM and FLASH memory.
  • the information used to communicate with different external parties, e.g. a CD-ROM, other devices and the user, is transmitted through the I/O means (I/O) to/from the central processing unit (CPU).
  • the central processing unit provides a pre-processing block (PRE) and a compression block (COM).
  • the functionality of these blocks is typically implemented by executing a software code in a processor, but it can also be implemented with a hardware solution (e.g. an ASIC) or as a combination of these two.
  • the pre-processing block (PRE) provides the pre-processing steps of a preferred embodiment illustrated in detail in FIG. 2 .
  • the compression block (COM) provides the compression of the pronunciation dictionary, for which purpose several different compression methods, e.g. LZ77, LZW or arithmetic coding, can be used.
  • the pre-processing can be combined with any of the other compression methods to improve the compression efficiency.
  • the pronunciation dictionary that needs to be pre-processed and compressed is stored in the memory (MEM).
  • the dictionary can also be downloaded from an external memory device, e.g. from a CD-ROM or a network, using the I/O means (I/O).
  • the pronunciation dictionary comprises entries that, in turn, each include a word in a sequence of character units (text sequence) and in a sequence of phoneme units (phoneme sequence).
  • the sequence of phoneme units represents the pronunciation of the sequence of character units.
  • the representation of the phoneme units is dependent on the phoneme notation system used. Several different phoneme notation systems can be used, e.g. SAMPA and IPA.
  • SAMPA (Speech Assessment Methods Phonetic Alphabet)
  • the International Phonetic Association provides a notational standard, the International Phonetic Alphabet (IPA), for the phonetic representation of numerous languages.
  • a dictionary entry using the SAMPA phoneme notation system could be for example: ‘father f A: D @’.
  • Entropy, denoted by H, is a basic attribute which characterises the data content of a signal. It is possible to find the shortest way to present a signal (compress it) without losing any data; the length of this shortest representation is indicated by the entropy of the signal. Instead of counting the exact entropy value individually for each signal, a method to estimate it has been established by Shannon (see, for example, C. E. Shannon, A Mathematical Theory of Communication, The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October 1948). This will be described briefly in the following.
  • pre-processing of the text is used to lower its entropy.
  • FIG. 2 illustrates a method according to a preferred embodiment of the invention. The method concentrates on the pre-processing of the pronunciation dictionary to lower the entropy rate (H).
  • Each entry is aligned (200), i.e. the text and phoneme sequences are modified in order to have as many phoneme units in the phoneme sequence as there are character units in the text sequence.
  • a letter may correspond to zero, one, or two phonemes.
  • the alignment is obtained by inserting graphemic or phonemic epsilons (nulls) between the letters in the text string, or between the phonemes in the phoneme sequence.
  • graphemic epsilons can be avoided by introducing a short list of pseudophonemes that are obtained by concatenating two phonemes that are known to correspond to a single letter, for example, “x->k s”.
  • the phoneme list includes the pseudophonemes for the letter and the possible phonemic epsilon.
  • the general principle is to insert a graphemic null (defined as epsilon) into the text sequence and/or a phonemic null (also called an epsilon) into the phoneme sequence when needed. Below is the word used above as an example after alignment.
  • the word ‘father’ has 6 character units, and after aligning there are 6 phoneme units in the phoneme sequence: ‘f A: D _ _ @’, where ‘_’ denotes a phonemic epsilon.
  • the aligning can be done in several different ways. According to one embodiment of the invention the alignment is done with the HMM-Viterbi algorithm. The principle of the alignment is illustrated and described in more detail in FIG. 3 .
  • each phoneme used in the phoneme notation system is preferably mapped (202) into a single symbol, for example a one-byte ASCII code.
  • mapping is not necessary to achieve the benefits of the invention, but can further improve them.
  • the mapping can be represented, for example, in a mapping table. For the phonemes in the word used as an example, only ‘A:’ needs to be mapped to a new symbol, e.g. ‘A’; the phonemes ‘f’, ‘D’ and ‘@’ already consist of a single symbol.
  • the spaces between the units can be removed. Also the space between the text sequence and the mapped and aligned phoneme sequence can be removed because there is an equal number of units in both sequences and it is clear which characters belong to the text and which to the phonetic representation.
  • Mapping the phoneme units to single symbols ( 202 ) is an important step for interleaving, since the whitespace characters can be avoided. Mapping also further enhances the end result in itself, since single characters take less space compared to, for example, two-character combinations, and the correlation to the corresponding text character is increased.
  • since the order of aligning (200) and mapping (202) does not affect the end result, the mapping (202) can be carried out before aligning as well.
  • the mapping table is only dependent on the phoneme notation method used in the pronunciation dictionary. It can be implemented to be language-independent, so that different systems or implementations are not needed for different dialects or languages. If a plurality of pronunciation dictionaries using different phoneme notation methods were used, a separate mapping table would be needed for each notation method.
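The one-symbol mapping and its reverse can be sketched as a pair of tables. The SAMPA-to-symbol pairs below are illustrative assumptions (the patent does not reproduce its mapping table), chosen to be consistent with the ‘father’ example.

```python
# Hypothetical mapping table: multi-character SAMPA phoneme units -> one symbol.
SAMPA_TO_SYMBOL = {"f": "f", "A:": "A", "D": "D", "@": "@", "_": "_"}
# The reversed mapping, used when decoding an entry back to SAMPA notation.
SYMBOL_TO_SAMPA = {symbol: phoneme for phoneme, symbol in SAMPA_TO_SYMBOL.items()}

def map_phonemes(phonemes):
    """Replace each (possibly multi-character) phoneme unit with one symbol."""
    return [SAMPA_TO_SYMBOL[p] for p in phonemes]

print(map_phonemes(["f", "A:", "D", "_", "_", "@"]))  # ['f', 'A', 'D', '_', '_', '@']
```

Because every mapped unit is a single symbol, the whitespace separators between phoneme units become unnecessary, which is what enables the later interleaving step.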
  • the entries are interleaved (204). Since the character->phoneme pattern has a higher probability (lower entropy) than the consecutive letter pattern, especially if the alignment has been carried out optimally, redundancy is increased. This is done by inserting the pronunciation phonemes between the letters of the word to form a single word. In other words, each phoneme unit is inserted next to its corresponding character unit.
  • after aligning (200), the text sequence and the phoneme sequence have an equal number of symbols and each character-phoneme pair is easy to find. For example, interleaving ‘father’ with the mapped and aligned phoneme sequence gives ‘ffaAtDh_e_r@’.
  • the compression ( 206 ) of the preprocessed phoneme dictionary can be carried out.
  • FIG. 3 illustrates the grapheme HMM for aligning the textual and phonetic representations of an entry.
  • HMM Hidden Markov Model
  • in an HMM, the underlying stochastic process is not directly observable (it is hidden) but can be seen only through another set of stochastic processes that produce the sequence of observations.
  • the HMM is composed of hidden states with transition between the states.
  • the mathematical representation includes three items: the state transition probabilities between the states, the observation probability of each state and the initial state distribution. Given the HMM and an observation sequence, the Viterbi algorithm gives the observation-state alignment by following the best path.
  • the HMM can be used to solve the problem of optimal alignment of an observed sequence to the states of the Hidden Markov Model.
  • the Viterbi algorithm can be used in connection with the HMM to find the optimal alignment. More information about the Hidden Markov Models and their applications can be found e.g. from the book “Speech Recognition System Design and Implementation Issues”, pp. 322–342.
  • the penalty values are initialised with zero if the phoneme f can be found in the list of the allowed phonemes of the letter l; otherwise they are initialised with large positive values.
  • the dictionary is aligned in two steps. In the first step, all possible alignments are generated for each entry in the dictionary. Based on all the aligned entries, the penalty values are then re-scored. In the second step, only a single best alignment is found for each entry.
  • the grapheme HMM has entry (ES), exit (EXS) and letter states (S1, S2 and S3).
  • the letters that may map to pseudophonemes are handled by having a duration state (EPS).
  • the states 1 to 3 (S1, S2, S3) are the states that correspond to the letters in the word.
  • State 2 (S2) corresponds to a letter that may produce a pseudophoneme. Skips from all previous states to the current state are allowed in order to support the phonemic epsilons.
  • Each state and the duration state hold a token that contains a cumulative penalty (as a sum of logarithmic probabilities) of aligning the phoneme sequence against the grapheme HMM and the state sequences that correspond to the cumulative score.
  • the phoneme sequence is aligned against letters by going through the phoneme sequence from the beginning to the end one phoneme at a time.
  • token passing is carried out. As the tokens pass from one state to another, they gather the penalty from each state. Token passing may also involve splitting tokens and combining or selecting tokens to enter the next state. The token that in the end has the lowest cumulative penalty is found over all the states of the HMM. Based on the state sequence of the token, the alignment between the letters of the word and the phonemes can be determined.
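The token-passing idea above can be caricatured with a small dynamic program in the Viterbi style: each letter either consumes one phoneme or emits a phonemic epsilon (‘_’), and the lowest-penalty path is traced back. The allowed letter/phoneme pairs and the penalty values here are illustrative assumptions; the patent's grapheme HMM additionally has entry/exit states, duration states for pseudophonemes and re-scored penalties, all omitted from this sketch.

```python
def align(letters, phonemes, allowed, eps_cost=1.0):
    """Align phonemes to letters, inserting phonemic epsilons where needed."""
    BIG = 1e9  # stand-in for the "large positive" penalty of a disallowed pair
    n, m = len(letters), len(phonemes)
    # cost[i][j]: best cumulative penalty after aligning i letters to j phonemes
    cost = [[BIG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(m + 1):
            # option 1: letter i-1 produces an epsilon (consumes no phoneme)
            if cost[i - 1][j] + eps_cost < cost[i][j]:
                cost[i][j] = cost[i - 1][j] + eps_cost
                back[i][j] = (i - 1, j, "_")
            # option 2: letter i-1 consumes phoneme j-1
            if j > 0:
                pen = 0.0 if (letters[i - 1], phonemes[j - 1]) in allowed else BIG
                if cost[i - 1][j - 1] + pen < cost[i][j]:
                    cost[i][j] = cost[i - 1][j - 1] + pen
                    back[i][j] = (i - 1, j - 1, phonemes[j - 1])
    # trace the lowest-penalty token back to recover the aligned sequence
    out, i, j = [], n, m
    while back[i][j] is not None:
        i, j, sym = back[i][j]
        out.append(sym)
    return out[::-1]

ALLOWED = {("f", "f"), ("a", "A:"), ("t", "D"), ("r", "@")}
print(align("father", ["f", "A:", "D", "@"], ALLOWED))
# ['f', 'A:', 'D', '_', '_', '@']
```

On the example entry this reproduces the aligned sequence with two phonemic epsilons for the silent letters ‘h’ and ‘e’.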
  • FIG. 4 illustrates in more detail the pre-processing of the entry used as an example according to a preferred embodiment of the invention.
  • the original entry ( 400 ) has the two parts, a text sequence ‘father’ and a phoneme sequence ‘f A: D @’. These two sequences are separated with a whitespace character and also the phoneme units are separated with whitespace characters.
  • mapping (404) of the phoneme units into a one-symbol representation changes only the phoneme sequence. After mapping, the phoneme sequence of the example word is ‘f A D _ _ @’.
  • the last step is interleaving ( 408 ) and the example entry is ‘ffaAtDh_e_r@’. Now the entry can be processed further, for instance, it can be compressed.
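The pre-processing of the example entry can be sketched end to end; the mapping table (only ‘A:’ -> ‘A’) and the ‘_’ epsilon symbol are assumptions consistent with the example above.

```python
def preprocess(text, aligned_phonemes, mapping):
    """Map aligned phoneme units to single symbols, then interleave (one sketch)."""
    symbols = [mapping.get(p, p) for p in aligned_phonemes]  # mapping step
    assert len(text) == len(symbols), "sequences must be aligned first"
    # interleaving: each phoneme symbol is placed right after its character
    return "".join(char + sym for char, sym in zip(text, symbols))

entry = preprocess("father", ["f", "A:", "D", "_", "_", "@"], {"A:": "A"})
print(entry)  # ffaAtDh_e_r@
```

The result matches the interleaved example entry, with no whitespace left between units.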
  • the experiment was carried out using the Carnegie Mellon University Pronouncing Dictionary, which is a pronunciation dictionary for North American English that contains more than 100,000 words and their transcriptions.
  • the performance was evaluated first by using typical dictionary-based compression methods, LZ77 and LZW, and a statistical based compression method, the 2nd order arithmetic compression.
  • the performance was then tested with the preprocessing method together with the compression methods (LZ77, LZW and arithmetic).
  • in Table 1, the results, given in kilobytes, show that the preprocessing method performs better in all cases. In general, it can be used with any compression algorithm.
  • the pre-processing improves the compression with all compression methods. Combined with the LZ77 compression method, the pre-processing improved the compression by over 20%; combined with the LZW method or with the arithmetic method, the improvement was even larger, about 40%.
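The measurement setup can be imitated with zlib (a DEFLATE codec from the LZ77 family) on a toy three-entry dictionary; the entries and any size difference here are illustrative only and do not reproduce the patent's CMU-dictionary figures.

```python
import zlib

# Toy dictionary: plain entries vs. their aligned, mapped, interleaved forms.
plain = "\n".join(["father f A: D @", "rather r A: D @", "gather g { D @"])
interleaved = "\n".join(["ffaAtDh_e_r@", "rraAtDh_e_r@", "gga{tDh_e_r@"])

# Compare compressed sizes at maximum compression level.
size_plain = len(zlib.compress(plain.encode(), 9))
size_pre = len(zlib.compress(interleaved.encode(), 9))
print(size_plain, size_pre)  # compressed sizes in bytes
```

On a dictionary this small the compressed sizes are dominated by codec overhead; the reported gains apply to full-sized dictionaries such as the CMU one.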
  • the invention can be applied to any general-purpose dictionary that is used in speech recognition and speech synthesis or all the applications when a pronunciation dictionary needs to be stored with efficient memory usage. It is also possible to apply the invention to the compression of any other lists comprising groups of textual entries that have a high correlation on the character level, for example, common dictionaries showing all the forms of a word and spell-checker programs.
  • FIG. 5 illustrates an electronic device (ED) only for the parts relevant to a preferred embodiment of the invention.
  • the electronic device (ED) can be e.g. a PDA device, a mobile terminal, a personal computer (PC) or even any accessory device intended to be used with these, e.g. an intelligent head-set or a remote control device.
  • the electronic device (ED) comprises I/O means (IO), a central processing unit (PRO) and memory (ME).
  • the memory (ME) comprises a read-only memory ROM portion and a rewriteable portion, such as a random access memory RAM and FLASH memory.
  • the information used for communicating with different external parties, e.g. other devices and the user, is transmitted through the I/O means (IO) to/from the central processing unit (PRO).
  • a pre-processed pronunciation dictionary can be downloaded from the data processing device (TE) into the electronic device (ED) through the I/O means (IO), for example, as a download from the network. The dictionary is then stored into the memory (ME) for further usage.
  • the steps shown in FIGS. 6 and 7 may be implemented with a computer program code executed in the central processing unit (PRO) of the electronic device (ED).
  • the computer program can be loaded into the central processing unit (PRO) through the I/O means (IO).
  • the implementation can also be done with a hardware solution (e.g. ASIC) or with a combination of these two.
  • the phoneme dictionary stored in the memory (ME) of the device (ED) is pre-processed as described in FIG. 2 .
  • the central processing unit (PRO) of the electronic device (ED) receives a text string input that needs to be converted into a pronunciation model.
  • the input text string may be for instance a name the user has added using I/O means (IO) to a contact database of the electronic device (ED).
  • the character units of the entry can be found by selecting odd units, starting from the first.
  • the comparison is made with the original character string of the entry, and therefore empty spaces, e.g. graphemic epsilons, are ignored.
  • when the character units exactly match the units of the input text string, the matching entry is found.
  • the phoneme units of the entry are selected ( 602 ). Because of the interleaving (done according to the preferred embodiment described in FIG. 2 ), every second unit of the entry string is used. In order to determine the phoneme units, the selection is started from the second unit. The selected units can then be concatenated to create the sequence of phonemic units.
  • the sequence of phoneme units may include empty spaces, e.g. phonemic epsilons.
  • the empty spaces are removed in order to create a sequence consisting only of phonemes ( 604 ).
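The selection and epsilon-removal steps can be sketched for an interleaved entry, assuming the conventions of the example (characters at even positions, phoneme symbols at odd positions, ‘_’ as the epsilon to remove); the function name is hypothetical.

```python
def split_entry(entry, epsilon="_"):
    """Recover the word and its phoneme symbols from an interleaved entry."""
    chars = entry[0::2]   # every second unit, starting from the first
    phones = entry[1::2]  # every second unit, starting from the second
    # drop the empty spaces (graphemic/phonemic epsilons)
    return chars.replace(epsilon, ""), phones.replace(epsilon, "")

word, pron = split_entry("ffaAtDh_e_r@")
print(word, pron)  # father fAD@
```

The same routine serves both directions: text-to-phoneme conversion uses the second value, and speech-to-text conversion (FIG. 7) uses the first.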
  • a reversed mapping is needed ( 606 ).
  • the reversed mapping can be carried out using a similar mapping table as the one used during the pre-processing, but in a reverse order.
  • This step changes the first representation method, e.g. one character representation, of the phonemic units into the second representation method, e.g. SAMPA, that is used in the system.
  • a pronunciation model of the sequence is created.
  • a pronunciation model is created for each phoneme using e.g. an HMM algorithm.
  • the phoneme pronunciation models are stored in the memory (ME).
  • a pronunciation model for each phoneme of the phoneme sequence is retrieved from the memory ( 608 ).
  • These phoneme models are then concatenated ( 610 ) and the pronunciation model for the phoneme sequence is created.
  • the converting of a text string input into a pronunciation model described above can also be distributed between two electronic devices.
  • the pre-processed dictionary is stored in the first electronic device, e.g. in the network, where the finding of a matching entry ( 600 ) is performed.
  • the matching entry is then distributed to the second electronic device, e.g. a mobile terminal, where the rest of the process (steps 602 – 610 ) is performed.
  • FIG. 7 illustrates one preferred embodiment of converting speech information into a sequence of character units in an electronic device (ED) that utilises a pre-processed pronunciation dictionary.
  • the central processing unit (PRO) of the electronic device (ED) receives a speech information input through the I/O means (IO). This speech information needs to be converted into a sequence of character units for further usage, e.g. to show it as text on the display or to compare it with a text string of a pre-determined speech command of a speech-controlled device.
  • Finding a matching entry ( 702 ) is based on comparing the input speech information to the pronunciation models of each entry in the pronunciation dictionary. Therefore, before the comparison, the pronunciation of each entry is modelled ( 700 ).
  • the models are created in the electronic device (ED).
  • since the phoneme dictionary is already interleaved and aligned, the modelling can be done as described in FIG. 6 , following the steps 602 – 610 .
  • when the modelling is done in the electronic device (ED), the need for processing capacity and working memory is increased. On the other hand, the memory consumption for storing the pronunciation dictionary can be kept low.
  • the models are created before the pre-processing of the pronunciation dictionary in the data processing device (TE).
  • the modelling can be done as described in FIG. 6 , following the steps 608 and 610 . Because the modelling is done before the pre-processing and the dictionary is not yet interleaved, aligned or mapped, the steps 602 – 606 are not needed.
  • the pronunciation model is then stored into the memory (MEM) together with the entry.
  • when the dictionary is transferred to the electronic device (ED), the models are transferred as well. In this solution, less processing capacity and working memory are needed for converting speech information into a text sequence, but the memory consumption of the storage memory (ME) is increased.
  • the finding of a matching entry is done using the input speech information and the pronunciation models of the entries stored in the memory (ME).
  • the speech information is compared with each entry, and a probability of how well the input speech information matches each entry's pronunciation model is computed. After computing the probabilities, the matching entry can be found by selecting the entry with the highest probability.
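Selecting the most probable entry reduces to an argmax over per-entry scores. The scorer below is a stand-in (the patent leaves the likelihood computation to the pronunciation models), so both the scoring function and the sample commands are assumptions.

```python
def find_match(speech, entries, score):
    """Return the entry whose pronunciation model scores highest for the input."""
    return max(entries, key=lambda entry: score(speech, entry))

def toy_score(speech, entry):
    # Stand-in likelihood: count positionally matching characters.
    return sum(a == b for a, b in zip(speech, entry))

commands = ["CALL", "MESSAGE", "CANCEL"]
print(find_match("CALL", commands, toy_score))  # CALL
```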
  • the character units are then selected from the matching entry ( 704 ). Because of the interleaving, done as described in FIG. 2 , every second unit of the entry string is used. The selecting must start from the first unit to obtain the character units. These selected units can then be concatenated to form a sequence of graphemic units.
  • the sequence of the graphemic units may include empty spaces, e.g. graphemic epsilons.
  • the empty spaces are removed ( 706 ).
  • An electronic device, e.g. a mobile phone with a car user interface, has speaker-independent voice recognition for voice commands.
  • Each voice command is an entry in the pronunciation dictionary.
  • the user wants to make a phone call while driving.
  • when the voice recognition is active, the user says ‘CALL’.
  • the phone receives the voice command with a microphone and transmits the speech information through the I/O means to the central processing unit.
  • the central processing unit converts the speech input into a text sequence as described in FIG. 7 .
  • the text sequence is transmitted through the I/O means to the display to give the user feedback of what the device is doing. Besides the text on the screen, the device also gives audio feedback.
  • the pronunciation model of the matching entry, which was created as a part of the speech-to-text conversion process, is transferred through the I/O means to the loudspeaker.
  • the phone then makes a phone call to the number that the user has selected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Machine Translation (AREA)
  • Document Processing Apparatus (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Auxiliary Devices For Music (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention relates to pre-processing of a pronunciation dictionary for compression in a data processing device, the pronunciation dictionary comprising at least one entry, the entry comprising a sequence of character units and a sequence of phoneme units. According to one aspect of the invention the sequence of character units and the sequence of phoneme units are aligned using a statistical algorithm. The aligned sequence of character units and aligned sequence of phoneme units are interleaved by inserting each phoneme unit at a predetermined location relative to the corresponding character unit.

Description

BACKGROUND OF THE INVENTION
The invention relates to speaker-independent speech recognition, and more precisely to the compression of a pronunciation dictionary.
Different speech recognition applications have been developed during recent years for instance for car user interfaces and mobile terminals, such as mobile phones, PDA devices and portable computers. Known methods for mobile terminals include methods for calling a particular person by saying aloud his/her name into the microphone of the mobile terminal and by setting up a call to the number according to the name said by the user. However, present speaker-dependent methods usually require that the speech recognition system is trained to recognize the pronunciation for each name. Speaker-independent speech recognition improves the usability of a speech-controlled user interface, because the training stage can be omitted. In speaker-independent name selection, the pronunciation of names can be stored beforehand, and the name spoken by the user can be identified with the pre-defined pronunciation, such as a phoneme sequence. Although in many languages pronunciation of many words can be represented by rules, or even models, the pronunciation of some words can still not be correctly generated by these rules or models. However, in many languages, the pronunciation cannot be represented by general pronunciation rules, but each word has a specific pronunciation. In these languages, speech recognition relies on the use of so-called pronunciation dictionaries in which a written form of each word of the language and the phonetic representation of its pronunciation are stored in a list-like structure.
In mobile phones the memory size is often limited for reasons of cost and hardware size. This imposes limitations also on speech recognition applications. In a device capable of having multiple user interface languages, the speaker-independent speech recognition solution often uses pronunciation dictionaries. Because a pronunciation dictionary is usually large, e.g. 37 KB for two thousand names, it needs to be compressed for storage. Broadly speaking, most text compression methods fall into two classes: dictionary-based and statistics-based. There are several different implementations of dictionary-based compression, e.g. LZ77/78 and LZW (Lempel-Ziv-Welch). By combining a statistical method, e.g. arithmetic coding, with powerful modelling techniques, a better performance can be achieved than with dictionary-based methods alone. However, the problem with the statistics-based method is that it requires a large working memory (buffer) during the decompression process. Therefore this solution is not suitable for use in small portable electronic devices such as mobile terminals.
Although the existing compression methods are good in general, the compression of the pronunciation dictionaries is not efficient enough for portable devices.
BRIEF DESCRIPTION OF THE INVENTION
The object of the invention is to provide a more efficient compression method for compressing a pronunciation dictionary. The object of the invention is achieved with a method, electronic devices, a system and a computer program product that are characterized by what is disclosed in the independent claims. The preferred embodiments of the invention are set forth in the dependent claims.
According to a first aspect of the invention, the pronunciation dictionary is pre-processed before the compression. The pre-processing can be used together with any method for compressing a dictionary. In the pre-processing each entry in the pronunciation dictionary is aligned using a statistical algorithm. During the alignment, a sequence of character units and a sequence of phoneme units are modified to have an equal number of units in the sequences. The aligned sequences of character units and phoneme units are then interleaved so that each phoneme unit is inserted at a predetermined location relative to the corresponding character unit.
A sequence of character units is typically a text sequence containing letters. Depending on the language, the alphabetical set can be extended to include more letters or symbols than the conventional English alphabet.
A sequence of phoneme units represents the pronunciation of the word and it usually contains letters and symbols, e.g. ‘@’, ‘A:’, ‘{’ in SAMPA (Speech Assessment Methods Phonetic Alphabet) notation. The phonetic alphabet can also contain non-printable characters. Because one phoneme can be represented with more than one letter or symbol, the phonemes are separated by a whitespace character.
According to a second aspect of the invention, an electronic device is configured to convert a text string input into a sequence of phoneme units. A pre-processed pronunciation dictionary comprising entries, the entries comprising a first set of units comprising character units and a second set of units comprising phoneme units, wherein the units of the first set and the units of the second set are aligned and interleaved by inserting each phoneme unit at a predetermined location relative to the corresponding character unit, is stored into the memory of the device. A matching entry for the text string input is found from the pre-processed pronunciation dictionary by using the units of the first set of units of the entry from the predetermined locations. From the matching entry, units of the second set of units are selected and concatenated into a sequence of phoneme units. The empty spaces are also removed from the sequence of phoneme units.
According to a third aspect of the invention, an electronic device is configured to convert a speech information input into a sequence of character units. A pre-processed pronunciation dictionary comprising entries, the entries comprising a first set of units comprising character units and a second set of units comprising phoneme units, wherein the units of the first set and the units of the second set are aligned and interleaved by inserting each phoneme unit at a predetermined location relative to the corresponding character unit, is stored into the memory of the device. Pronunciation models for each entry's phonemic representation are either stored into the memory together with the pronunciation dictionary or created during the process. A matching entry for the speech information is found by comparing the speech information to the pronunciation models and selecting the most closely corresponding entry. From the matching entry, units of the first set of units are selected and concatenated into a sequence of character units. Finally, the empty spaces are removed from the sequence of character units.
One advantage of the invention is that with the described pre-processing, the entropy (H) of the dictionary is lowered. According to information theory, a low entropy rate (H) indicates that a more effective compression can be achieved, since the entropy rate determines the lower limit for compression (the compression ratio with the best possible non-lossy compression). This enables better compression, and the memory requirement is smaller. Furthermore, the pronunciation dictionary is relatively simple and fast to apply for speech recognition.
In one embodiment of the invention the HMM-Viterbi algorithm is adapted to be used for the alignment. The HMM-Viterbi algorithm ensures that the alignment is performed in a statistically optimal manner, and therefore minimizes the leftover entropy of the dictionary entry.
In another embodiment of the invention a mapping step is added to the pre-processing. The mapping can be done either before or after the alignment. In this step, each phoneme unit is mapped into one symbol and instead of the phoneme units being represented by multiple characters, a single symbol is used to denote the phoneme units. By using the mapping technique, the whitespace characters can be removed from the entry, and yet decoding of the interleaved sequence is still possible. The removal of whitespace characters further improves the compression ratio. Additionally, an advantage of the mapping is that the method can be adapted to multiple languages, or even a large mapping table for all the languages in the device can be used.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, the invention will be described in further detail by means of preferred embodiments and with reference to the accompanying drawings, in which
FIG. 1 is a block diagram illustrating a data processing device, which supports the pre-processing and compression of the pronunciation dictionary according to one preferred embodiment of the invention;
FIG. 2 is a flow chart of a method according to a preferred embodiment of the invention;
FIG. 3 illustrates the use of the HMM algorithm for the alignment of the pronunciation dictionary;
FIG. 4 shows the pre-processing steps for one dictionary entry;
FIG. 5 is a block diagram illustrating an electronic device, which uses the pre-processed pronunciation dictionary;
FIG. 6 is a flow chart illustrating the use of the preprocessed pronunciation dictionary when a text string is converted into a pronunciation model according to a preferred embodiment of the invention; and
FIG. 7 is a flow chart illustrating the use of the preprocessed pronunciation dictionary when speech information is converted into a sequence of text units according to a preferred embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a data processing device (TE) only for the parts relevant to a preferred embodiment of the invention. The data processing device (TE) can be, for example, a personal computer (PC) or a mobile terminal. The data processing unit (TE) comprises I/O means (I/O), a central processing unit (CPU) and memory (MEM). The memory (MEM) comprises a read-only memory ROM portion and a rewriteable portion, such as a random access memory RAM and FLASH memory. The information used to communicate with different external parties, e.g. a CD-rom, other devices and the user, is transmitted through the I/O means (I/O) to/from the central processing unit (CPU). The central processing unit (CPU) provides a pre-processing block (PRE) and a compression block (COM). The functionality of these blocks is typically implemented by executing a software code in a processor, but it can also be implemented with a hardware solution (e.g. an ASIC) or as a combination of these two. The pre-processing block (PRE) provides the pre-processing steps of a preferred embodiment illustrated in detail in FIG. 2. The compression block (COM) provides the compression of the pronunciation dictionary, for which purpose several different compression methods, e.g. LZ77, LZW or arithmetic coding, can be used. The pre-processing can be combined with any of the other compression methods to improve the compression efficiency.
The pronunciation dictionary that needs to be pre-processed and compressed is stored in the memory (MEM). The dictionary can also be downloaded from an external memory device, e.g. from a CD-ROM or a network, using the I/O means (I/O). The pronunciation dictionary comprises entries that, in turn, each include a word in a sequence of character units (text sequence) and in a sequence of phoneme units (phoneme sequence). The sequence of phoneme units represents the pronunciation of the sequence of character units. The representation of the phoneme units is dependent on the phoneme notation system used. Several different phoneme notation systems can be used, e.g. SAMPA and IPA. SAMPA (Speech Assessment Methods Phonetic Alphabet) is a machine-readable phonetic alphabet. The International Phonetic Association provides a notational standard, the International Phonetic Alphabet (IPA), for the phonetic representation of numerous languages. A dictionary entry using the SAMPA phoneme notation system could be for example:
Text Sequence      Phoneme Sequence      Entry
father             f A: D @              father f A: D @
Entropy, denoted by H, is a basic attribute, which characterises the data content of the signal. It is possible to find the shortest way to present a signal (compress it) without losing any data. The length of the shortest representation is indicated by the entropy of the signal. Instead of counting the exact entropy value individually for each signal, a method to estimate it has been established by Shannon (see, for example, C. E. Shannon, A Mathematical Theory of Communication, The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October, 1948). This will be described briefly in the following.
Let P(lj| li) be the conditional probability that the present character is the jth letter in the alphabet, given that the previous character is the ith letter, and P(li) the probability that the previous character is the ith letter of the alphabet. The entropy rate H2 of the second order statistics is
H_2 = -\sum_{i=1}^{m} P(l_i) \cdot \sum_{j=1}^{m} P(l_j \mid l_i) \cdot \log_2 P(l_j \mid l_i)    (1)
The entropy rate H in a general case is given by
H = \lim_{n \to \infty} -\frac{1}{n} \sum_{B_n} p(B_n) \cdot \log_2 p(B_n)    (2)
where B_n represents the first n characters of the signal. It is virtually impossible to calculate the entropy rate exactly according to the above equation (2). Using the prediction method of equation (1), it is possible to estimate that the entropy rate of English text over a 27-character alphabet is approximately 2.3 bits/character.
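The second-order estimate of equation (1) is straightforward to compute from pair statistics. The following is a minimal sketch (the function name and the sample string are illustrative, not from the patent), using the identity P(l_i) · P(l_j|l_i) = P(l_i, l_j):

```python
from collections import Counter
from math import log2

def second_order_entropy(text):
    """Estimate the entropy rate H2 (bits/character) from second-order
    statistics, as in equation (1): P(li) * P(lj|li) = P(li, lj)."""
    pairs = Counter(zip(text, text[1:]))   # counts of adjacent character pairs
    singles = Counter(text[:-1])           # counts of the preceding character
    n = len(text) - 1                      # total number of pairs
    h2 = 0.0
    for (prev, _cur), c in pairs.items():
        p_joint = c / n                    # P(li, lj)
        p_cond = c / singles[prev]         # P(lj | li)
        h2 -= p_joint * log2(p_cond)
    return h2

# A highly regular string has a low second-order entropy rate.
print(second_order_entropy("ffaAtDh_e_r@" * 50))
```

A perfectly predictable sequence such as 'ababab...' yields H2 = 0, which illustrates why lowering the entropy of the dictionary before compression pays off.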
To improve the compression of a pronunciation dictionary, pre-processing of the text is used to lower its entropy.
FIG. 2 illustrates a method according to a preferred embodiment of the invention. The method concentrates on the pre-processing of the pronunciation dictionary to lower the entropy rate (H).
Each entry is aligned (200), i.e. the text and phoneme sequences are modified in order to have as many phoneme units in the phoneme sequence as there are character units in the text sequence. In the English language, for example, a letter may correspond to zero, one, or two phonemes. The alignment is obtained by inserting graphemic or phonemic epsilons (nulls) between the letters in the text string, or between the phonemes in the phoneme sequence. The use of graphemic epsilons can be avoided by introducing a short list of pseudophonemes that are obtained by concatenating two phonemes that are known to correspond to a single letter, for example, “x->k s”. In order to align the entries, the set of allowed phonemes has to be defined for each letter. The phoneme list includes the pseudophonemes for the letter and the possible phonemic epsilon. The general principle is to insert a graphemic null (defined as epsilon) into the text sequence and/or a phonemic null (also called an epsilon) into the phoneme sequence when needed. Below is the word used above as an example after alignment.
Text Sequence      Phoneme Sequence      Aligned Entry
father             f A: D @              father f A: D ε ε @
Here, the word ‘father’ has 6 units, and after aligning there are 6 phonemes in the phoneme sequence: ‘f A: D ε ε @’. The aligning can be done in several different ways. According to one embodiment of the invention the alignment is done with the HMM-Viterbi algorithm. The principle of the alignment is illustrated and described in more detail in FIG. 3.
After aligning (200) each phoneme used in the phoneme notation system is preferably mapped (202) into a single symbol, for example, one byte ASCII code. However, mapping is not necessary to achieve the benefits of the invention, but can further improve them. The mapping can be represented, for example, in a mapping table. Below is an example of how the phonemes in the word used as an example could be mapped:
Phoneme Symbol    ASCII number    ASCII symbol
f                 0x66            f
A:                0x41            A
D                 0x44            D
@                 0x40            @
ε                 0x5F            _
By representing each phoneme with one symbol, the two characters representing one phoneme unit can be replaced with just one 8-bit ASCII symbol. As a result, the example is:
Phoneme Sequence    Mapped Sequence (ASCII numbers)    Mapped Sequence (symbols)
f A: D ε ε @        0x66 0x41 0x44 0x5F 0x5F 0x40      f A D _ _ @
After representing the phonemes with one symbol the spaces between the units can be removed. Also the space between the text sequence and the mapped and aligned phoneme sequence can be removed because there is an equal number of units in both sequences and it is clear which characters belong to the text and which to the phonetic representation.
Aligned and Mapped Entry    fatherfAD__@
Mapping the phoneme units to single symbols (202) is an important step for interleaving, since the whitespace characters can be avoided. Mapping also enhances the end result in itself, since single characters take less space than, for example, two-character combinations, and the correlation with the corresponding text character is increased. The order of aligning (200) and mapping (202) does not affect the end result; the mapping (202) can be carried out before aligning as well.
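As a concrete illustration of the mapping step (202), the following sketch applies a small mapping table covering only the phonemes of the example word; the table contents and the 'eps' marker used for the phonemic epsilon are assumptions for illustration, not the patent's actual table:

```python
# Hypothetical mapping table for the handful of SAMPA symbols in the
# example entry; a real system would cover the full phoneme inventory.
PHONEME_TO_SYMBOL = {"f": "f", "A:": "A", "D": "D", "@": "@", "eps": "_"}

def map_phonemes(aligned_phonemes):
    """Map each (possibly multi-character) phoneme unit to a single
    symbol, so the whitespace separators become unnecessary."""
    return "".join(PHONEME_TO_SYMBOL[p] for p in aligned_phonemes)

print(map_phonemes(["f", "A:", "D", "eps", "eps", "@"]))  # fAD__@
```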
The mapping table depends only on the phoneme notation method used in the pronunciation dictionary. It can be implemented to be language-independent so that different systems or implementations are not needed for different dialects or languages. If a plurality of pronunciation dictionaries using different phoneme notation methods were used, a separate mapping table would be needed for each phoneme notation method.
After aligning (200) and mapping (202), the entries are interleaved (204). Since the character-phoneme pattern has a higher probability (lower entropy) than the consecutive-letter pattern, especially if the alignment has been carried out optimally, interleaving increases the redundancy. The interleaving is done by inserting the pronunciation phonemes between the letters of the word to form a single word; in other words, each phoneme unit is inserted next to its corresponding character unit. After aligning (200), the text sequence and the phoneme sequence have an equal number of symbols, so the character-phoneme pairs are easy to find. For example:
Text Sequence      Phoneme Sequence      Interleaved Entry
father             fAD__@                ffaAtDh_e_r@
where italic and bold symbols stand for pronunciation phonemes. It is obvious from the example that composing and decomposing an entry between the original and new formats are uniquely defined, since the text sequence and the phoneme sequence, that are interleaved, contain an equal number of units.
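Because the aligned sequences have equal length, the interleaving step (204) reduces to pairing the i-th character with the i-th mapped phoneme symbol. A minimal sketch:

```python
def interleave(text, mapped_phonemes):
    """Interleave equal-length character and phoneme-symbol sequences,
    placing each phoneme symbol right after its corresponding character."""
    assert len(text) == len(mapped_phonemes)
    return "".join(c + p for c, p in zip(text, mapped_phonemes))

print(interleave("father", "fAD__@"))  # ffaAtDh_e_r@
```

De-interleaving is equally unambiguous: reading every second symbol recovers either the text or the phoneme sequence.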
After the pre-processing, the compression (206) of the preprocessed phoneme dictionary can be carried out.
FIG. 3 illustrates the grapheme HMM for aligning the textual and phonetic representations of an entry.
The Hidden Markov Model (HMM) is a well-known and widely used statistical method that has been applied, for example, in speech recognition. These models are also referred to as Markov sources or probabilistic functions of the Markov chain. The underlying assumption of the HMM is that the signal can be well characterized as a parametric random process, and that the parameters of the stochastic process can be determined or estimated in a precise, well-defined manner. HMMs can be classified into discrete models and continuous models according to whether the observable events assigned to each state are discrete, such as codewords, or continuous. In either case, the observation is probabilistic. The underlying stochastic process is not directly observable (it is hidden) and can be seen only through another set of stochastic processes that produce the sequence of observations. The HMM is composed of hidden states with transitions between the states. The mathematical representation includes three items: the state transition probabilities between the states, the observation probability of each state, and the initial state distribution. Given an HMM and an observation sequence, the Viterbi algorithm finds the alignment of the observations to the states by following the best path.
It is acknowledged in the current invention that the HMM can be used to solve the problem of optimal alignment of an observed sequence to the states of the Hidden Markov Model. Furthermore, the Viterbi algorithm can be used in connection with the HMM to find the optimal alignment. More information about the Hidden Markov Models and their applications can be found e.g. from the book “Speech Recognition System Design and Implementation Issues”, pp. 322–342.
First, for a given letter-phoneme pair, the penalties p(f|l) are initialised with zero if the phoneme f can be found in the list of the allowed phonemes of the letter l, otherwise they are initialised with large positive values. With the initial penalty values, the dictionary is aligned in two steps. In the first step, all possible alignments are generated for each entry in the dictionary. Based on all the aligned entries, the penalty values are then re-scored. In the second step, only a single best alignment is found for each entry.
For each entry, the optimal alignment is found with the Viterbi algorithm on the grapheme HMM. The grapheme HMM has entry (ES), exit (EXS) and letter states (S1, S2 and S3). The letters that may map to pseudophonemes are handled by having a duration state (EPS). The states 1 to 3 (S1, S2, S3) are the states that correspond to the letters in the word. State 2 (S2) corresponds to a letter that may produce a pseudophoneme. Skips from all previous states to the current state are allowed in order to support the phonemic epsilons.
Each state and the duration state hold a token that contains a cumulative penalty (as a sum of logarithmic probabilities) of aligning the phoneme sequence against the grapheme HMM and the state sequences that correspond to the cumulative score. The phoneme sequence is aligned against letters by going through the phoneme sequence from the beginning to the end one phoneme at a time. In order to find the Viterbi alignment between the letters and the phonemes, token passing is carried out. As the tokens pass from one state to another, they gather the penalty from each state. Token passing may also involve splitting tokens and combining or selecting tokens to enter the next state. The token that in the end has the lowest cumulative penalty is found over all the states of the HMM. Based on the state sequence of the token, the alignment between the letters of the word and the phonemes can be determined.
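The token-passing search above can be approximated with ordinary dynamic programming. The sketch below is a deliberate simplification of the grapheme-HMM Viterbi alignment: each letter consumes either one phoneme or a phonemic epsilon, pseudophonemes and the two-pass penalty re-scoring are omitted, and the allowed-phoneme lists and penalty values are illustrative assumptions rather than the patent's trained values:

```python
def align(letters, phonemes, penalty):
    """Simplified Viterbi-style DP alignment: each letter is assigned
    one phoneme or a phonemic epsilon 'eps'; penalty(l, p) is low for
    allowed letter-phoneme pairs and high otherwise."""
    INF = float("inf")
    n, m = len(letters), len(phonemes)
    # cost[i][j]: best penalty aligning first i letters to first j phonemes
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(m + 1):
            # option 1: letter i aligned to a phonemic epsilon
            c = cost[i - 1][j] + penalty(letters[i - 1], "eps")
            if c < cost[i][j]:
                cost[i][j], back[i][j] = c, (i - 1, j, "eps")
            # option 2: letter i aligned to phoneme j
            if j > 0:
                c = cost[i - 1][j - 1] + penalty(letters[i - 1], phonemes[j - 1])
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i - 1, j - 1, phonemes[j - 1])
    out, i, j = [], n, m          # trace the best path back to the start
    while i > 0:
        i, j, p = back[i][j]
        out.append(p)
    return out[::-1]

# Hypothetical allowed-phoneme lists for the letters of "father".
ALLOWED = {"f": {"f"}, "a": {"A:"}, "t": {"D"}, "h": set(), "e": set(), "r": {"@"}}
pen = lambda l, p: 0.0 if (p == "eps" and not ALLOWED.get(l)) or p in ALLOWED.get(l, set()) else 1.0
print(align("father", ["f", "A:", "D", "@"], pen))  # ['f', 'A:', 'D', 'eps', 'eps', '@']
```

For the example word, the sketch recovers the alignment 'f A: D ε ε @' shown earlier.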
The alignment works properly for most entries, but there are some special entries that cannot be aligned. In such cases, another simple alignment is applied: graphemic or phonemic epsilons are added to the end of the letter or phoneme sequences.
FIG. 4 illustrates in more detail the pre-processing of the entry used as an example according to a preferred embodiment of the invention.
The original entry (400) has the two parts, a text sequence ‘father’ and a phoneme sequence ‘f A: D @’. These two sequences are separated with a whitespace character and also the phoneme units are separated with whitespace characters.
In aligning (402) the phonemic and graphemic epsilons are added to have an equal number of units in both sequences. In the example word two phonemic epsilons are needed and the result of the phoneme sequence is ‘f A: D ε ε @’.
The mapping (404) of the phoneme units into one symbol representation changes only the phoneme sequence. After mapping, the phoneme sequence of the example word is ‘f A D _ _ @’.
When the entry is mapped (404) it is possible to remove the white space characters (406). As a result there is one string ‘fatherfAD__@’.
The last step is interleaving (408) and the example entry is ‘ffaAtDh_e_r@’. Now the entry can be processed further, for instance, it can be compressed.
All these steps are described in more detail in FIG. 2.
The pre-processing method described above, including also mapping (202), was tested experimentally. The experiment was carried out using the Carnegie Mellon University Pronouncing Dictionary, a pronunciation dictionary for North American English that contains more than 100,000 words and their transcriptions. In the experiment the performance was evaluated first by using typical dictionary-based compression methods, LZ77 and LZW, and a statistics-based compression method, second-order arithmetic coding. The performance was then tested with the preprocessing method together with the same compression methods (LZ77, LZW and arithmetic). In Table 1 the results, given in kilobytes, show that the preprocessing method performs better in all cases. In general, it can be used with any compression algorithm.
TABLE 1
Compression performance comparison, tested using the
CMU English pronunciation dictionary. The results are in kilobytes.
Method        Before compression    Compr. without pre-proc.    Compr. with pre-proc.    Improvement
LZ77          2580                  1181                        940                      20.4%
LZW           2580                  1315                        822                      37.5%
Arithmetic    2580                  899                         501                      44.3%
As we can see from Table 1, the pre-processing improves the compression with all compression methods. Combined with the LZ77 compression method, the pre-processing improved the compression by over 20%. The improvement is even larger when the pre-processing was combined with the LZW method or with the Arithmetic method, providing about 40% better compression.
It should be understood that the invention can be applied to any general-purpose dictionary used in speech recognition and speech synthesis, or to any application in which a pronunciation dictionary needs to be stored with efficient memory usage. It is also possible to apply the invention to the compression of any other lists comprising groups of textual entries that have a high correlation on the character level, for example common dictionaries showing all the forms of a word, and spell-checker programs.
FIG. 5 illustrates an electronic device (ED) only for the parts relevant to a preferred embodiment of the invention. The electronic device (ED) can be e.g. a PDA device, a mobile terminal, a personal computer (PC) or even any accessory device intended to be used with these, e.g. an intelligent head-set or a remote control device. The electronic device (ED) comprises I/O means (IO), a central processing unit (PRO) and memory (ME). The memory (ME) comprises a read-only memory ROM portion and a rewriteable portion, such as a random access memory RAM and FLASH memory. The information used for communicating with different external parties, e.g. the network, other devices or the user, is transmitted through the I/O means (IO) to/from the central processing unit (PRO). The user interface, such as a microphone or a keypad enabling a character sequence to be fed into the device, is thus part of the I/O means (IO). A pre-processed pronunciation dictionary can be downloaded from the data processing device (TE) into the electronic device (ED) through the I/O means (IO), for example, as a download from the network. The dictionary is then stored into the memory (ME) for further usage.
The steps shown in FIGS. 6 and 7 may be implemented with a computer program code executed in the central processing unit (PRO) of the electronic device (ED). The computer program can be loaded into the central processing unit (PRO) through the I/O means (IO). The implementation can also be done with a hardware solution (e.g. ASIC) or with a combination of these two. According to one preferred embodiment, the phoneme dictionary stored in the memory (ME) of the device (ED) is pre-processed as described in FIG. 2.
In FIG. 6 the central processing unit (PRO) of the electronic device (ED) receives a text string input that needs to be converted into a pronunciation model. The input text string may be for instance a name the user has added, using the I/O means (IO), to a contact database of the electronic device (ED). First a matching entry needs to be found (600) from the pre-processed pronunciation dictionary that is stored in the memory (ME). Finding the matching entry is based on comparing the input text string to the character units of the entries. Because the entries are interleaved, an entry string is a combination of character and phoneme units. If the interleaving is done according to the preferred embodiment described in FIG. 2, only every second unit is used when comparing the input string to the entry. The character units of the entry can be found by selecting the odd units, starting from the first. The comparison is made with the original character string of the entry, and therefore empty spaces, e.g. graphemic epsilons, are ignored. Several methods and algorithms for finding the matching entry are known to a skilled person as such, and there is no need to describe them here, since they are not a part of the invention. When the character units exactly match the units of the input text string, the matching entry is found. However, it should be understood that in some applications it might be advantageous to use a non-exact matching algorithm instead, for example one utilizing so-called wildcards.
When the matching entry is found, the phoneme units of the entry are selected (602). Because of the interleaving (done according to the preferred embodiment described in FIG. 2), every second unit of the entry string is used. In order to determine the phoneme units, the selection is started from the second unit. The selected units can then be concatenated to create the sequence of phonemic units.
As the entries are aligned, the sequence of phoneme units may include empty spaces, e.g. phonemic epsilons. The empty spaces are removed in order to create a sequence consisting only of phonemes (604).
If the pre-processing of the phoneme dictionary also included mapping, a reversed mapping is needed (606). The reversed mapping can be carried out using a similar mapping table as the one used during the pre-processing, but in a reverse order. This step changes the first representation method, e.g. one character representation, of the phonemic units into the second representation method, e.g. SAMPA, that is used in the system.
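Steps 600 to 606 amount to de-interleaving the entry string and reversing the mapping. A minimal sketch follows; the reverse mapping table covers only the example's symbols and, like the '_' epsilon symbol, is an assumption for illustration:

```python
# Hypothetical reverse of the mapping table used during pre-processing.
# The '_' symbol denotes an epsilon and is simply dropped.
SYMBOL_TO_PHONEME = {"f": "f", "A": "A:", "D": "D", "@": "@"}

def entry_text(entry):
    """Character units: every second unit, starting from the first."""
    return entry[0::2]

def entry_phonemes(entry):
    """Phoneme units: every second unit starting from the second; drop
    epsilons ('_') and reverse the one-symbol mapping (steps 602-606)."""
    return [SYMBOL_TO_PHONEME[s] for s in entry[1::2] if s != "_"]

entry = "ffaAtDh_e_r@"
print(entry_text(entry))      # father
print(entry_phonemes(entry))  # ['f', 'A:', 'D', '@']
```

The same `entry_text` selection is what the speech-to-text direction of FIG. 7 uses to recover the character sequence.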
When the sequence of phoneme units is created, it is typically further processed, e.g. a pronunciation model of the sequence is created. According to one embodiment a pronunciation model is created for each phoneme using e.g. an HMM algorithm. The phoneme pronunciation models are stored in the memory (ME). To create a pronunciation model of an entry, a pronunciation model for each phoneme of the phoneme sequence is retrieved from the memory (608). These phoneme models are then concatenated (610), and the pronunciation model for the phoneme sequence is created.
The converting of a text string input into a pronunciation model described above can also be distributed between two electronic devices. For instance, the pre-processed dictionary is stored in the first electronic device, e.g. in the network, where the finding of a matching entry (600) is performed. The matching entry is then transferred to the second electronic device, e.g. a mobile terminal, where the rest of the process (steps 602–610) is performed.
FIG. 7 illustrates one preferred embodiment of converting a speech information into a sequence of character units in an electronic device (ED) that utilises a pre-processed pronunciation dictionary. The central processing unit (PRO) of the electronic device (ED) receives a speech information input through the I/O means (IO). This speech information needs to be converted into a sequence of character units for further usage e.g. to show it as text on the display or to compare it with a text string of a pre-determined speech command of a speech controlled device.
Finding a matching entry (702) is based on comparing the input speech information to the pronunciation models of each entry in the pronunciation dictionary. Therefore, before the comparison, the pronunciation of each entry is modelled (700). According to one preferred embodiment, the models are created in the electronic device (ED). The phoneme dictionary is already interleaved and aligned, so the modelling can be done as described in FIG. 6, following the steps 602–610. When the modelling is done in the electronic device (ED), the need for processing capacity and working memory is increased. In return, the memory consumption for storing the pronunciation dictionary can be kept low.
According to a second preferred embodiment, the models are created before the pre-processing of the pronunciation dictionary in the data processing device (TE). The modelling can be done as described in FIG. 6, following the steps 608 and 610. Because the modelling is done before the pre-processing and the dictionary is not yet interleaved, aligned or mapped, the steps 602–606 are not needed. The pronunciation model is then stored into the memory (MEM) together with the entry. When the dictionary is transferred to the electronic device (ED), the models are transferred as well. In this solution, less processing capacity and working memory is needed for converting speech information into a text sequence. In return, the memory consumption of the storage memory (ME) is increased.
The finding of a matching entry (702) is done using the input speech information and the pronunciation models of the entries stored in the memory (ME). The speech information is compared with each entry, and a probability of how well the input speech information matches each entry's pronunciation model is computed. After computing the probabilities, the matching entry can be found by selecting the entry with the highest probability.
The character units are then selected from the matching entry (704). Because of the interleaving, done as described in FIG. 2, every second unit of the entry string is used. The selecting must start from the first unit to obtain the character units. These selected units can then be concatenated to form a sequence of graphemic units.
Because of the aligning, the sequence of graphemic units may include empty placeholders, i.e. graphemic epsilons. To create a sequence that contains only graphemes, the empty placeholders are removed (706). The result is a text string that can be used further in the system.
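Steps 704 and 706 together can be sketched as follows, assuming an entry stored as a list of units. The epsilon symbol `'_'` and the function name `entry_to_text` are illustrative choices, not taken from the patent:

```python
# Sketch of steps 704-706: recover the text string from an interleaved,
# aligned entry. '_' stands in for the graphemic/phonemic epsilon symbol.

def entry_to_text(entry_units, epsilon="_"):
    """De-interleave the entry and strip graphemic epsilons."""
    # Step 704: every second unit, starting from the first, is a character unit.
    character_units = entry_units[0::2]
    # Step 706: remove graphemic epsilons, keep only real graphemes.
    graphemes = [u for u in character_units if u != epsilon]
    return "".join(graphemes)
```

For example, for the word "call" interleaved with the phoneme units k, O, l and a trailing phonemic epsilon, the entry `['c','k','a','O','l','l','l','_']` yields the text "call".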
An electronic device, e.g. a mobile phone with a car user interface, has speaker-independent voice recognition for voice commands. Each voice command is an entry in the pronunciation dictionary. The user wants to make a phone call while driving. When the voice recognition is active, the user says ‘CALL’. The phone receives the voice command with a microphone and transmits the speech information through the I/O means to the central processing unit. The central processing unit converts the speech input into a text sequence as described in FIG. 7. The text sequence is transmitted through the I/O means to the display to give the user feedback on what the device is doing. Besides the text on the screen, the device also gives audio feedback: the pronunciation model of the matching entry, which was created as part of the speech-to-text conversion process, is transferred through the I/O means to the loudspeaker. The phone then makes a phone call to the number that the user has selected.
The accompanying drawings and the description pertaining to them are only intended to illustrate the present invention. Different variations and modifications to the invention will be apparent to those skilled in the art, without departing from the scope and spirit of the invention defined in the appended claims.

Claims (15)

1. A method for pre-processing a pronunciation dictionary for compression in a data processing device, the pronunciation dictionary comprising at least one entry, the entry comprising a sequence of character units and a sequence of phoneme units,
the method comprising:
aligning said sequence of character units and said sequence of phoneme units using a statistical algorithm so that the alignment between said character units and said phoneme units is determined; and
interleaving said aligned sequence of character units and said aligned sequence of phoneme units by inserting each phoneme unit at a predetermined location relative to the corresponding character unit.
2. The method of claim 1, wherein said alignment is determined by employing the statistical algorithm, an HMM-Viterbi algorithm.
3. The method of claim 1, wherein said phoneme units are located next to corresponding character units.
4. The method of claim 1, wherein said aligned sequence of character units and said aligned sequence of phoneme units are made to include an equal number of units by at least one of the following insertions:
inserting graphemic epsilons to said sequence of character units; and
inserting phonemic epsilons into said sequence of phoneme units.
5. The method of claim 1, wherein said character units are letters or white space characters.
6. The method of claim 1, wherein said phoneme units are letters or whitespace characters representing a single phoneme or a phonemic epsilon and one said unit is denoted by at least one character.
7. The method of claim 1, the method further comprising:
mapping each phoneme unit into one symbol.
8. A computer program product loadable into the memory of a data processing device, comprising a code which is executable in the data processing device causing the data processing device to:
retrieve from the memory a pronunciation dictionary comprising at least one entry, the entry comprising a sequence of character units and a sequence of phoneme units;
align said sequence of character units and said sequence of phoneme units using a statistical algorithm; and
interleave said aligned sequence of character units and said aligned sequence of phoneme units by inserting each phoneme unit at a predetermined location relative to the corresponding character unit.
9. A data processing device comprising memory for storing a pronunciation dictionary comprising at least one entry, the entry comprising a sequence of character units and a sequence of phoneme units, wherein
the data processing device is configured to retrieve from the memory a pronunciation dictionary comprising at least one entry;
the data processing device is configured to align said sequence of character units and said sequence of phoneme units using a statistical algorithm; and
the data processing device is configured to interleave said aligned sequence of character units and said aligned sequence of phoneme units by inserting each phoneme unit at a predetermined location relative to the corresponding character unit.
10. The data processing device of claim 9, wherein the data processing device is configured to determine said alignment by employing the statistical algorithm, an HMM-Viterbi algorithm.
11. The data processing device of claim 9, wherein the data processing device is configured to locate said phoneme units next to corresponding character units.
12. The data processing device of claim 9, wherein the data processing device is configured to cause said aligned sequence of character units and said aligned sequence of phoneme units to include an equal number of units by at least one of the following insertions:
inserting graphemic epsilons to said sequence of character units; and
inserting phonemic epsilons into said sequence of phoneme units.
13. The data processing device of claim 9, wherein said character units are letters or whitespace characters.
14. The data processing device of claim 9, wherein said phoneme units are letters or whitespace characters representing a single phoneme or a phonemic epsilon and one said unit is denoted by at least one character.
15. The data processing device of claim 9, wherein the data processing device is configured to map each phoneme unit into one symbol.
US10/292,122 2001-11-12 2002-11-11 Method for compressing dictionary data Expired - Fee Related US7181388B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20012193 2001-11-12
FI20012193A FI114051B (en) 2001-11-12 2001-11-12 Procedure for compressing dictionary data

Publications (2)

Publication Number Publication Date
US20030120482A1 US20030120482A1 (en) 2003-06-26
US7181388B2 true US7181388B2 (en) 2007-02-20

Family

ID=8562237

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/292,122 Expired - Fee Related US7181388B2 (en) 2001-11-12 2002-11-11 Method for compressing dictionary data
US11/605,655 Abandoned US20070073541A1 (en) 2001-11-12 2006-11-29 Method for compressing dictionary data

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/605,655 Abandoned US20070073541A1 (en) 2001-11-12 2006-11-29 Method for compressing dictionary data

Country Status (12)

Country Link
US (2) US7181388B2 (en)
EP (1) EP1444685B1 (en)
JP (1) JP2005509905A (en)
KR (1) KR100597110B1 (en)
CN (1) CN1269102C (en)
AT (1) ATE361523T1 (en)
BR (1) BR0214042A (en)
CA (1) CA2466652C (en)
DE (1) DE60219943T2 (en)
ES (1) ES2284932T3 (en)
FI (1) FI114051B (en)
WO (1) WO2003042973A1 (en)

Cited By (130)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050192793A1 (en) * 2004-02-27 2005-09-01 Dictaphone Corporation System and method for generating a phrase pronunciation
US20070203701A1 (en) * 2006-02-14 2007-08-30 Intellectual Ventures Fund 21 Llc Communication Device Having Speaker Independent Speech Recognition
US20070239634A1 (en) * 2006-04-07 2007-10-11 Jilei Tian Method, apparatus, mobile terminal and computer program product for providing efficient evaluation of feature transformation
US20090070380A1 (en) * 2003-09-25 2009-03-12 Dictaphone Corporation Method, system, and apparatus for assembly, transport and display of clinical data
US20090089048A1 (en) * 2007-09-28 2009-04-02 Microsoft Corporation Two-Pass Hash Extraction of Text Strings
US20100082349A1 (en) * 2008-09-29 2010-04-01 Apple Inc. Systems and methods for selective text to speech synthesis
US20100082327A1 (en) * 2008-09-29 2010-04-01 Apple Inc. Systems and methods for mapping phonemes for text to speech synthesis
US20100082344A1 (en) * 2008-09-29 2010-04-01 Apple, Inc. Systems and methods for selective rate of speech and speech preferences for text to speech synthesis
US20100214136A1 (en) * 2009-02-26 2010-08-26 James Paul Schneider Dictionary-based compression
US20100228549A1 (en) * 2009-03-09 2010-09-09 Apple Inc Systems and methods for determining the language to use for speech generated by a text to speech engine
US8543378B1 (en) * 2003-11-05 2013-09-24 W.W. Grainger, Inc. System and method for discerning a term for an entry having a spelling error
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US9135912B1 (en) * 2012-08-15 2015-09-15 Google Inc. Updating phonetic dictionaries
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9606986B2 (en) 2014-09-29 2017-03-28 Apple Inc. Integrated word N-gram and class M-gram language models
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US10607140B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050190895A1 (en) * 2004-03-01 2005-09-01 Lloyd Ploof Remotely programmable messaging apparatus and method thereof
JP2006047866A (en) * 2004-08-06 2006-02-16 Canon Inc Electronic dictionary device and control method thereof
GB0704772D0 (en) * 2007-03-12 2007-04-18 Mongoose Ventures Ltd Aural similarity measuring system for text
US20090299731A1 (en) * 2007-03-12 2009-12-03 Mongoose Ventures Limited Aural similarity measuring system for text
DE102012202407B4 (en) * 2012-02-16 2018-10-11 Continental Automotive Gmbh Method for phonetizing a data list and voice-controlled user interface
WO2014203370A1 (en) * 2013-06-20 2014-12-24 株式会社東芝 Speech synthesis dictionary creation device and speech synthesis dictionary creation method
US10127904B2 (en) * 2015-05-26 2018-11-13 Google Llc Learning pronunciations from acoustic sequences
KR102443087B1 (en) 2015-09-23 2022-09-14 삼성전자주식회사 Electronic device and voice recognition method thereof
US10387543B2 (en) * 2015-10-15 2019-08-20 Vkidz, Inc. Phoneme-to-grapheme mapping systems and methods
CN105893414A (en) * 2015-11-26 2016-08-24 乐视致新电子科技(天津)有限公司 Method and apparatus for screening valid term of a pronunciation lexicon
US10706840B2 (en) 2017-08-18 2020-07-07 Google Llc Encoder-decoder models for sequence to sequence mapping
CN109982111B (en) * 2017-12-28 2020-05-22 贵州白山云科技股份有限公司 Text content transmission optimization method and device based on live broadcast network system
US10943580B2 (en) * 2018-05-11 2021-03-09 International Business Machines Corporation Phonological clustering
US11210465B2 (en) * 2019-08-30 2021-12-28 Microsoft Technology Licensing, Llc Efficient storage and retrieval of localized software resource data
CN113707137B (en) * 2021-08-30 2024-02-20 普强时代(珠海横琴)信息技术有限公司 Decoding realization method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4759068A (en) 1985-05-29 1988-07-19 International Business Machines Corporation Constructing Markov models of words from multiple utterances
US5845238A (en) 1996-06-18 1998-12-01 Apple Computer, Inc. System and method for using a correspondence table to compress a pronunciation guide
US5861827A (en) * 1996-07-24 1999-01-19 Unisys Corporation Data compression and decompression system with immediate dictionary updating interleaved with string search
US5930754A (en) 1997-06-13 1999-07-27 Motorola, Inc. Method, device and article of manufacture for neural-network based orthography-phonetics transformation
US6789066B2 (en) * 2001-09-25 2004-09-07 Intel Corporation Phoneme-delta based speech compression
US7080005B1 (en) * 1999-07-19 2006-07-18 Texas Instruments Incorporated Compact text-to-phone pronunciation dictionary

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6233553B1 (en) * 1998-09-04 2001-05-15 Matsushita Electric Industrial Co., Ltd. Method and system for automatically determining phonetic transcriptions associated with spelled words
US6208968B1 (en) * 1998-12-16 2001-03-27 Compaq Computer Corporation Computer method and apparatus for text-to-speech synthesizer dictionary reduction
US6363342B2 (en) * 1998-12-18 2002-03-26 Matsushita Electric Industrial Co., Ltd. System for developing word-pronunciation pairs
DE19942178C1 (en) * 1999-09-03 2001-01-25 Siemens Ag Method of preparing database for automatic speech processing enables very simple generation of database contg. grapheme-phoneme association

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4759068A (en) 1985-05-29 1988-07-19 International Business Machines Corporation Constructing Markov models of words from multiple utterances
US5845238A (en) 1996-06-18 1998-12-01 Apple Computer, Inc. System and method for using a correspondence table to compress a pronunciation guide
US6178397B1 (en) 1996-06-18 2001-01-23 Apple Computer, Inc. System and method for using a correspondence table to compress a pronunciation guide
US5861827A (en) * 1996-07-24 1999-01-19 Unisys Corporation Data compression and decompression system with immediate dictionary updating interleaved with string search
US5930754A (en) 1997-06-13 1999-07-27 Motorola, Inc. Method, device and article of manufacture for neural-network based orthography-phonetics transformation
US7080005B1 (en) * 1999-07-19 2006-07-18 Texas Instruments Incorporated Compact text-to-phone pronunciation dictionary
US6789066B2 (en) * 2001-09-25 2004-09-07 Intel Corporation Phoneme-delta based speech compression

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Decision Tree Based Text-to-Phoneme Mapping for Speech Recognition (presented Oct. 16–20, 2000, Beijing).
C.E. Shannon, A Mathematical Theory of Communication, The Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, Jul./Oct. 1948.
Speech Recognition System Design and Implementation Issues, pp. 322–342.

Cited By (183)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US20090070380A1 (en) * 2003-09-25 2009-03-12 Dictaphone Corporation Method, system, and apparatus for assembly, transport and display of clinical data
US8543378B1 (en) * 2003-11-05 2013-09-24 W.W. Grainger, Inc. System and method for discerning a term for an entry having a spelling error
US20050192793A1 (en) * 2004-02-27 2005-09-01 Dictaphone Corporation System and method for generating a phrase pronunciation
US20090112587A1 (en) * 2004-02-27 2009-04-30 Dictaphone Corporation System and method for generating a phrase pronunciation
US7783474B2 (en) * 2004-02-27 2010-08-24 Nuance Communications, Inc. System and method for generating a phrase pronunciation
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US20070203701A1 (en) * 2006-02-14 2007-08-30 Intellectual Ventures Fund 21 Llc Communication Device Having Speaker Independent Speech Recognition
US7480641B2 (en) * 2006-04-07 2009-01-20 Nokia Corporation Method, apparatus, mobile terminal and computer program product for providing efficient evaluation of feature transformation
US20070239634A1 (en) * 2006-04-07 2007-10-11 Jilei Tian Method, apparatus, mobile terminal and computer program product for providing efficient evaluation of feature transformation
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8078454B2 (en) 2007-09-28 2011-12-13 Microsoft Corporation Two-pass hash extraction of text strings
US20090089048A1 (en) * 2007-09-28 2009-04-02 Microsoft Corporation Two-Pass Hash Extraction of Text Strings
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US8712776B2 (en) 2008-09-29 2014-04-29 Apple Inc. Systems and methods for selective text to speech synthesis
US20100082349A1 (en) * 2008-09-29 2010-04-01 Apple Inc. Systems and methods for selective text to speech synthesis
US20100082327A1 (en) * 2008-09-29 2010-04-01 Apple Inc. Systems and methods for mapping phonemes for text to speech synthesis
US20100082344A1 (en) * 2008-09-29 2010-04-01 Apple, Inc. Systems and methods for selective rate of speech and speech preferences for text to speech synthesis
US8352268B2 (en) 2008-09-29 2013-01-08 Apple Inc. Systems and methods for selective rate of speech and speech preferences for text to speech synthesis
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US20100214136A1 (en) * 2009-02-26 2010-08-26 James Paul Schneider Dictionary-based compression
US7872596B2 (en) * 2009-02-26 2011-01-18 Red Hat, Inc. Dictionary-based compression
US20100228549A1 (en) * 2009-03-09 2010-09-09 Apple Inc Systems and methods for determining the language to use for speech generated by a text to speech engine
US8751238B2 (en) 2009-03-09 2014-06-10 Apple Inc. Systems and methods for determining the language to use for speech generated by a text to speech engine
US8380507B2 (en) 2009-03-09 2013-02-19 Apple Inc. Systems and methods for determining the language to use for speech generated by a text to speech engine
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US12087308B2 (en) 2010-01-18 2024-09-10 Apple Inc. Intelligent automated assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10984327B2 (en) 2010-01-25 2021-04-20 New Valuexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984326B2 (en) 2010-01-25 2021-04-20 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607140B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607141B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US11410053B2 (en) 2010-01-25 2022-08-09 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9135912B1 (en) * 2012-08-15 2015-09-15 Google Inc. Updating phonetic dictionaries
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9606986B2 (en) 2014-09-29 2017-03-28 Apple Inc. Integrated word N-gram and class M-gram language models
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services

Also Published As

Publication number Publication date
CA2466652A1 (en) 2003-05-22
KR100597110B1 (en) 2006-07-04
ATE361523T1 (en) 2007-05-15
DE60219943D1 (en) 2007-06-14
FI114051B (en) 2004-07-30
EP1444685A1 (en) 2004-08-11
ES2284932T3 (en) 2007-11-16
US20070073541A1 (en) 2007-03-29
WO2003042973A1 (en) 2003-05-22
FI20012193A (en) 2003-05-13
CA2466652C (en) 2008-07-22
BR0214042A (en) 2004-10-13
JP2005509905A (en) 2005-04-14
EP1444685B1 (en) 2007-05-02
CN1269102C (en) 2006-08-09
US20030120482A1 (en) 2003-06-26
CN1585968A (en) 2005-02-23
FI20012193A0 (en) 2001-11-12
KR20050044399A (en) 2005-05-12
DE60219943T2 (en) 2008-01-17

Similar Documents

Publication Publication Date Title
US7181388B2 (en) Method for compressing dictionary data
US6684185B1 (en) Small footprint language and vocabulary independent word recognizer using registration by word spelling
CA2130218C (en) Data compression for speech recognition
US7299179B2 (en) Three-stage individual word recognition
US7574411B2 (en) Low memory decision tree
US20080126093A1 (en) Method, Apparatus and Computer Program Product for Providing a Language Based Interactive Multimedia System
US20070078653A1 (en) Language model compression
EP1291848A2 (en) Multilingual pronunciations for speech recognition
EP1668628A1 (en) Method for synthesizing speech
WO2004036939A1 (en) Portable digital mobile communication apparatus, method for controlling speech and system
US7676364B2 (en) System and method for speech-to-text conversion using constrained dictation in a speak-and-spell mode
EP0562138A1 (en) Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary
Mérialdo Multilevel decoding for very-large-size-dictionary speech recognition
JP2002221989A (en) Method and apparatus for text input
JP4230142B2 (en) Hybrid oriental character recognition technology using keypad / speech in adverse environment
EP0423800B1 (en) Speech recognition system
US7865363B2 (en) System and method for computer recognition and interpretation of arbitrary spoken-characters
Kao et al. A low cost dynamic vocabulary speech recognizer on a GPP-DSP system
Tian Efficient compression method for pronunciation dictionaries.
KR20010085219A (en) Speech recognition device including a sub-word memory
Tsai et al. Pronunciation variation analysis with respect to various linguistic levels and contextual conditions for Mandarin Chinese.
KR100677197B1 (en) Voice recognizing dictation method
Meron et al. Compression of exception lexicons for small footprint grapheme-to-phoneme conversion
KR20030080155A (en) Voice recognition unit using dictionary for pronunciation limitation
Georgila et al. Large Vocabulary Search Space Reduction Employing Directed Acyclic Word Graphs and Phonological Rules

Legal Events

Date Code Title Description
AS Assignment

Owner name: NOKIA CORPORATION, FINLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TIAN, JILEI;REEL/FRAME:013731/0001

Effective date: 20030109

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20110220