US20020091511A1 - Mobile terminal controllable by spoken utterances
- Publication number: US20020091511A1
- Authority: United States (US)
- Prior art keywords
- network server
- acoustic models
- mobile terminal
- database
- phonetic
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/26—Devices for calling a subscriber
- H04M1/27—Devices whereby a plurality of signals may be stored simultaneously
- H04M1/271—Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
Definitions
- the invention relates to the field of automatic speech recognition and more particularly to a mobile terminal which is controllable by spoken utterances like proper names and command words.
- the invention further relates to a method for providing acoustic models for automatic speech recognition in such a mobile terminal.
- Many mobile terminals like mobile telephones or personal digital assistants comprise the feature of controlling one or more functions by means of uttering corresponding keywords.
- there exist, for example, mobile telephones which allow the answering of a call or the administration of a telephone book by uttering command words.
- some mobile telephones also allow so-called voice dialling, which is initiated by uttering a person's name.
- Controlling a mobile terminal by spoken utterances necessitates employment of automatic speech recognition.
- an automatic speech recognizer compares previously generated acoustic models with a detected spoken utterance.
- the acoustic models can be generated in a speaker dependent or a speaker independent manner.
- the use of speaker dependent speech recognition, and thus of speaker dependent acoustic models, necessitates that an individual user of the mobile terminal trains a vocabulary based on which automatic speech recognition is performed. The training is usually done by speaking a specific keyword one or several times in order to generate the corresponding speaker dependent acoustic model.
- Speech recognition in mobile terminals based on speaker dependent acoustic models is not always an optimal solution.
- the requirement of a separate training for each keyword which is to be used for controlling the mobile terminal is time demanding and perceived as cumbersome by the user.
- since the speaker dependent acoustic models are usually stored in the mobile terminal itself, the acoustic models generated by means of a training process are only available for this single mobile terminal. This means that if the user buys a new mobile terminal, the time demanding training process has to be repeated.
- an alternative is speaker independent speech recognition, i.e., speech recognition based on speaker independent acoustic models.
- speaker independent speech recognition is appropriate if the spoken keywords for controlling the mobile terminal constitute a limited set of command words which are predefined, i.e., not defined by the user of the mobile terminal.
- the speaker independent references may be generated by averaging the spoken utterances of a large number of different speakers and may be stored in the mobile terminal prior to its sale.
- the present invention satisfies this need by providing a network server for mobile terminals which are controllable by spoken utterances, the network server comprising a unit for providing acoustic models for automatic recognition of the spoken utterances, the unit for providing acoustic models translating a textual transcription of a spoken utterance into a sequence of phonetic transcription units and the sequence of phonetic transcription units into a sequence of phonetic recognition units, the sequence of phonetic recognition units forming an acoustic model of the spoken utterance.
- the network server further comprises an interface for transmitting the acoustic models to the mobile terminals.
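- by way of illustration only, the following Python sketch mimics this server-side pipeline; the function names and the toy pronunciation and recognition tables are invented stand-ins for the patent's databases, not part of the patent:

```python
# Hypothetical sketch of the server-side model building described above:
# textual transcription -> phonetic transcription units -> phonetic
# recognition units, whose concatenation forms the acoustic model.

PRONUNCIATION_DB = {            # word -> phonetic transcription units (toy data)
    "tom":    ["t", "o", "m"],
    "stefan": ["sh", "t", "e", "f", "a", "n"],
}

RECOGNITION_DB = {              # phonetic unit -> recognition unit, mocked here
    "t": [0.1, 0.9], "o": [0.4, 0.2], "m": [0.7, 0.3],   # as short feature vectors
    "sh": [0.5, 0.5], "e": [0.2, 0.6], "f": [0.8, 0.1],
    "a": [0.3, 0.3], "n": [0.6, 0.4],
}

def phonetic_transcription(textual_transcription: str) -> list[str]:
    """Translate a textual transcription into phonetic transcription units."""
    return PRONUNCIATION_DB[textual_transcription.lower()]

def acoustic_model(textual_transcription: str) -> list[list[float]]:
    """Concatenate the corresponding phonetic recognition units."""
    return [RECOGNITION_DB[u] for u in phonetic_transcription(textual_transcription)]

print(acoustic_model("Tom"))    # [[0.1, 0.9], [0.4, 0.2], [0.7, 0.3]]
```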
- the network server's as well as each mobile terminal's interface can be configured as one or more additional hardware components or as a software solution for operating already existing hardware components.
- the invention further provides a mobile terminal which is controllable by spoken utterances like a proper name or a command word and which comprises an interface for receiving from a network server acoustic models which were created on the basis of textual transcriptions of the spoken utterances, the received acoustic models being comprised of a sequence of phonetic recognition units, each phonetic recognition unit being derived from a corresponding phonetic transcription unit.
- the mobile terminal further comprises an automatic speech recognizer for recognizing the spoken utterances based on the phonetic recognition units of the received acoustic models.
- the acoustic models to be used for automatic speech recognition are thus provided by the network server, which transmits the acoustic models to a mobile terminal.
- the mobile terminal recognizes spoken utterances based on the phonetic recognition units of the acoustic models transmitted by and received from the network server.
- the acoustic models are provided centrally and for a plurality of mobile terminals by a single network server.
- the acoustic models provided by the network server can be both speaker dependent and speaker independent.
- the network server may provide the acoustic models e.g. by storing the acoustic models to be downloaded by the mobile terminal in a network server database or by generating the acoustic models to be downloaded on demand.
- the computational and memory resources required for generating the speaker independent acoustic models are located on the side of the network server and shared by a plurality of mobile terminals. Consequently, mobile terminals can be controlled by freely chosen spoken utterances and based on speaker independent speech recognition without a significant increase of the hardware requirements for the mobile terminals. Moreover, the mobile terminals themselves can be kept language independent and country independent since any language dependent resources necessitated by speaker independent voice recognition can be transferred from the mobile terminal to the network server. Additionally, since speaker independent voice recognition is used, the mobile terminal requires no user training prior to controlling the mobile terminal by spoken utterances.
- in case speaker dependent acoustic models are used, the speaker dependent acoustic models need only be trained once and can then be stored on the network server. Consequently, the speaker dependent acoustic models can be transmitted from the network server to any mobile terminal a user intends to control by spoken utterances. If, e.g., the user buys a new mobile terminal, no further training is required to control this new mobile terminal by spoken utterances. The user merely needs to e.g. load the speaker dependent acoustic models from his old mobile terminal to the network server and to subsequently re-load these acoustic models from the network server into the new mobile terminal. Of course, this also works with speaker independent acoustic models.
- the invention therefore allows the computational requirements of mobile terminals to be reduced if speaker independent acoustic models are used for automatic speech recognition. If speaker dependent acoustic models are used, a single training process suffices in order to control a plurality of mobile terminals by automatic speech recognition.
- speaker independent acoustic models are generated based on textual transcriptions (e.g. in the ASCII format) of the spoken utterances.
- the textual transcriptions of the spoken utterances may be contained in a database for textual transcriptions within the mobile terminal.
- the interface of the mobile terminal can be configured such that it allows the textual transcriptions to be transmitted from the mobile terminal to the network server.
- the interface of the network server, on the other hand, can be configured such that it allows the textual transcriptions to be received from the mobile terminal.
- the unit for providing acoustic models within the network server can generate speaker independent acoustic models based on the received textual transcriptions.
- the interface of the mobile terminal can be configured such that it allows speaker dependent or speaker independent acoustic models of the spoken utterances to be transmitted to the network server.
- the interface of the network server can be configured such that it allows the acoustic models to be received from the mobile terminal.
- the unit for providing acoustic models of the network server can store the received acoustic models permanently or temporarily.
- the unit for providing acoustic models may thus be a memory.
- the acoustic models may be transferred from the network server to the mobile terminal from which the acoustic models have been received or to a further mobile terminal.
- the network server may be used as a backup means.
- the network server may perform a backup of the acoustic models or further information like voice prompts stored in the mobile terminal in certain time intervals.
- the mobile terminal may comprise a database for storing textual transcriptions of the spoken utterances.
- the textual transcriptions can be input by the user, e.g. by means of keys of the mobile terminal. This may be done in context with the creation of entries for a personal telephone book or of command words.
- the textual transcriptions can also be pre-defined and pre-stored prior to the sale of the mobile terminal. Pre-defined textual transcriptions may e.g. relate to specific command words.
- the mobile terminal can comprise an acoustic model database for storing acoustic models generated within the mobile terminal or received from the network server.
- both databases are configured such that for each pair of textual transcription and corresponding acoustic model there exists a link between the textual transcription and the corresponding acoustic model.
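- the link can be pictured as a shared index key over the databases; a minimal sketch follows (the dictionaries and field names are illustrative only, not the patent's data layout):

```python
# Minimal sketch of the index link: the same index ties a textual
# transcription to its acoustic model (and later to its voice prompt).

textual_transcriptions = {1: "Tom", 2: "Stefan"}   # database for textual transcriptions
acoustic_models: dict[int, object] = {}            # database for acoustic models

def store_model(index: int, model: object) -> None:
    # a model may only be stored under the index of an existing transcription
    assert index in textual_transcriptions
    acoustic_models[index] = model

store_model(1, [[0.1, 0.9], [0.4, 0.2], [0.7, 0.3]])
print(textual_transcriptions[1])   # following the link back: "Tom"
```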
- the acoustic models are generated by the network server based on phonetic transcriptions of the textual transcriptions.
- the phonetic transcriptions are e.g. created with the help of a pronunciation database which constitutes the network server's vocabulary of phonetic transcription units like phonemes or triphones.
- Single phonetic transcription units are concatenated to form the phonetic transcription of a specific textual transcription.
- the speaker independent or speaker dependent acoustic models are generated by translating the phonetic transcription units into the corresponding speaker independent or speaker dependent phonetic recognition units which are in a format that can be analyzed by the automatic speech recognizer of the mobile terminal.
- the network server's vocabulary of phonetic recognition units may be stored in a recognition database of the network server.
- the network server can further comprise a speech synthesizer for generating a voice prompt of a textual transcription received from a mobile terminal.
- the voice prompt is generated using the same phonetic transcription which is used to build a corresponding acoustic model. Therefore, the pronunciation database can be shared by both the speech synthesizer and the unit for generating the speaker independent acoustic model.
- the voice prompt can be generated by translating the textual transcription into phonetic synthesizing units.
- the network server's vocabulary of phonetic synthesizing units may e.g. be contained in a synthesis database of the network server.
- the voice prompt may be transmitted from the network server to the mobile terminal and may be received by the mobile terminal via its interface.
- the voice prompt received from the network server may then be stored in a voice prompt database of the mobile terminal.
- a recognized user utterance may also form the basis for a voice prompt. Consequently, the voice prompt can be generated within the mobile terminal using the recognized user utterance.
- the speech synthesizer and the synthesis database of the network server can be omitted and the complexity and the cost of the network server can be considerably decreased.
- the interface of the mobile terminal can be configured such that it allows voice prompts to be transmitted from the mobile terminal to the network server and to be received from the network server.
- the interface of the network server can be configured such that it allows voice prompts to be received from the mobile terminal and to be transmitted to the mobile terminal.
- the network server further comprises a voice prompt database for storing the voice prompts permanently or temporarily. Consequently, the voice prompts which have been generated either within the mobile terminal or within the network server can be loaded from the voice prompt database within the network server to a mobile terminal any time it is desired. Thus, a set of voice prompts has to be generated only once for a plurality of mobile terminals.
- the voice prompts can be used for generating an acoustic feedback upon recognition of a spoken utterance by the automatic speech recognizer of the mobile terminal. Therefore, the mobile terminal can further comprise components for outputting an acoustic feedback for a recognized utterance.
- the mobile terminal may further comprise components for outputting a visual feedback for a recognized utterance.
- the visual feedback can e.g. consist of displaying the textual transcription which corresponds to the recognized utterance.
- the database for the textual transcriptions is arranged on a physical carrier which is removably connectable to the mobile terminal.
- the physical carrier can e.g. be a subscriber identity module (SIM) card which is also used for storing personal information.
- by means of the SIM card, a mobile terminal can be personalized.
- the SIM card may also at least partly comprise further databases, like the mobile terminal's database for voice prompts or for acoustic models.
- the invention can be implemented both as a hardware solution and as a computer program product comprising program code portions for performing the individual steps of the method when the computer program product is run on a computer system.
- the computer program product may be stored on a computer readable recording medium like a data carrier attached to or removable from the computer.
- FIG. 1 shows a schematic diagram of a first embodiment of a mobile terminal according to the invention.
- FIG. 2 shows a schematic diagram of the mobile terminal according to FIG. 1 in communication with a first embodiment of a network server according to the invention.
- FIG. 3 shows a schematic diagram of a second embodiment of a mobile terminal according to the invention.
- FIG. 4 shows a schematic diagram of a second embodiment of a network server according to the invention.
- FIG. 5 shows a schematic diagram of a third embodiment of a network server according to the invention.
- in FIG. 1, a schematic diagram of a first embodiment of a mobile terminal in the form of a mobile telephone 100 with voice dialing functionality according to the invention is illustrated.
- the mobile telephone 100 comprises an automatic speech recognizer 110 which receives a signal corresponding to a spoken utterance of a user from a microphone 120 .
- the automatic speech recognizer 110 is further in communication with a database 130 which contains all acoustic models to be compared for automatic speech recognition by the automatic speech recognizer 110 with the spoken utterances received via the microphone 120 .
- the mobile telephone 100 additionally comprises a component 140 for generating an acoustic feedback for a recognized spoken utterance.
- the component 140 for outputting the acoustic feedback is in communication with a voice prompt database 150 for storing voice prompts.
- the component 140 generates an acoustic feedback based on voice prompts contained in the database 150 .
- the component 140 for outputting an acoustic feedback is further in communication with a loudspeaker 160 which plays back the acoustic feedback received from the component 140 for outputting the acoustic feedback.
- the mobile telephone 100 depicted in FIG. 1 also comprises a SIM card 170 on which a further database 180 for storing textual transcriptions is arranged.
- the SIM card 170 is removably connected to the mobile telephone 100 and contains a list with several textual transcriptions of spoken utterances to be recognized by the automatic speech recognizer 110 .
- the database 180 is configured as a telephone book and contains a plurality of telephone book entries in the form of names which are each associated with a specific telephone number. As can be seen from the drawing, the first telephone book entry relates to the name “Tom” and the second telephone book entry relates to the name “Stefan”.
- the textual transcriptions of the database 180 are configured as ASCII character strings.
- the textual transcription of the first telephone book entry consists of the three characters “T”, “O” and “M”.
- each textual transcription of the database 180 has a unique index.
- the textual transcription “Tom”, e.g., has the index “1”.
- the database 180 for storing the textual transcriptions is in communication with a component 190 for outputting a visual feedback.
- the component 190 for outputting the visual feedback is configured to display the textual transcription of a spoken utterance recognized by the automatic speech recognizer 110 .
- the three databases 130 , 150 , 180 of the mobile telephone 100 are in communication with an interface 200 of the mobile telephone 100 .
- the interface 200 serves for transmitting the textual transcriptions contained in the database 180 to a network server and for receiving from the network server an acoustic model as well as a voice prompt for each textual transcription transmitted to the network server.
- the interface 200 in the mobile telephone 100 can be separated internally into two blocks not shown in FIG. 1.
- a first block is responsible for accessing, in read and write mode, the acoustic model database 130 , the voice prompt database 150 and the textual transcription database 180 .
- the second block realizes the transmission of the data comprised within the databases 130 , 150 , 180 to the network server 300 using a protocol description which guarantees a loss-free and fast transmission of the data.
- Another requirement on such a protocol is a certain level of security.
- the protocol should be designed in such a way that it is independent from the underlying physical transmission medium, such as e.g. infrared (IR), Bluetooth, GSM, etc.
- any kind of protocol (proprietary or standardized) fulfilling the above requirements could be used.
- An example for an appropriate protocol is the recently released SyncML protocol which synchronizes information stored on two devices even when the connectivity is not guaranteed. Such a protocol would meet the necessary requirements to exchange voice prompts, acoustic models, etc. for speech driven applications in any mobile terminal.
- Each textual transcription is transmitted from the mobile telephone 100 to the network server together with the corresponding index of the textual transcription.
- each acoustical model and each voice prompt are transmitted from the network server to the mobile telephone 100 together with the index of the corresponding textual transcription.
- the acoustic models as well as the voice prompts received from the network server are stored in the corresponding databases 130 and 150 together with their indices.
- Each index of the three databases 130 , 150 , 180 can be interpreted as a link between a textual transcription, its corresponding acoustical model and its corresponding voice prompt.
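- a minimal sketch of such an indexed exchange is given below; it uses a plain dictionary payload instead of an actual protocol such as SyncML, and all field names are invented:

```python
# Hypothetical sketch of the indexed exchange between terminal and server.
# A real implementation would run over a loss-free, transport-independent
# sync protocol (e.g. SyncML); the payload here is mocked as a dictionary.

def upload_transcriptions(phone_book: dict[int, str]) -> dict:
    """Terminal side: send each textual transcription with its index."""
    return {"textual_transcriptions": [
        {"index": i, "text": t} for i, t in phone_book.items()
    ]}

def store_downloads(response: dict, acoustic_models: dict, voice_prompts: dict) -> None:
    """Terminal side: file each received model and prompt under the index
    of its corresponding textual transcription."""
    for entry in response.get("acoustic_models", []):
        acoustic_models[entry["index"]] = entry["model"]
    for entry in response.get("voice_prompts", []):
        voice_prompts[entry["index"]] = entry["prompt"]

request = upload_transcriptions({1: "Tom", 2: "Stefan"})
print(request["textual_transcriptions"][0])   # {'index': 1, 'text': 'Tom'}
```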
- in FIG. 2, a network system comprising the mobile telephone 100 depicted in FIG. 1 and a network server 300 is illustrated.
- the network server 300 is configured to communicate with a plurality of mobile telephones 100 .
- only one mobile telephone 100 is exemplarily shown in FIG. 2.
- the network server 300 depicted in FIG. 2 comprises an interface 310 for receiving the textual transcriptions from the mobile terminal 100 and for transmitting the corresponding acoustic model and the corresponding voice prompt to the mobile telephone 100 .
- the interface 310 is structured in two blocks: a protocol driver block towards the (e.g. wireless) connection, and an access block which transfers data to locations like databases, processing means etc. in the network server 300 .
- the blocks are not shown in FIG. 2.
- the interface 310 of the network server 300 is in communication with a unit 320 for providing acoustic models and a speech synthesizer 330 .
- the unit 320 receives input from a recognition database 340 containing phonetic recognition units and a pronunciation database 350 containing phonetic transcription units.
- the speech synthesizer 330 receives input from the pronunciation database 350 and a synthesis database 360 containing phonetic synthesizing units.
- as in FIG. 1, the mobile telephone 100 comprises a SIM card 170 with a database 180 containing indexed textual transcriptions like “Tom” and “Stefan”.
- the SIM card 170 further comprises a database containing indexed telephone numbers relating to the textual transcriptions contained in the database 180 .
- the database containing the telephone numbers is not depicted in the drawing.
- the mobile telephone 100 transmits the textual transcriptions contained in the database 180 via the interface 200 to the network server 300 .
- the connection between the mobile telephone 100 and the network server 300 is either a wireless connection operated e.g. according to a GSM, a UMTS, a Bluetooth or an IR standard, or a wired connection.
- the unit 320 for providing acoustic models and the speech synthesizer 330 of the network server 300 receive the indexed textual transcriptions via the interface 310 .
- the unit 320 then translates each textual transcription into its phonetic transcription.
- the phonetic transcription consists of a sequence of phonetic transcription units like phonemes or triphones.
- the phonetic transcription units are loaded into the unit 320 from the pronunciation database 350 .
- based on the sequence of phonetic transcription units corresponding to a specific textual transcription, the unit 320 then generates a speaker dependent or speaker independent acoustic model corresponding to that textual transcription.
- this is done by translating each phonetic transcription unit of the sequence of phonetic transcription units into its corresponding speaker dependent or speaker independent phonetic recognition unit.
- the phonetic recognition units are contained in the recognition database 340 in a form that can be analyzed by the automatic speech recognizer 110 of the mobile telephone 100 , e.g., in the form of feature vectors.
- An acoustic model is thus generated by concatenation of a plurality of phonetic recognition units in accordance with the sequence of phonetic transcription units.
- concurrently with the generation of an acoustic model, the speech synthesizer 330 generates a voice prompt for each textual transcription received from the mobile telephone 100 . First of all, the speech synthesizer 330 generates a phonetic transcription of each textual transcription. This is done in the same manner as explained above in context with the unit 320 for providing acoustic models. Moreover, the same pronunciation database 350 is used. Due to the fact that the pronunciation database 350 is used both for generating the acoustic models and the voice prompts, synthesis errors during the creation of voice prompts can be avoided. If, e.g., the German word “Bibelried” is synthesized with two vowels “i” and “e” in “Bibel” instead of a long “i”, this could immediately be heard by the user and corrected.
- based on the sequence of phonetic transcription units which constitutes the phonetic transcription, the speech synthesizer 330 generates a voice prompt by loading, for each phonetic transcription unit comprised in the sequence, the corresponding phonetic synthesizing unit from the synthesis database 360 . The thus obtained phonetic synthesizing units are then concatenated to form the voice prompt of a textual transcription.
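- the following sketch illustrates this concatenation; the phonetic units and the waveform snippets standing in for phonetic synthesizing units are invented placeholders, not the patent's data:

```python
# Sketch of voice prompt generation on the server: the same pronunciation
# lookup that drives acoustic model building is reused, which keeps models
# and prompts consistent.

PRONUNCIATION_DB = {"stefan": ["sh", "t", "e", "f", "a", "n"]}  # toy data

SYNTHESIS_DB = {   # phonetic unit -> mocked waveform samples
    "sh": [0.0, 0.2], "t": [0.9, -0.1], "e": [0.3, 0.3],
    "f": [-0.2, 0.4], "a": [0.5, 0.0], "n": [0.1, 0.1],
}

def voice_prompt(textual_transcription: str) -> list[float]:
    """Concatenate phonetic synthesizing units along the phonetic transcription."""
    samples: list[float] = []
    for unit in PRONUNCIATION_DB[textual_transcription.lower()]:
        samples.extend(SYNTHESIS_DB[unit])
    return samples

print(voice_prompt("Stefan"))
```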
- each acoustic model and each voice prompt is provided with the index of the corresponding textual transcription.
- the indexed speaker independent acoustic models and the indexed voice prompts are then transmitted to the mobile telephone 100 via the interface 310 of the network server 300 .
- at the mobile telephone 100 , the indexed speaker independent acoustic models and indexed voice prompts are received via the interface 200 and are loaded into the corresponding databases 130 , 150 .
- the database 130 for the acoustic models and the database 150 for the voice prompts are filled.
- a telephone call can be set up by means of a spoken utterance.
- a user has to speak an utterance corresponding to a textual transcription contained in the database 180 , e.g. “Stefan”. This spoken utterance is converted by the microphone 120 into a signal which is fed into the automatic speech recognizer 110 .
- the acoustic models are stored in the database 130 as a sequence of feature vectors.
- the automatic speech recognizer 110 analyzes the signal from the microphone 120 corresponding to the spoken utterance in order to obtain the feature vectors thereof. This process is called feature extraction.
- the automatic speech recognizer 110 then matches the feature vectors of the spoken utterance “Stefan” with the reference vectors stored in the database 130 for each textual transcription. Thus, pattern matching takes place.
- since the database 130 contains an acoustic model corresponding to the spoken utterance “Stefan”, a recognition result in the form of the index “2”, which corresponds to the textual transcription “Stefan”, is output from the automatic speech recognizer 110 to both the component 140 for outputting an acoustic feedback and the component 190 for outputting a visual feedback.
- the component 140 for outputting an acoustic feedback loads the voice prompt corresponding to the index “2” from the database 150 and generates an acoustic feedback corresponding to the synthesized word “Stefan”.
- the acoustic feedback is played back by the loudspeaker 160 .
- the component 190 for outputting a visual feedback loads the textual transcription corresponding to the index “2” from the database 180 and outputs a visual feedback by displaying the character sequence “Stefan”.
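- the recognition and feedback steps described above can be sketched as follows; the frame-wise Euclidean distance is a deliberate simplification of the pattern matching (a real recognizer would use e.g. dynamic time warping or HMM decoding), and all data are invented:

```python
import math

# Sketch of the terminal-side flow: match the extracted feature vectors
# against every stored acoustic model and use the winning index to drive
# the acoustic and visual feedback.

def distance(a: list[list[float]], b: list[list[float]]) -> float:
    if len(a) != len(b):                 # crude penalty for length mismatch
        return float("inf")
    return sum(math.dist(x, y) for x, y in zip(a, b))

def recognize(features, acoustic_models: dict) -> int:
    """Return the index of the best-matching acoustic model."""
    return min(acoustic_models, key=lambda i: distance(features, acoustic_models[i]))

acoustic_models = {1: [[0.1, 0.9], [0.4, 0.2]], 2: [[0.5, 0.5], [0.2, 0.6]]}
textual_transcriptions = {1: "Tom", 2: "Stefan"}

index = recognize([[0.5, 0.4], [0.2, 0.7]], acoustic_models)
print("display:", textual_transcriptions[index])   # visual feedback
# the voice prompt stored under the same index would be played back here
```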
- the user may now confirm the acoustic and visual feedback and a call may be set up based on the telephone number which has the index “2”.
- the acoustic and the visual feedback can be confirmed e.g. by pressing a confirmation key of the mobile telephone 100 or by speaking a further utterance relating to a confirmation command word like “yes” or “call”.
- Acoustic models and voice prompts for the confirmation command word and for other command words can be generated in the same manner as described above in respect to creating speaker dependent and speaker independent acoustic models and as will be described below in respect to creating speaker dependent acoustic models.
- in an alternative embodiment, the voice prompts stored in the database 150 are not generated by the network server 300 but within the mobile telephone 100 .
- the computational and memory resources of the network server 300 can thus be considerably decreased since the speech synthesizer 330 and the synthesis database 360 can be omitted.
- a voice prompt for a specific textual transcription can be generated within the mobile telephone 100 based on a spoken utterance recognized by the automatic speech recognizer 110 .
- the first recognized utterance corresponding to the specific textual transcription is used for generating the corresponding voice prompt for the database 150 .
- a voice prompt generated for a specific textual transcription is permanently stored in the database 150 for voice prompts only if the automatic speech recognizer 110 can find a corresponding acoustic model and if the user confirms this recognition result e.g. by setting up a call. Otherwise, the voice prompt is discarded.
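- a sketch of this retention rule follows (the function and its arguments are hypothetical):

```python
# The first recognized utterance for an entry becomes a candidate voice
# prompt, but it is kept only once the user confirms the recognition
# result (e.g. by setting up the call); otherwise it is discarded.

voice_prompts: dict[int, list[float]] = {}

def handle_recognition(index: int, utterance: list[float], user_confirmed: bool) -> None:
    if index in voice_prompts:
        return                       # a prompt already exists for this entry
    if user_confirmed:
        voice_prompts[index] = utterance
    # not confirmed: the candidate prompt is simply dropped

handle_recognition(2, [0.1, 0.3, -0.2], user_confirmed=True)
```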
- since the recognition database 340 and the synthesis database 360 may be provided on the side of the network server 300 , the mobile telephone 100 can, in the case of speaker independent acoustic models, be kept language and country independent.
- the network server 300 comprises a plurality of pronunciation databases, recognition databases and synthesis databases, each database being language specific.
- a user of the mobile telephone 100 may select a specific language code within the mobile telephone 100 .
- This language code is transmitted together with the textual transcriptions to the network server 300 which can thus generate language dependent and speaker independent acoustic models and voice prompts based on the language code received from the mobile telephone 100 .
- the language code received by the network server 300 may also be used to download language specific acoustic or visual user guidances from the network server 300 to the mobile telephone 100 .
- the user guidance may e.g. inform a user how to operate the mobile telephone 100 .
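- selecting the language specific databases by the received language code might look as follows; the language codes and pronunciations are invented examples:

```python
# Sketch of language-dependent database selection on the server: the
# language code transmitted with the textual transcriptions picks one of
# several language specific pronunciation databases.

PRONUNCIATION_DBS = {
    "de": {"stefan": ["sh", "t", "e", "f", "a", "n"]},   # toy German entry
    "en": {"stefan": ["s", "t", "e", "f", "@", "n"]},    # toy English entry
}

def phonetic_transcription(text: str, language_code: str) -> list[str]:
    """Look the word up in the pronunciation database for the given language."""
    return PRONUNCIATION_DBS[language_code][text.lower()]

print(phonetic_transcription("Stefan", "de"))
```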
- the acoustic models have been generated by the network server 300 in a speaker dependent or speaker independent manner and the voice prompts have been either synthesized speaker independently within the network server 300 or recorded speaker dependently within the mobile telephone 100 .
- the database 130 for acoustic models may also comprise both speaker independent and speaker dependent acoustic models.
- Speaker independent acoustic models may e.g. be generated by the network server 300 or be pre-defined and pre-stored in the mobile telephone 100 .
- Speaker dependent acoustic models may be generated as will be described below in more detail.
- the database 150 for voice prompts may comprise both speaker independent voice prompts generated e.g. within the network server 300 and speaker dependent voice prompts generated using the first recognized utterance corresponding to a specific textual transcription as described above.
- the databases 340 and 350 of the network server 300 can be configured as speaker dependent databases.
- in FIG. 3, a second embodiment of a mobile telephone 100 according to the invention is illustrated.
- the mobile telephone 100 depicted in FIG. 3 has a construction similar to that of the mobile telephone 100 depicted in FIG. 1.
- the mobile telephone 100 comprises an interface 200 for communicating with a network server.
- the mobile telephone 100 depicted in FIG. 3 further comprises a training unit 400 in communication with both the automatic speech recognizer 110 and the database 130 for acoustic models.
- the mobile telephone 100 of FIG. 3 comprises a coding unit 410 in communication with both the microphone 120 and the database 150 for voice prompts and a decoding unit 420 in communication with both the database 150 for voice prompts and the component 140 for generating an acoustic feedback.
- the training unit 400 and the coding unit 410 of the mobile telephone 100 depicted in FIG. 3 are controlled by a central controlling unit not depicted in FIG. 3 to create speaker dependent acoustic models and speaker dependent voice prompts as follows.
- the mobile telephone 100 is controlled such that a user is prompted to utter each keyword like each proper name or each command word to be used for voice controlling the mobile telephone 100 one or several times.
- the automatic speech recognizer 110 inputs each training utterance to the training unit 400 , which works as a voice activity detector suppressing silence or noise intervals at the beginning and at the end of each utterance.
- the thus filtered utterance is then acoustically output to the user for confirmation. If the user confirms the filtered utterance, the training unit 400 stores a corresponding speaker dependent acoustic model in the database 130 for acoustic models in the form of a sequence of reference vectors.
- one training utterance selected by the user is input from the microphone 120 to the coding unit 410 , which codes this utterance in a format that requires only few memory resources in the database 150 for voice prompts.
- the utterance is then stored in the database 150 for voice prompts.
- the voice prompt database 150 is filled with speaker dependent voice prompts.
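- a sketch of this training flow is given below; the energy threshold standing in for the voice activity detection of the training unit 400 and all sample values are invented:

```python
# Sketch of speaker dependent training: trim silence at both ends of the
# training utterance, let the user confirm the result, then store the
# model under the keyword's index.

def trim_silence(frames: list[float], threshold: float = 0.05) -> list[float]:
    """Drop low-energy frames at the beginning and end of the utterance."""
    start, end = 0, len(frames)
    while start < end and abs(frames[start]) < threshold:
        start += 1
    while end > start and abs(frames[end - 1]) < threshold:
        end -= 1
    return frames[start:end]

def train_keyword(index: int, utterance: list[float], user_confirms: bool,
                  acoustic_models: dict[int, list[float]]) -> bool:
    filtered = trim_silence(utterance)
    # in the terminal, `filtered` would be played back for confirmation here
    if user_confirms:
        acoustic_models[index] = filtered     # stored as reference vectors
        return True
    return False

models: dict[int, list[float]] = {}
train_keyword(1, [0.0, 0.01, 0.4, 0.6, 0.3, 0.0], True, models)
print(models[1])   # [0.4, 0.6, 0.3]
```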
- a coded voice prompt loaded from the database 150 is decoded by the decoding unit 420 and passed on in a decoded format to the component 140 for generating an acoustic feedback.
- the mobile telephone 100 depicted in FIG. 3 can be controlled by spoken utterances as described above in context with the mobile telephone 100 depicted in FIG. 1.
- the lifecycle of a mobile telephone 100 is rather short. If a user buys a new mobile telephone, he usually simply removes the SIM card 170 with the database 180 for textual transcriptions from the old mobile telephone and inserts it into the new mobile telephone. Thus, the textual transcriptions, e.g. a telephone book, are immediately available in the new mobile telephone. However, the database 130 for acoustic models and the database 150 for voice prompts remain empty.
- the user thus has to repeat the same time consuming training process he already encountered with the old mobile telephone in order to fill the database 130 for acoustic models and the database 150 for voice prompts.
- the time consuming training process for filling the databases 130 , 150 can be omitted. This is due to the provision of the interface 200 for transmitting contents of the database 130 for acoustic models and the database 150 for voice prompts to a network server and for receiving the corresponding contents from the network server later on.
- a network server 300 configured to communicate with the mobile telephone 100 depicted in FIG. 3 is illustrated in FIG. 4.
- the network server 300 of FIG. 4 possesses the same components and the same functionality as the network server 300 of FIG. 2.
- in addition, the network server 300 of FIG. 4 comprises three databases 370 , 380 , 390 in communication with the interface 310 .
- the database 370 works as a unit for providing acoustic models and is adapted to temporarily store acoustic models.
- the database 380 is adapted to temporarily store voice prompts and the database 390 is adapted to temporarily store textual transcriptions.
- the user of the mobile telephone 100 initiates a transfer process upon which the speaker dependent acoustic models and the speaker dependent voice prompts generated within the mobile terminal 100 are transferred by means of the interface 200 to the network server 300 .
- the acoustic models and the voice prompts from the mobile terminal 100 are received by the network server 300 via the interface 310 . Thereafter, the received acoustic models are stored in the database 370 and the received voice prompts are stored in the database 380 of the network server 300 . Again, as already mentioned in context with the network system depicted in FIG. 2, the acoustic models and the voice prompts are transmitted from the mobile telephone 100 together with their respective indices and are stored in the databases 370 , 380 of the network server 300 in an indexed manner. This allows each acoustic model and each voice prompt stored in the network server 300 to be assigned a corresponding textual transcription later on.
- if the user buys a new mobile telephone 100 , the database 130 for acoustic models and the database 150 for voice prompts will at first be empty. However, the user of the new mobile telephone 100 may initiate a transfer process upon which the empty database 130 for acoustic models and the empty database 150 for voice prompts are filled with the indexed contents of the corresponding databases 370 and 380 in the network server 300 .
- the indexed acoustic models in the database 370 for acoustic models and the indexed voice prompts in the database 380 for voice prompts are transmitted from the interface 310 of the network server to the new mobile terminal 100 and transferred via the interface 200 of the mobile terminal 100 into the corresponding databases 130 , 150 of the mobile terminal 100 .
- the time consuming process of newly training speaker dependent acoustic models and speaker dependent voice prompts for a new mobile telephone can thus be omitted if the training process has been conducted for the old mobile telephone.
- the textual transcriptions of the database 180 for textual transcriptions of the mobile telephone 100 can likewise be transferred from the mobile telephone 100 to the network server 300 and stored at least temporarily in the further database 390 for textual transcriptions of the network server 300 . Consequently, if a user buys a new mobile telephone with a new SIM card 170 , i.e., with a SIM card 170 having an empty database 180 for textual transcriptions, the user need not create the database 180 for textual transcriptions anew. He may simply fill the database 180 for textual transcriptions of the mobile telephone 100 with the contents of the corresponding database 390 of the network server 300 as outlined above.
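- the backup and restore cycle can be sketched as follows (the server is mocked by in-memory dictionaries; all names are illustrative):

```python
# Sketch of the transfer processes: an old terminal backs its indexed
# databases up to the server databases 370/380/390, and a new terminal
# restores them, so no retraining is necessary.

server = {"models": {}, "prompts": {}, "texts": {}}

def backup(terminal: dict) -> None:
    """Old terminal: push all indexed databases to the server."""
    for key in server:
        server[key].update(terminal[key])

def restore(terminal: dict) -> None:
    """New terminal: fill the empty databases from the server copies."""
    for key in server:
        terminal[key].update(server[key])

old_phone = {"models": {1: "model-tom"}, "prompts": {1: "prompt-tom"}, "texts": {1: "Tom"}}
new_phone = {"models": {}, "prompts": {}, "texts": {}}
backup(old_phone)
restore(new_phone)
print(new_phone["texts"])   # {1: 'Tom'} - databases arrive pre-filled
```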
- the network server 300 depicted in FIG. 4 can be used both with the mobile terminal 100 of FIG. 1 which preferably operates based on speaker independent acoustic models as well as with the mobile terminal 100 of FIG. 3 which is configured to operate with speaker dependent acoustic models.
- the network server 300 of FIG. 4 may also be configured such that it may only be used with the mobile telephone 100 of FIG. 3.
- the complexity of the network server 300 can be drastically decreased.
- the network server 300 of FIG. 4 need not comprise all the databases 370 , 380 , 390 for storing the acoustic models, the voice prompts, and the textual transcriptions, respectively.
- the network server 300 comprises at least the database 370 for acoustic models.
- as an example, the network server 300 of FIG. 4 may be part of a Wireless Local Area Network (WLAN) that is installed in a public building.
- the database 370 for acoustic models initially contains a plurality of acoustic models relating to words (utterances) which typically occur in context with the public building. If, for example, the public building is an arts museum, the acoustic models stored in the database 370 may relate to utterances like “Impressionism”, “Expressionism”, “Picasso”, and the like.
- when a visitor carrying a mobile terminal 100 as depicted in FIG. 3 enters the museum, his mobile terminal 100 automatically establishes a connection to the WLAN server 300 .
- This connection may for example be a connection according to the Bluetooth standard.
- the mobile terminal 100 then automatically downloads the specific acoustic models stored in the WLAN server's database 370 into its own corresponding database 130 or into a further database not depicted in FIG. 3.
- the mobile terminal 100 is now configured to recognize spoken utterances relating to specific museum-related terms.
- the mobile terminal 100 automatically forwards the recognition result to the WLAN server 300 .
- the WLAN server 300 transmits specific information relating to the recognition result to the mobile terminal 100 to be displayed at the mobile terminal's display 190 .
- the information received from the WLAN server 300 may for example relate to the place where a specific exhibit is located or to information about a specific exhibit.
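- the museum scenario might be sketched like this; the vocabulary entries, indices and the exhibit information are invented examples:

```python
# Sketch of the WLAN scenario: on entering the network the terminal pulls
# the building-specific acoustic models, recognizes utterances locally and
# forwards the recognition result, receiving exhibit information back.

WLAN_SERVER_MODELS = {10: "model-impressionism", 11: "model-picasso"}  # database 370
WLAN_SERVER_INFO = {11: "Picasso works are shown in hall 3."}          # invented

def on_enter_wlan(terminal_models: dict) -> None:
    """Automatic download of the location-specific acoustic models."""
    terminal_models.update(WLAN_SERVER_MODELS)

def on_recognized(index: int) -> str:
    """Forward the recognition result index; return the server's answer."""
    return WLAN_SERVER_INFO.get(index, "no information available")

terminal_models: dict[int, str] = {}
on_enter_wlan(terminal_models)
print(on_recognized(11))
```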
- a third embodiment of a network server 300 according to the invention is depicted in FIG. 5.
- the network server 300 depicted in FIG. 5 allows name dialing even with telephones which have no name dialing capability.
- with a telephone of the plain old telephone system (POTS), for example, which itself has no name dialing capability, the user simply dials into the network server 300 via the interface 310 .
- the connection between the POTS telephone and the network server 300 may be a wired or a wireless connection.
- the network server 300 depicted in FIG. 5 comprises three databases 370 , 380 , 390 with the same functionality as the corresponding databases of the network server 300 depicted in FIG. 4.
- the network server 300 of FIG. 5 further comprises an automatic speech recognizer 500 in communication with both the interface 310 and the database 370 for acoustic models and a speech output system 510 in communication with the database 380 for voice prompts.
- the databases 370 and 380 of the network server 300 have been filled with acoustic models and voice prompts as described above in context with the network server 300 of FIG. 4.
- if a user now dials with a POTS telephone into the network server 300 depicted in FIG. 5, he has full name dialing capabilities.
- a spoken utterance of the user may be recognized by the automatic speech recognizer 500 based on the acoustic models comprised in the database 370 for acoustic models which constitutes the automatic speech recognizer's 500 vocabulary.
- the speech output system 510 loads the correspondingly indexed voice prompt from the database 380 and outputs this voice prompt via the interface 310 to the POTS telephone. If the user acknowledges that the voice prompt is correct, a call may be set up based on the indexed telephone number which corresponds to the voice prompt and which is stored in the database 390 for textual transcriptions.
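- the server-side name dialing flow can be sketched as follows; recognition itself is mocked by a fixed index, and the databases are illustrative dictionaries:

```python
from typing import Optional

# Sketch of name dialing from a POTS telephone: the utterance recognized
# against database 370 yields an index, the indexed voice prompt from
# database 380 is played back, and on acknowledgment the indexed number
# linked via database 390 is dialed.

db_380_prompts = {2: "prompt-stefan"}       # voice prompts
db_390_numbers = {2: "+49 170 0000002"}     # linked telephone numbers (invented)

def name_dial(recognized_index: int, user_acknowledges: bool) -> Optional[str]:
    prompt = db_380_prompts[recognized_index]   # played back to the caller
    print("playing:", prompt)
    if user_acknowledges:
        return db_390_numbers[recognized_index]  # set up the call
    return None

print(name_dial(2, user_acknowledges=True))
```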
- the network server 300 may further be configured as a backup network server which performs a backup of one or more of a mobile telephone's databases in regular time intervals. It is thus ensured that a user of a POTS telephone always has access to the most recent content of a mobile telephone's databases.
- the POTS telephone can be used for training the network server 300 in regard to the creation of e.g. speaker dependent acoustic models or speaker dependent voice prompts which are to be stored in the corresponding databases 370 , 380 .
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
EP00127467.9 | 2000-12-14 | |
EP00127467A (EP1215661A1) | 2000-12-14 | 2000-12-14 | Voice-controlled portable terminal
Publications (1)
Publication Number | Publication Date
---|---
US20020091511A1 | 2002-07-11
Family
ID=8170674
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
US10/013,493 (abandoned) | Mobile terminal controllable by spoken utterances | 2000-12-14 | 2001-12-13
Country Status (6)
Country | Link
---|---
US (1) | US20020091511A1
EP (2) | EP1215661A1
AT (1) | ATE298918T1
AU (1) | AU2002233237A1
DE (1) | DE60111775T2
WO (1) | WO2002049005A2
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10777197B2 (en) | 2017-08-28 | 2020-09-15 | Roku, Inc. | Audio responsive device with play/stop and tell me something buttons |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11062710B2 (en) | 2017-08-28 | 2021-07-13 | Roku, Inc. | Local and cloud speech recognition |
US11062702B2 (en) | 2017-08-28 | 2021-07-13 | Roku, Inc. | Media system with multiple digital assistants |
US11145298B2 (en) | 2018-02-13 | 2021-10-12 | Roku, Inc. | Trigger word detection with multiple digital assistants |
US11153472B2 (en) | 2005-10-17 | 2021-10-19 | Cutting Edge Vision, LLC | Automatic upload of pictures from a camera |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006097598A1 (fr) * | 2005-03-16 | 2006-09-21 | France Telecom | Method for automatically creating voice tags in an address book |
DE602005015984D1 (de) * | 2005-11-25 | 2009-09-24 | Swisscom Ag | Method for personalizing a service |
DE102013216427B4 (de) * | 2013-08-20 | 2023-02-02 | Bayerische Motoren Werke Aktiengesellschaft | Device and method for vehicle-based speech processing |
DE102013219649A1 (de) * | 2013-09-27 | 2015-04-02 | Continental Automotive Gmbh | Method and system for creating or supplementing a user-specific language model in a local data store connectable to a terminal |
US9953632B2 (en) * | 2014-04-17 | 2018-04-24 | Qualcomm Incorporated | Keyword model generation for detecting user-defined keyword |
US9959863B2 (en) * | 2014-09-08 | 2018-05-01 | Qualcomm Incorporated | Keyword detection using speaker-independent keyword models for user-designated keywords |
US9836527B2 (en) * | 2016-02-24 | 2017-12-05 | Google Llc | Customized query-action mappings for an offline grammar model |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19751123C1 (de) * | 1997-11-19 | 1999-06-17 | Deutsche Telekom Ag | Device and method for speaker-independent voice dialing by name for telecommunications terminals |
US6195641B1 (en) * | 1998-03-27 | 2001-02-27 | International Business Machines Corp. | Network universal spoken language vocabulary |
US6314165B1 (en) * | 1998-04-30 | 2001-11-06 | Matsushita Electric Industrial Co., Ltd. | Automated hotel attendant using speech recognition |
DE19918382B4 (de) * | 1999-04-22 | 2004-02-05 | Siemens Ag | Creating a reference model directory for a voice-controlled communication device |
2000
- 2000-12-14 EP EP00127467A patent/EP1215661A1/de not_active Withdrawn

2001
- 2001-12-10 EP EP01984819A patent/EP1348212B1/de not_active Expired - Lifetime
- 2001-12-10 WO PCT/EP2001/014493 patent/WO2002049005A2/en not_active Application Discontinuation
- 2001-12-10 AU AU2002233237A patent/AU2002233237A1/en not_active Abandoned
- 2001-12-10 AT AT01984819T patent/ATE298918T1/de not_active IP Right Cessation
- 2001-12-10 DE DE60111775T patent/DE60111775T2/de not_active Expired - Lifetime
- 2001-12-13 US US10/013,493 patent/US20020091511A1/en not_active Abandoned
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5892813A (en) * | 1996-09-30 | 1999-04-06 | Matsushita Electric Industrial Co., Ltd. | Multimodal voice dialing digital key telephone with dialog manager |
US6363348B1 (en) * | 1997-10-20 | 2002-03-26 | U.S. Philips Corporation | User model-improvement-data-driven selection and update of user-oriented recognition model of a given type for word recognition at network server |
US6408272B1 (en) * | 1999-04-12 | 2002-06-18 | General Magic, Inc. | Distributed voice user interface |
US6463413B1 (en) * | 1999-04-20 | 2002-10-08 | Matsushita Electrical Industrial Co., Ltd. | Speech recognition training for small hardware devices |
US6662163B1 (en) * | 2000-03-30 | 2003-12-09 | Voxware, Inc. | System and method for programming portable devices from a remote computer system |
US20020065656A1 (en) * | 2000-11-30 | 2002-05-30 | Telesector Resources Group, Inc. | Methods and apparatus for generating, updating and distributing speech recognition models |
Also Published As
Publication number | Publication date |
---|---|
EP1348212B1 (de) | 2005-06-29 |
EP1215661A1 (de) | 2002-06-19 |
DE60111775T2 (de) | 2006-05-04 |
DE60111775D1 (de) | 2005-08-04 |
EP1348212A2 (de) | 2003-10-01 |
WO2002049005A2 (en) | 2002-06-20 |
AU2002233237A1 (en) | 2002-06-24 |
WO2002049005A3 (en) | 2002-08-15 |
ATE298918T1 (de) | 2005-07-15 |
Similar Documents
Publication | Title |
---|---|
EP1348212B1 (de) | Voice-controlled portable terminal |
US7689417B2 (en) | Method, system and apparatus for improved voice recognition |
KR100804855B1 (ko) | Method and apparatus for a voice-controlled foreign language translator |
EP1215660B1 (de) | Voice-controlled portable terminal |
TWI281146B (en) | Apparatus and method for synthesized audible response to an utterance in speaker-independent voice recognition |
JP2927891B2 (ja) | Voice dialing device |
USRE41080E1 (en) | Voice activated/voice responsive item locater |
US7392184B2 (en) | Arrangement of speaker-independent speech recognition |
US20060190260A1 (en) | Selecting an order of elements for a speech synthesis |
JP2002006882A (ja) | Voice input communication system, user terminal and center system |
CN101385073A (zh) | Communication device with speaker-independent speech recognition |
KR100380829B1 (ko) | System and method for operating a conversational interface using an agent, and recording medium storing the program source |
US20020049597A1 (en) | Audio recognition method and device for sequence of numbers |
JP4049456B2 (ja) | Voice information utilization system |
JP3018759B2 (ja) | Speaker-dependent speech recognition device |
KR200219909Y1 (ko) | Mobile phone terminal capable of interactive voice control |
KR20030090863A (ko) | Hands-free system using a speech recognition module or a Bluetooth module |
JP2000184077A (ja) | Door phone system |
JP3975343B2 (ja) | Telephone number registration system, telephone, and telephone number registration method |
JPH098894A (ja) | Cordless telephone with speech recognition |
JPH10276462A (ja) | Message transmission system and message transmission method |
KR20020044629A (ko) | Speech recognition method and system with updatable commands |
JP2002073605A (ja) | Method for interpreting telephone speech |
JP2000122692A (ja) | Speech recognition device |
JPH11112633A (ja) | Mobile telephone |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL), SWEDEN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HELLWIG, KARL;DOBLER, STEFAN;OIJER, FREDRIK;REEL/FRAME:012716/0596;SIGNING DATES FROM 20020211 TO 20020219 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |