WO2011150730A1 - Method and device for mixed input of English and another script - Google Patents
Method and device for mixed input of English and another script (一种用于英文与另一种文字混合输入的方法和设备)
- Publication number
- WO2011150730A1 (PCT/CN2011/073551)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- input
- matching
- english
- sequence
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/018—Input/output arrangements for oriental characters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/12—Use of codes for handling textual entities
- G06F40/126—Character encoding
- G06F40/129—Handling non-Latin characters, e.g. kana-to-kanji conversion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/274—Converting codes to words; Guess-ahead of partial word inputs
Definitions
- the present invention relates to the field of computers, and more particularly to techniques for text input on a computer. Background Art
- the existing computer Chinese input methods provide users with Chinese input in full pinyin, double pinyin, and five-stroke (Wubi) modes.
- to mix in English, the user needs to switch the input mode, for example by pressing the "Caps Lock" key or another shortcut key combination on the computer keyboard, which makes mixed Chinese-English input inconvenient and time-consuming. Summary of the invention
- a method for allowing a user to perform mixed input in English and another language on the user equipment side, comprising the steps of:
- the at least one input term option includes the English word or a translation in the other language
- a method for allowing a user to perform mixed input in English and another language in a network device, comprising the steps of: A. receiving the user's input sequence sent by the user equipment operated by the user; B. matching the input sequence against the network vocabulary in the network device to obtain one or more matching input term options, wherein, when the input sequence contains a sequence of an English word, at least one input term option includes the English word or its translation in the other language
- a user equipment for allowing a user to perform mixed input in English and another language comprising:
- a first obtaining device configured to acquire a user input sequence
- querying means configured to perform a matching query against the vocabulary to obtain one or more matching input term options, wherein, when the input sequence includes a sequence of an English word, at least one input term option includes the English word or its translation in the other language;
- a network device for assisting a user in performing mixed input in English and another language on a user device, comprising:
- a second receiving device configured to receive an input sequence of the user sent by the user equipment operated by the user
- querying means configured to perform a matching query against the network vocabulary in the network device to obtain one or more matching input term options, wherein, when the input sequence includes a sequence of an English word, at least one input term option includes the English word or its translation in the other language;
- a second sending device configured to send the one or more matching input term options back to the user equipment.
- a system for allowing a user to perform mixed input in English and another language, comprising a user device according to one aspect of the present invention as described above and a network device according to another aspect of the present invention.
- the invention allows the user, when entering text, to freely use Chinese input methods such as double pinyin, full pinyin, and five-stroke, and also to type English words directly during text entry, obtaining input options that are entirely Chinese or that combine Chinese and English words; this eliminates the need to switch between the Chinese and English input modes, thereby greatly improving input efficiency.
- FIG. 1 is a schematic diagram of a user equipment that allows a user to perform mixed input in English and another language in accordance with an aspect of the present invention
- FIG. 2 is a schematic diagram showing a user equipment that allows a user to perform mixed input in English and another language in accordance with a preferred embodiment of the present invention
- FIG. 3 is a diagram showing a user equipment and a network device that allow a user to perform mixed input in Chinese and English according to another aspect of the present invention
- FIG. 4 shows a schematic diagram of user equipment and network equipment that allows a user to perform mixed input in both Chinese and English, in accordance with a preferred embodiment of the present invention.
- FIG. 5 is a diagram showing a user equipment and a network device that allow a user to perform mixed input in Chinese and English according to another preferred embodiment of the present invention.
- FIG. 6 shows a flow diagram of allowing a user to perform mixed input in English and another language in a user device in accordance with an aspect of the present invention
- Figure 7 is a flow chart showing a user device and a network device cooperated to allow a user to perform mixed input in English and another language in accordance with another aspect of the present invention
- FIG. 8 illustrates a flow diagram of a user device and a network device cooperating to allow a user to perform mixed input in English and another language in accordance with a preferred embodiment of the present invention
- FIG. 1 shows a user device 1 for allowing a user to perform mixed input in English and another language in accordance with an aspect of the present invention.
- the user equipment 1 can be any electronic product that can interact with the user through a keyboard, a remote controller, a touch panel, or a voice control device, such as a computer, a smart phone, a PDA, a game machine, or an IPTV.
- the user equipment 1 includes a first obtaining means 11, a querying means 12, a providing means 13, and a storage means 14 for storing a local thesaurus (hereinafter referred to as the thesaurus 14 for the sake of brevity).
- the first obtaining device 11 acquires, in real time, the input sequence the user is entering through any interactive device capable of human-computer interaction with the user.
- the interactive device can be a keyboard, a remote control, a touch pad or a voice control device, and the like. Taking the keyboard as an example, when the user taps a key in the keyboard to input, the first acquiring device 11 acquires the key sequence of the user tapping in real time.
- the querying device 12 performs a matching query of the user input sequence provided by the first obtaining device 11 against the thesaurus 14 to obtain one or more matching input term options, wherein, when the input sequence contains a sequence of an English word, at least one input term option includes the English word or its translation in the other language. That is, the present invention allows the user to input Chinese characters in full pinyin, double pinyin, five-stroke, and so on, and also to type English words directly.
- the querying device 12 can also automatically segment English words for the user, without the user having to switch to the English input state or close the input method. For example, when the user types "windowsgame", the querying device 12 recognizes that "windows" and "game" are adjacent English words and inserts a space character " " between them, thereby obtaining "windows game"; it then performs the matching query described above to obtain term combinations such as "windows game", "window game", and "window software game".
- the querying device 12 can segment multiple adjacent English words in the user input sequence by a similar mechanism; for example, for the user input sequence "ilovethisgame", it provides the user with "i love this game" among multiple term combinations.
- the querying device 12 also supports automatic segmentation of English words during mixed Chinese-English input. For example, when the user types "woxihuan windowsgame", the querying device 12 recognizes only "windows" and "game" as adjacent English words and inserts a space between "windows" and "game", while matching the remainder as pinyin.
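The English-word segmentation behavior described above (splitting "windowsgame" into "windows game" and "ilovethisgame" into "i love this game") can be sketched as a dictionary-backed backtracking search. The word list and function name below are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical stand-in for the English entries of the thesaurus 14.
ENGLISH_WORDS = {"windows", "window", "game", "i", "love", "this"}

def segment_english(sequence):
    """Split a run of letters into known English words, or return None.

    Longer prefixes are tried first so that "windows" wins over
    "window" followed by an unsegmentable "s"; a None result means
    the run should be treated as pinyin instead.
    """
    if not sequence:
        return []
    for end in range(len(sequence), 0, -1):
        prefix = sequence[:end]
        if prefix in ENGLISH_WORDS:
            rest = segment_english(sequence[end:])
            if rest is not None:
                return [prefix] + rest
    return None
```

A real input method would run such a routine over the alphabetic runs of the key sequence and join the result with spaces, leaving unsegmentable runs to the pinyin matcher.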
- the providing means 13 then provides the obtained one or more matching input term options to the user, in a certain order, for selection and specific input. For example, they may be displayed to the user in an input window of the display: the input sequence may be shown in one column and multiple term options in the next column for the user to select. Preferably, only one line of term options is displayed at a time; the number of options per line may be a default or user-settable, and the user can page to the previous or next line of options by pressing a specific function key.
- the specific function keys can be, for example, "+" and "-".
- the first obtaining means 11, the querying means 12, and the providing means 13 operate continuously. Specifically, the first obtaining means 11 acquires the user's input sequence in real time and continuously supplies it to the querying device 12, for example "w", "wo" ... "wosh" ... "woshiyong" ... "woshiyongwin" ...
- the querying device 12 likewise performs the matching query in real time on each input sequence continuously provided by the first obtaining device 11, continuously obtaining the term options corresponding to each sequence: for example, "w" corresponds to "1 me, 2 ⁇, 3 grip, 4 nest"; "wosh" corresponds to "1 my institute, 2 my province, 3 my life, 4 I said, 5 handshake"; and "woshiyongwindows" corresponds to "1 I use window software, 2 I use Windows, 3 I use windows, 4 I use window".
- here "continuous" means the operation proceeds until the user finally selects a term option. For example, the user may pause briefly (e.g., 0.5 second) after typing the key sequence "woai" and then continue typing subsequent keys.
- when the querying device 12 performs the matching query on the user input sequence and recognizes that the input sequence contains an English word, after obtaining the Chinese corresponding to the rest of the input sequence it also uses the association between the Chinese words and the English word to obtain the most appropriate term combinations; the association may be an ordinary semantic association, or a selection combination preset in the thesaurus 14 or found in the user input history. For example, for the input sequence "woshiyongwindows", the querying device 12 obtains the various Chinese translations of "windows" ("window", "windows", "window software") and the Chinese "I use" / "I am using" corresponding to "woshiyong", and uses the meanings of the Chinese and English words and their translations to determine the preferred options: "1 I use window software, 2 I use Windows, 3 I use windows, 4 I use window".
- the querying device 12 also assigns a priority to each input term option when the matching query in the lexicon according to the user input sequence yields a plurality of input term options.
- the providing means 13 displays the plurality of matched input term options provided by the querying means 12 to the user in priority order, with higher-priority options displayed first.
- the querying device 12 can determine the priority according to the selection frequency of each term option in the user's input history and the semantic relevance between the words in each term option.
- the query device 12 can also determine the priority according to the input preference set by the user.
- the querying device 12 can also determine the region in which the user is located from the IP address of the current user equipment, and thereby determine the priority of region-related vocabulary in the input sequence; for example, when the user input sequence is "woxihuanbund", the translations of "bund" include "1 embankment, 2 terminal, 3 alliance, 4 the (Shanghai) Bund".
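The three ranking signals just described — selection frequency in the input history, user-set preferences, and a region match derived from the IP address — could be combined in a simple additive score. All weights, field names, and data below are invented for this sketch and are not specified by the disclosure:

```python
def rank_options(options, history_freq, category_weight, user_region, region_boost=10):
    """Order candidate term options by a combined priority score.

    Each option is a dict with "text" and "category" keys and an
    optional "region" key; the weights here are illustrative only.
    """
    def score(option):
        s = history_freq.get(option["text"], 0)          # past selection frequency
        s += category_weight.get(option["category"], 0)  # user-set preference
        if option.get("region") == user_region:          # e.g. "bund" -> the Bund in Shanghai
            s += region_boost
        return s
    return sorted(options, key=score, reverse=True)
```

With a Shanghai user, the region-specific translation of "bund" would then outrank the generic ones, matching the example above.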
- the user equipment 1 can store the above-mentioned user input history, the input preferences set by the user, and the various associations between vocabulary items in local storage. Preferably, the user equipment 1 can also update the saved user input history, input preferences, inter-vocabulary relevance, and similar information. As shown in FIG. 2, the user equipment 1 further comprises a second obtaining device 15 and an updating device 16. The second obtaining means 15 acquires, through further interaction with the user, the user's selection among the plurality of input term options provided by the providing device 13.
- the updating device 16 updates the thesaurus, the user input history, the associations between vocabulary items, and so on according to the user selection provided by the second obtaining device 15; for example, new term options may be added to the thesaurus 14, and the priorities of existing term options and the user characteristics may be updated. More preferably, if the user equipment can access the Internet, the second obtaining device 15 can also search for new term combinations on the Internet and update the thesaurus 14 accordingly.
- Fig. 2 shows a preferred embodiment of a user equipment according to the invention, wherein the inquiry means 12 further comprises a judging means 121 and a matching inquiry means 122.
- the judging means 121 queries, according to the input sequence, the vocabulary containing the English words to judge whether the input sequence contains an English word.
- when the matching querying means 122 is informed by the judging means that the input sequence contains an English word, it queries the Chinese input corresponding to the rest of the sequence and then provides, as term options, various combinations of the Chinese with the English word or its various Chinese translations.
- the matching querying device also obtains the most suitable term combinations according to the association between the Chinese and the English words; the association may be an ordinary semantic association, or a selection combination preset in the thesaurus 14 or found in the user input history.
- for example, the matching querying means 122 obtains the various Chinese translations of "windows" ("window", "windows", "window software") and the Chinese "I use" / "I am using" corresponding to "woshiyong", and uses the Chinese and English words and their translations to determine the preferred options: "1 I use window software; 2 I use Windows; 3 I use windows; 4 I use window".
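The division of labor between the judging means 121 (detect an English word in the sequence) and the matching querying means 122 (match the remainder as pinyin) might look like the following; the dictionary contents and the simplifying assumption that the English word is a trailing run are illustrative, not taken from the disclosure:

```python
# Hypothetical English-word vocabulary used by the judging step.
ENGLISH_WORDS = {"windows", "game", "bund"}

def split_mixed(sequence):
    """Return (pinyin_part, english_word); english_word is None if absent.

    Mirrors the judging means 121: scan for a trailing run that is a
    known English word, so the remainder can go to the pinyin matcher.
    """
    for start in range(len(sequence)):
        candidate = sequence[start:]
        if candidate in ENGLISH_WORDS:
            return sequence[:start], candidate
    return sequence, None
```

For "woshiyongwindows" this yields the pinyin part "woshiyong" plus the English word "windows", which the matching step can then combine with each translation.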
- the matching query means 122 obtains the priority of each input term option when the matching query is performed in the thesaurus according to the user input sequence to obtain a plurality of input term options.
- the providing means 13 displays the plurality of matched input term options provided by the matching querying means 122 in the term column to the user in priority order, with higher-priority options displayed first.
- the matching querying device 122 can determine the priority according to the selection frequency of each term option in the user history and the semantic relevance between the words in each term option.
- the matching querying device 122 can also determine the priority according to the input preferences set by the user; for example, when the user sets the preferences as 1) category priority: computer vocabulary > electronics vocabulary > ordinary vocabulary, and 2) language priority: Chinese > English, then for the input sequence "woshiyongwindows" it can judge that "I use window software" has the highest priority, "I use Windows" the second, and "I use windows" the third.
- the matching querying device 122 can also determine the region in which the user is located from the IP address of the current user equipment, and thereby determine the priority of region-related vocabulary in the input sequence. For example, when the user input sequence is "woxihuanbund", the translations of "bund" include "1 embankment, 2 terminal, 3 alliance, 4 the (Shanghai) Bund". If the querying device learns from the IP address of the user equipment that it is currently located in Shanghai, China, it can determine that the translation "the Bund" has the highest priority, and can therefore provide the following input term options: "1 I like to go to the Bund; 2 I like the Bund; 3 I like the pier; 4 I like the embankment; 5 I like the alliance".
- FIG. 3 illustrates an embodiment of a user equipment and a network device that allow a user to perform mixed input in both Chinese and English according to another aspect of the present invention, in which the user device 1 is connected to the network device 2 via a network and the two cooperate to allow the user to perform mixed Chinese-English input on the user device.
- the network can be the Internet, an intranet, and so on; the Internet is taken as an example below.
- the user equipment 1 can be any electronic product that can interact with the user through a keyboard, a remote controller, a touch panel, or a voice control device, such as a computer, a smart phone, a PDA, a game machine, or an IPTV.
- the network device 2 can be a network server, a small host, a mainframe, or the like.
- the user equipment 1 includes a first obtaining device 11, a first transmitting device 17, a first receiving device 18, and a providing device 13.
- the network device 2 includes a second receiving device 21, a querying device 22, a second transmitting device 23, and a storage device 24 for holding a network vocabulary (hereinafter referred to as the network vocabulary 24 for the sake of brevity).
- the first obtaining device 11 in the user device 1 acquires, in real time, the input sequence the user is entering through any interactive device, such as a keyboard, remote controller, touch panel, or voice control device. Taking the keyboard as an example, when the user taps keys on the keyboard, the first obtaining device 11 acquires the user's keystroke sequence in real time.
- the first transmitting device 17 in the user device 1 transmits the user input sequence provided by the first obtaining device 11 to the network device 2 in real time and continuously.
- the second receiving device 21 in the network device 2 receives the input sequence and provides it to the querying device 22.
- the querying device 22 performs the matching query against the network vocabulary 24 to obtain one or more matching input term options, wherein, when the input sequence includes a sequence of an English word, at least one of the input term options includes the English word or its translation in the other language. That is, the present invention allows the user to input Chinese characters in full pinyin, double pinyin, five-stroke, and so on, and also to type English words directly.
- the second transmitting device 23 in the network device 2 also transmits the input term option provided by the query device 22 to the user device 1 in real time and continuously.
- the first receiving device 19 in the user device 1 receives the input term options in real time and continuously and provides them to the providing device 13; the providing device 13 then provides the obtained one or more matching input term options to the user, in a certain order, for selection and specific input. For example, they may be displayed to the user in an input window of the display: the input sequence may be shown in one column and multiple term options in the next column for the user to select.
- preferably, only one line of term options is displayed at a time; the number of options per line may be a default or user-settable, and the user can page to the previous or next line of options by pressing a specific function key.
- the specific function keys can be, for example, "+” and "-".
- the first obtaining means 11, the first transmitting means 17, and the first receiving means 19 in the user equipment 1, together with the second receiving means 21, the querying means 22, and the second transmitting means 23 of the network device 2, cooperate continuously.
- the first obtaining means 11 acquires the input sequence of the user in real time and continuously supplies it to the first transmitting means 17, for example "w”, "wo” ... “wosh” ... “woshiyong” ... "woshiyongwin” ... "woaiwindows", the first transmitting device also transmits various input sequences to the network device 2 in real time and continuously.
- the second receiving device 21 in the network device 2 receives the various input sequences sent by the user device 1 and also provides them to the query device 22 in real time and continuously.
- the querying device 22 then performs the matching query in real time and continuously on the user input sequences provided by the second receiving device 21, continuously obtaining the term options corresponding to each sequence: for example, "w" corresponds to "1 me, 2 ⁇, 3 grip, 4 nest"; "wosh" corresponds to "1 my institute, 2 my province, 3 my life, 4 I said, 5 handshake"; and "woshiyongwindows" corresponds to "1 I use window software, 2 I use Windows, 3 I use windows, 4 I use window".
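The continuous exchange described above — the user device forwarding the growing key sequence after every keystroke and the network device answering with options for each prefix — can be sketched as follows. The network round trip is simulated by a local lookup, and the lexicon contents are invented; a real implementation would use an HTTP or socket request to network device 2:

```python
# Stand-in for the network vocabulary 24; real contents would be far larger.
NETWORK_LEXICON = {
    "w": ["me"],
    "wo": ["me", "nest"],
}

def query_network(sequence):
    """Simulated request to the querying device 22 in network device 2."""
    return NETWORK_LEXICON.get(sequence, [])

def stream_input(keys):
    """Yield (sequence_so_far, options) after every keystroke,
    as the first transmitting device 17 would send each prefix."""
    sequence = ""
    for key in keys:
        sequence += key
        yield sequence, query_network(sequence)
```

Streaming per keystroke is what lets the candidate column update while the user is still typing, instead of waiting for a complete sequence.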
- when the querying device 22 in the network device 2 performs the matching query in the network vocabulary 24 according to the user input sequence and recognizes that the input sequence contains an English word, after obtaining the Chinese corresponding to the rest of the input sequence it also uses the association with the English word to obtain the most suitable term combinations; the association may be an ordinary semantic association, or a selection combination in the network lexicon 24 or in the user input history. For example, the querying device 22 obtains the various Chinese translations of "windows" ("window", "windows", "window software") and the Chinese "I use" / "I am using" corresponding to "woshiyong", and uses the meanings of the Chinese and English words and their translations to determine the preferred options: "1 I use window software, 2 I use Windows, 3 I use windows, 4 I use window".
- the querying device 22 is also capable of automatically segmenting English words for the user, without the user having to switch to the English input state or close the input method.
- for example, when the user types "windowsgame", the querying device 22 recognizes that "windows" and "game" are adjacent English words and inserts a space character " " between them, thereby obtaining "windows game"; it then performs the matching query described above to obtain term combinations such as "windows game", "window game", and "window software game".
- the querying device 22 can segment multiple adjacent English words in the user input sequence by a similar mechanism; for example, for the user input sequence "ilovethisgame", it provides the user with "i love this game" among multiple term combinations.
- the querying device 22 also supports automatic segmentation of English words during mixed Chinese-English input. For example, when the user types "woxihuan windowsgame", the querying device 22 recognizes only "windows" and "game" as adjacent English words and inserts a space between "windows" and "game", while matching the remainder as pinyin.
- the querying means 22 in the network device 2 also assigns a priority to each input term option when the matching query in the lexicon according to the user input sequence yields a plurality of input term options.
- the providing device 13 in the user device 1 receives the input term options from the network device 2 through the first receiving device 19 and displays the plurality of matched input term options in the term column to the user in priority order, with higher-priority options displayed first.
- the querying device 22 of the network device 2 can acquire user characteristics according to the user's login ID, such as the user's input history, the user-specific vocabulary, the personal preferences set by the user, and user attribute information.
- the user features may be stored in the network device 2 or in other network devices connected to the network device 2.
- the querying device 22 in the network device 2 can determine the priority according to the selection frequency of each term option in the user input history and the semantic relevance between the words in each term option.
- the priority can also be determined according to the input preferences set by the user; for example, when the user sets the preferences as 1) category priority: computer vocabulary > electronics vocabulary > ordinary vocabulary, and 2) language priority: Chinese > English, then for the input sequence "woshiyongwindows" it can judge that "I use window software" has the highest priority, "I use Windows" the second, and "I use windows" the third.
- the querying device 22 can also determine the region in which the user device is located from the IP address of the current user device 1, and thereby determine the priority of region-related vocabulary in the input sequence. For example, when the user input sequence is "woxihuanbund", the translations of "bund" include "1 embankment, 2 terminal, 3 alliance, 4 the (Shanghai) Bund". If the querying device 22 learns from the IP address of the user equipment that it is currently located in Shanghai, China, it can determine that "the Bund" has the highest priority, and can therefore provide the following input term options: "1 I like to go to the Bund; 2 I like the Bund; 3 I like the pier; 4 I like the embankment; 5 I like the alliance".
- the network device 2 can also update information such as saved user input history, input preferences, and inter-vocabulary relevance.
- the user equipment 1 further includes a second obtaining device 15 and a third transmitting device 18; the network device 2 further includes a second receiving device 25 and an updating device 26.
- the second obtaining means 15 in the user equipment 1 acquires, through further interaction with the user, the user's selection among the plurality of input term options provided by the providing device 13, and the selection is transmitted by the third transmitting device 18 to the network device 2.
- the update device 26 updates the thesaurus and the user input history, the association between the vocabularies, and the like according to the user selection received by the second receiving device 25.
- a new term option and an existing term option may be added to the network thesaurus 24.
- Priority user characteristics.
- the network device 2 may further comprise a third acquisition means (not shown) which may also search for new combinations of terms on the Internet and update the network vocabulary 24 and the like.
- Figure 4 shows a preferred embodiment of the invention in which the query means 22 of the network device 2 further comprises a decision means 221 and a matching query means 222.
- the judging device 221 performs a query in the network vocabulary containing the English words according to the input sequence to determine whether the English word is included in the input sequence.
- when the matching querying means 222 is informed by the judging means that the input sequence includes an English word, it queries the Chinese corresponding to the rest of the input sequence and then provides, as term options, various combinations of the Chinese with the English word or its various Chinese translations.
- the matching query device also obtains the most suitable combination of terms according to the association between the Chinese and the English words, and the association may be a normal semantic association, or may be preset in the network thesaurus 24 or The user enters a selection combination in the history.
- for example, the matching querying means 222 obtains the various Chinese translations of "windows" ("window", "windows", "window software") and the Chinese "I use" / "I am using" corresponding to "woshiyong", and uses the Chinese and English words and their translations to determine the preferred options: "1 I use window software; 2 I use Windows; 3 I use windows; 4 I use window".
- the matching query means 222 also obtains the priority of each input term option when a matching query is made in the thesaurus according to the user input sequence to obtain a plurality of input term options.
- the second sending device 23 sends the multiple input term options and their priority information to the user equipment 1; the first receiving device 19 of the user equipment 1 receives them and provides them to the providing device 13, which displays the plurality of matched input term options in the term column to the user in priority order, with higher-priority options displayed first.
- the matching querying device 222 in the network device 2 can determine the priority according to the selection frequency of each term option in the user history and the semantic relevance between the words in each term option.
- the matching querying device 222 can also determine the priority according to the input preferences set by the user; for example, when the user sets the preferences as 1) category priority: computer vocabulary > electronics vocabulary > ordinary vocabulary, and 2) language priority: Chinese > English, then for the input sequence "woshiyongwindows" it can judge that "I use window software" has the highest priority, "I use Windows" the second, and "I use windows" the third.
- the matching querying device 222 can also determine the region in which the user is located from the IP address of the current user equipment, and thereby determine the priority of region-related vocabulary in the input sequence; for example, when the user input sequence is "woxihuanbund", the translations of "bund" include "1 embankment, 2 terminal, 3 alliance, 4 the (Shanghai) Bund".
- the user equipment 1 itself also includes a query device 12 and a memory 14 for storing a local vocabulary (hereinafter referred to as the local vocabulary 14).
- the first obtaining device 11 may first provide the user input sequence to the query device 12 of the user equipment 1 for matching query.
- the specific query process is as described above with reference to FIGS. 1-2 and is not repeated here.
- Alternatively, the first obtaining device 11 can send the user input sequence to the network device 2 through the third sending device 18, where the query device 22 performs the matching query; the specific query process is as described above with reference to FIGS. 3-4 and is not repeated here.
- User equipment 1 also includes a merging device 20, which merges the one or more input term options provided by its own query device 12 with the one or more input term options provided by the query device 22 of network device 2, deletes the duplicate options, determines the priority order of the merged term options according to certain rules, and then passes them to the providing device 13, which presents them to the user in the corresponding priority order.
- Preferably, the input term options provided by network device 2 are considered more accurate and are therefore given higher priority than those obtained by the local query.
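The merge-and-deduplicate behavior of a component like the merging device 20 might look like the following sketch. This is illustrative only, not the patent's implementation; network results are simply placed first to reflect their assumed higher priority.

```python
# Sketch: merge network and local candidate lists, network results first
# (assumed more accurate), duplicates removed while preserving order.
def merge_options(network_options, local_options):
    merged, seen = [], set()
    for text in network_options + local_options:
        if text not in seen:        # drop repeated options
            seen.add(text)
            merged.append(text)
    return merged

merged = merge_options(
    ["我使用Windows软件", "我使用Windows"],   # from the network query
    ["我使用Windows", "我使用窗户"],          # from the local query
)
```

The duplicate "我使用Windows" appears only once, in the position given by the higher-priority network list.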
- Figure 6 shows a flow diagram of a method according to one aspect of the present invention, illustrating the process by which a user performs mixed Chinese-English input through a user device.
- the user equipment 1 can be any electronic product that can interact with the user through a keyboard, a remote controller, a touch panel, or a voice control device, such as a computer, a smart phone, a PDA, a game machine, or an IPTV.
- a local vocabulary is saved in user device 1 (for brevity, hereinafter referred to as the lexicon).
- the user equipment 1 can carry out human-machine interaction with the user through any of the above, such as a keyboard, a remote controller, a touch panel, or a voice control device.
- Taking a keyboard as an example: when the user taps keys on the keyboard to input, the user device 1 acquires the tapped key sequence in real time.
- In step s2, the user equipment 1 performs a matching query in the thesaurus according to the acquired user input sequence to obtain one or more matching input term options; when the input sequence contains the sequence of an English word, at least one input term option includes that English word or its translation in the other language. That is, the present invention allows the user to input Chinese words using full pinyin, double pinyin, five-stroke (Wubi), etc., and also to input English words directly.
- Preferably, the user equipment 1 can also automatically segment English words for the user, without the user having to switch to an English input state or turn off the input method.
- For example, the user device 1 recognizes that "windows" and "game" are adjacent English words and inserts a space character " " between them, thereby obtaining "windows game"; it then performs the matching query described above to obtain term combinations such as "windows game", "Windows game", "window game", and "Windows software game".
- By a similar mechanism, the user equipment 1 can automatically segment multiple adjacent English words in the user input sequence; for example, for the input sequence "ilovethisgame", the user device 1 will provide the user with term combinations containing "i love this game".
- The user equipment 1 also supports automatic segmentation of English words during mixed Chinese-English input. For example, when the user taps keys to input "woxihuanwindowsgame", in step s2 the user device 1 recognizes "windows" and "game" as adjacent English words and inserts a space character between them.
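One plausible way to implement this automatic segmentation is greedy longest-prefix matching against an English word list. The sketch below is an assumption for illustration; the patent does not specify the segmentation algorithm, and the toy lexicon is invented.

```python
# Hypothetical sketch of the automatic segmentation step: split a run of
# letters into known English words by greedy longest-prefix matching.
ENGLISH_WORDS = {"i", "love", "this", "game", "windows"}  # toy lexicon

def segment(sequence):
    words, i = [], 0
    while i < len(sequence):
        for j in range(len(sequence), i, -1):      # try longest match first
            if sequence[i:j] in ENGLISH_WORDS:
                words.append(sequence[i:j])
                i = j
                break
        else:
            return None   # no complete segmentation found
    return " ".join(words)
```

For the examples in the text, `segment("windowsgame")` yields "windows game" and `segment("ilovethisgame")` yields "i love this game". A greedy matcher can fail on ambiguous runs, which is one reason the patent also lets the user insert separators manually.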
- Specifically, in step s2 the user equipment 1 first queries the part of the thesaurus containing English words according to the input sequence, to determine whether the input sequence includes English words.
- When the user equipment 1 judges that the user input sequence includes an English word, it further queries the Chinese corresponding to the rest of the input sequence, and then provides various combinations of the Chinese words with the English word, or with its various Chinese translations, as the term options.
- In step s3, the user equipment 1 provides the one or more matching input term options obtained by the query to the user in a certain order and format for selection and input. For example, they may be displayed in an input window bar on the display, with the input sequence shown in one row and the multiple term options included in the row below for the user to select.
- Preferably, only a certain number of term options are displayed in the term row at a time, and the user can press specific function keys, for example "+" and "-", to display the previous or next row of term options.
- Preferably, steps s1 through s3 are cycled continuously.
- That is, the user equipment 1 acquires the input sequence entered by the user in real time and continuously queries locally; for example, the user continuously inputs "w", "wo" ... "wosh" ... "woshiyong" ... "woshiyongwin" ... "woshiyongwindows",
- and the user equipment 1 performs matching queries in real time according to the continuously acquired user input sequence.
- In step s2, when the user equipment 1 performs the matching query according to the user input sequence and recognizes that the input sequence includes an English word, after the query obtains the Chinese corresponding to the rest of the input sequence, it also uses the association between the Chinese words and the English word to obtain the most suitable term combinations; the association can be a common semantic association, or a combination of choices saved in the thesaurus or in the user's input history.
- For example, for the input sequence "woshiyongwindows", user device 1 obtains the various Chinese translations of "windows" ("window", "Windows software", etc.) and obtains the Chinese "I use" corresponding to "woshiyong"; according to the meanings of the Chinese words, the English word, and its translations, user device 1 determines the following items as the preferred options: "1. I use Windows software; 2. I use windows; 3. I use Windows; 4. I use window".
- Preferably, in step s2 the user equipment 1 also obtains the priority of each input term option when the matching query in the thesaurus according to the user input sequence yields multiple options; subsequently, in step s3, the matched input term options are displayed to the user in the term column in order of priority, with higher-priority options displayed higher.
- the user equipment 1 may also perform the query in the thesaurus according to user characteristics to obtain the matching input term options, and may likewise determine the priority of each option according to those user characteristics.
- User characteristics include the user's input history, personal preference settings, user attributes, the user's address, and so on; user attributes include the user's occupation, gender, nationality, birthplace, age, interests, etc.
- For example, the user equipment 1 may determine the priority level according to the selection frequency of each term option in the user's input history and the semantic relevance between the terms within each option.
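A score combining selection frequency with pairwise semantic relevance, as just described, could be sketched like this. The counts, the relevance table, and the weighting are all invented for the example; the patent does not define a concrete scoring formula.

```python
# Illustrative scoring sketch: combine past selection frequency with a
# semantic-relevance score between adjacent words of each candidate.
SELECTION_COUNT = {"我使用Windows软件": 5, "我使用windows": 1}     # assumed history
RELEVANCE = {("使用", "Windows软件"): 0.9, ("使用", "windows"): 0.2}  # assumed table

def priority(option, words):
    freq = SELECTION_COUNT.get(option, 0)
    relevance = sum(RELEVANCE.get(pair, 0.0)
                    for pair in zip(words, words[1:]))   # adjacent word pairs
    return 0.5 * freq + relevance   # arbitrary weighting for the sketch

scores = {
    "我使用Windows软件": priority("我使用Windows软件", ["使用", "Windows软件"]),
    "我使用windows":     priority("我使用windows", ["使用", "windows"]),
}
```

Under these assumed numbers, the frequently chosen and semantically coherent option scores higher, so it would be displayed first.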
- the user equipment 1 can also determine the priority according to the input preference set by the user.
- The user equipment 1 can also determine the geographical location of the user equipment according to its current IP address, so that region-related vocabulary in the input sequence can be given higher priority. For example, when the user input sequence is "woxihuanbund", the translations of "bund" include "1. embankment; 2. pier; 3. alliance; 4. the (Shanghai) Bund". When user equipment 1 determines from its IP address that it is currently located in Shanghai, China, it can conclude that the translation of "bund" as "the Bund" has the highest priority, yielding the following term options: "1. I like to go to the Bund; 2. I like the Bund; 3. I like the pier; 4. I like the embankment; 5. I like the alliance".
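The region-aware ranking for "bund" might be sketched as follows. The IP-to-region table, the prefix lookup, and the region tags are all hypothetical; real systems would use a geolocation database.

```python
# Sketch: a (hypothetical) IP-to-region lookup boosts translations tagged
# with the user's region; untagged translations keep their original order.
REGION_BY_IP_PREFIX = {"116.226.": "Shanghai"}   # toy IP-to-region table
TRANSLATIONS = [
    {"text": "堤岸", "region": None},        # embankment
    {"text": "码头", "region": None},        # pier
    {"text": "同盟", "region": None},        # alliance
    {"text": "外滩", "region": "Shanghai"},  # the (Shanghai) Bund
]

def lookup_region(ip):
    for prefix, region in REGION_BY_IP_PREFIX.items():
        if ip.startswith(prefix):
            return region
    return None

def rank_translations(ip, translations):
    region = lookup_region(ip)
    # Stable sort: region matches (key False) come first.
    return sorted(translations, key=lambda t: t["region"] != region)

ranked = rank_translations("116.226.10.1", TRANSLATIONS)
```

With the assumed Shanghai IP prefix, "外滩" (the Bund) moves to the top; for an unknown IP the original order is kept.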
- user characteristics include, but are not limited to, the above.
- the user equipment 1 can store the above user input history, user-set input preferences, and the various associations between words in local storage. Preferably, the user device 1 can also update the saved user input history, input preferences, inter-vocabulary relevance, and similar information.
- Preferably, the user device 1 also obtains, through further interaction with the user, the user's selection among the provided input term options, and then updates the thesaurus, the user input history, the vocabulary associations, and so on according to the acquired selection; for example, it can add new term options to the thesaurus and adjust the priority ordering of existing options and the user characteristics. More preferably, in step s5 (not shown), when the user equipment has access to the Internet, the user equipment 1 may also search the Internet for new term combinations and update the thesaurus accordingly.
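The update step, recording a user's selection so it ranks higher on later queries, could be sketched as follows. The data structures (a selection counter plus a per-sequence option list) are assumptions for illustration, not the patent's storage format.

```python
# Sketch: when the user picks an option, record the selection so future
# queries for the same input sequence rank that option higher.
from collections import defaultdict

selection_history = defaultdict(int)
thesaurus = {"woshiyongwindows": ["我使用Windows", "我使用windows"]}

def record_selection(input_sequence, chosen):
    selection_history[chosen] += 1
    options = thesaurus.setdefault(input_sequence, [])
    if chosen not in options:
        options.append(chosen)            # add a new term combination
    # Re-rank: most frequently selected options first (stable for ties).
    options.sort(key=lambda o: -selection_history[o])

record_selection("woshiyongwindows", "我使用Windows软件")
record_selection("woshiyongwindows", "我使用Windows软件")
```

After two selections, the previously unknown combination "我使用Windows软件" is both stored in the thesaurus and promoted to the top of the option list.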
- FIG. 7 shows a flow diagram of a method in accordance with another aspect of the present invention, showing a process in which a user device is connected to a network device to allow a user to perform mixed input in English and Chinese.
- the user equipment 1 is connected to the network device 2 via a network, which may be the Internet or an intranet.
- the user equipment 1 can be any electronic product that can interact with the user through a keyboard, a remote controller, a touch pad, or a voice control device, such as a computer, a smart phone, a PDA, a game machine, or an IPTV.
- the network device 2 can be a network server, a minicomputer, a mainframe, etc., and stores a network vocabulary.
- the user equipment 1 can carry out human-machine interaction with the user through any of a keyboard, a remote controller, a touch panel, a voice control device, and the like.
- Taking a keyboard as an example: when the user taps keys on the keyboard to input, the user device 1 acquires the tapped key sequence in real time.
- step S2 the user equipment 1 transmits the acquired user input sequence to the network device 2 in real time and continuously.
- In step S3, the network device 2 performs a matching query in the network lexicon according to the received user input sequence to obtain one or more matching input term options; when the input sequence contains the sequence of an English word, at least one input term option includes that English word or its translation in the other language. That is, the present invention allows the user to input Chinese words using full pinyin, double pinyin, five-stroke (Wubi), etc., and also to input English words directly.
- Preferably, the network device 2 can also automatically segment English words for the user, without the user having to switch to an English input state or close the input method. For example, in step S3 the network device 2 recognizes that "windows" and "game" are adjacent English words, inserts a space character " " between them to obtain "windows game", and performs the matching query described above to obtain the corresponding term combinations.
- Similarly, in step S3 the network device 2 can automatically segment multiple adjacent English words in the user input sequence; for example, for the input sequence "ilovethisgame", the network device 2 will provide the user with term combinations containing "i love this game".
- In step S3, the network device 2 also supports automatic segmentation of English words during mixed Chinese-English input: when the user taps keys to input a mixed sequence, the network device 2 recognizes the adjacent English words and inserts a space character between them.
- The automatic segmentation of English words may not always fully satisfy the user's needs; in that case, the user can manually insert a separator, such as a single quotation mark, into the input sequence.
- Specifically, in step S3 the network device 2 first queries the part of the network lexicon containing English words according to the input sequence to determine whether the input sequence includes English words; when it judges that the input sequence does include an English word, it further queries the Chinese corresponding to the rest of the input sequence, and then provides various combinations of the Chinese words with the English word, or with its various Chinese translations, as the term options.
- The network device 2 also sends the input term options obtained by the query to the user equipment 1 in real time and continuously.
- The user equipment 1 then provides the received input term options to the user in a certain order and format for selection and input; for example, they are displayed in an input window bar on the display, with the input sequence shown in one row and the multiple term options included in the row below for the user to select.
- Preferably, only a certain number of term options are displayed in the term row at a time; this number can be a default or can be set by the user, and the user can press specific function keys, for example "+" and "-", to display the previous or next row of term options.
- Preferably, the above steps are cycled continuously without interruption: the user equipment 1 acquires the user's input sequence in real time and continuously sends it to the network device 2, for example as the user successively inputs "w", "wo", and so on.
- In step S3, when the network device 2 performs the matching query in the network lexicon according to the user input sequence and recognizes that the input sequence includes an English word, after the query obtains the Chinese corresponding to the rest of the input sequence, it also uses the association between the Chinese words and the English word to obtain the most suitable term combinations; the association can be a common semantic association. For example, for the input sequence "woshiyongwindows", the network device 2 queries the various Chinese translations of "windows".
- Preferably, in step S3 the network device 2 also obtains the priority of each input term option when the matching query in the thesaurus according to the user input sequence yields multiple options. Then, in step S7, the user equipment 1 displays the input term options received from the network device 2 to the user in the term column in order of priority, with higher-priority options displayed higher.
- Preferably, when the user logs in to the network device through the user device 1, in step S3 the network device 2 can acquire user characteristics according to the user's login ID, such as the user's input history, a user-specific user vocabulary, personal preferences set by the user, user attribute information, and the like.
- the user features may be stored in the network device 2 or in other network devices to which the network device 2 is connected.
- For example, the network device 2 may determine the priority level according to the selection frequency of each term option in the user's input history and the semantic relevance between the terms within each option. It may also determine the priority according to the input preferences set by the user: for example, when the user sets the input preferences as 1) priority: computer vocabulary > electronics vocabulary > common vocabulary and 2) priority: Chinese > English, then for the input sequence "woshiyongwindows" it can judge that "I use Windows software" has the highest priority, followed by "I use Windows" and then "I use windows".
- The network device 2 can also learn the current IP address of the user equipment 1 through their communication and thereby determine the region in which the user is located, so that region-related vocabulary in the input sequence can be given higher priority. For example, when the user input sequence is "woxihuanbund", the translations of "bund" include "1. embankment; 2. pier; 3. alliance; 4. the (Shanghai) Bund". When network device 2 determines from the user equipment's IP address that the user is currently in Shanghai, China, it can conclude that the translation of "bund" as "the Bund" has the highest priority, yielding the following term options: "1. I like to go to the Bund; 2. I like the Bund; 3. I like the pier; 4. I like the embankment; 5. I like the alliance".
- the network device 2 can also update information such as saved user input history, input preferences, and inter-vocabulary relevance.
- Preferably, the user device 1 obtains, through further interaction with the user, the user's selection among the provided input term options and transmits the selection to the network device; in step S10 (not shown), the network device 2 updates the thesaurus, the user input history, the associations between vocabulary items, and so on according to the received selection.
- For example, new term options may be added to the network thesaurus, and the priority ordering of existing term options and the user characteristics may be adjusted.
- the network device 2 can also search for a new combination of terms on the Internet and update the network vocabulary and the like.
- FIG. 8 shows another preferred embodiment of the present invention, in which the user equipment 1 itself also holds a local vocabulary and can synchronize it, at any time or periodically, with the user-specific user vocabulary in the network vocabulary of the network device 2.
- In step S4, the user equipment 1 performs a matching query in the local vocabulary according to the user input sequence; the specific query process is as described for step s2 above and is not repeated here.
- Steps S1 to S3 are as described above with reference to FIG. 5 and are not repeated here. Those skilled in the art should understand that steps S1 to S3 and step S4 may be performed in parallel; their completion times depend mainly on the processing speeds of the user equipment 1 and the network device 2 and on the network transmission delay between them.
- In step S6, the user equipment 1 merges the one or more input term options obtained by its own query with the one or more input term options from network device 2, deletes the duplicate options, and determines the priority order of the merged term options according to certain rules; then, in step S8, the input term options are provided to the user in that order for selection or further human-machine interaction.
- Preferably, the input term options provided by network device 2 are considered more accurate and are therefore given higher priority than those obtained by the local query.
- the network vocabulary in the network device 2 can also include the user-specific user vocabulary.
- The above description takes Chinese and English as an example. Those skilled in the art should understand that the present invention is equally applicable to mixed input of English with another script: it is only necessary to replace the Chinese input rules with the input rules of the other script, and to replace the corresponding thesaurus and user-set input preferences accordingly.
- the other language includes: Chinese, Korean, Japanese, French, German, Italian, and the like.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013512737A JP2013533996A (ja) | 2010-05-31 | 2011-04-29 | Method and apparatus used for mixed input of English text and another script |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010187267.8 | 2010-05-31 | ||
CN 201010187267 CN102063195B (zh) | 2010-04-06 | 2010-05-31 | Method and device for allowing a user to perform mixed Chinese and English input |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011150730A1 true WO2011150730A1 (zh) | 2011-12-08 |
Family
ID=45090747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2011/073551 WO2011150730A1 (zh) | 2010-05-31 | 2011-04-29 | 一种用于英文与另一种文字混合输入的方法和设备 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2013533996A (zh) |
WO (1) | WO2011150730A1 (zh) |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10607140B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105446977B (zh) * | 2014-06-26 | 2019-03-29 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040260536A1 (en) * | 2003-06-16 | 2004-12-23 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing language input mode and method and apparatus for automatically switching language input modes using the same |
CN1854998A (zh) * | 2005-04-18 | 2006-11-01 | Nokia Corporation | Method and device for mixed input of different types of character forms
CN1908863A (zh) * | 2005-08-07 | 2007-02-07 | Huang Jinfu | Bilingual mixed input method and mobile phone with dictionary function
CN101546228A (zh) * | 2009-05-07 | 2009-09-30 | Tencent Technology (Shenzhen) Co., Ltd. | Input method and apparatus for implementing English prompts
CN101943952A (zh) * | 2010-01-27 | 2011-01-12 | Beijing Sogou Technology Development Co., Ltd. | Method and input method system for mixed input of at least two languages
CN102012748A (zh) * | 2010-11-30 | 2011-04-13 | Harbin Institute of Technology | Sentence-level mixed Chinese and English input method
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03257662A (ja) * | 1990-03-08 | 1991-11-18 | Ricoh Co Ltd | Character string input system
JP3814000B2 (ja) * | 1995-11-17 | 2006-08-23 | JustSystems Corporation | Character string conversion apparatus and character string conversion method
JP2806452B2 (ja) * | 1996-12-19 | 1998-09-30 | Omron Corporation | Kana-kanji conversion apparatus and method, and recording medium
JP2006012188A (ja) * | 2005-08-15 | 2006-01-12 | Just Syst Corp | Document processing method and apparatus
- 2011
  - 2011-04-29 JP JP2013512737A patent/JP2013533996A/ja active Pending
  - 2011-04-29 WO PCT/CN2011/073551 patent/WO2011150730A1/zh active Application Filing
Cited By (290)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US11928604B2 (en) | 2005-09-08 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US8942986B2 (en) | 2006-09-08 | 2015-01-27 | Apple Inc. | Determining user intent based on ontologies of domains |
US9117447B2 (en) | 2006-09-08 | 2015-08-25 | Apple Inc. | Using event alert text as input to an automated assistant |
US8930191B2 (en) | 2006-09-08 | 2015-01-06 | Apple Inc. | Paraphrasing of user requests and results by automated digital assistant |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US10741185B2 (en) | 2010-01-18 | 2020-08-11 | Apple Inc. | Intelligent automated assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US8903716B2 (en) | 2010-01-18 | 2014-12-02 | Apple Inc. | Personalized vocabulary for digital assistant |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8892446B2 (en) | 2010-01-18 | 2014-11-18 | Apple Inc. | Service orchestration for intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10607141B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10984327B2 (en) | 2010-01-25 | 2021-04-20 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform
US10984326B2 (en) | 2010-01-25 | 2021-04-20 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10607140B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US11410053B2 (en) | 2010-01-25 | 2022-08-09 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
CN102830811A (zh) * | 2012-08-21 | 2012-12-19 | Beijing Xiaomi Technology Co., Ltd. | Content matching method and apparatus for switching input methods on a mobile terminal
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US10714117B2 (en) | 2013-02-07 | 2020-07-14 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US11636869B2 (en) | 2013-02-07 | 2023-04-25 | Apple Inc. | Voice trigger for a digital assistant |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9606986B2 (en) | 2014-09-29 | 2017-03-28 | Apple Inc. | Integrated word N-gram and class M-gram language models |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
US10930282B2 (en) | 2015-03-08 | 2021-02-23 | Apple Inc. | Competing devices responding to voice triggers |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10681212B2 (en) | 2015-06-05 | 2020-06-09 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10942703B2 (en) | 2015-12-23 | 2021-03-09 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11656884B2 (en) | 2017-01-09 | 2023-05-23 | Apple Inc. | Application integration with a digital assistant |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10741181B2 (en) | 2017-05-09 | 2020-08-11 | Apple Inc. | User interface for correcting recognition errors |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US10909171B2 (en) | 2017-05-16 | 2021-02-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US10720160B2 (en) | 2018-06-01 | 2020-07-21 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11705130B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11360739B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User activity shortcut suggestions |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
Also Published As
Publication number | Publication date |
---|---|
JP2013533996A (ja) | 2013-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011150730A1 (zh) | Method and device for mixed input of English and another script | |
TWI315048B (en) | A method of entering text into an electronic device and an electronic device | |
US8229732B2 (en) | Automatic correction of user input based on dictionary | |
CN102063195B (zh) | Method and device for mixed Chinese and English input by a user | |
KR100891358B1 (ko) | Character input system for predicting the user's next string input, and character input method therefor | |
US7974979B2 (en) | Inquiry-oriented user input apparatus and method | |
US8943437B2 (en) | Disambiguation of USSD codes in text-based applications | |
US20070226649A1 (en) | Method for predictive typing | |
US20110087961A1 (en) | Method and System for Assisting in Typing | |
US9531706B2 (en) | Icon password setting apparatus and icon password setting method using keyword of icon | |
WO2012139394A1 (zh) | Method, apparatus, and device for determining ranking results of resource candidates | |
WO2009049049A1 (en) | Method and system for adaptive transliteration | |
WO2012000335A1 (zh) | Input method and device combined with an application interface | |
WO2006115642A1 (en) | Predictive conversion of user input | |
WO2012139475A1 (zh) | Method and device for acquiring candidate character strings corresponding to an input key sequence | |
US8903858B2 (en) | User interface and system for two-stage search | |
CN101753327A (zh) | Method for quickly locating a contact in instant messaging | |
US20160247522A1 (en) | Method and system for providing access to auxiliary information | |
WO2011127788A1 (zh) | Method, device, server, and system for text input by a user | |
RU2595531C2 (ru) | Method and system for generating a word definition based on multiple sources | |
US8782067B2 (en) | Searching method, searching device and recording medium recording a computer program | |
JP2008242623A (ja) | Search candidate phrase presentation device, search candidate phrase presentation program, and search candidate phrase presentation method | |
CN102999275B (zh) | Method and device for obtaining word conversion results | |
JP2004213120A (ja) | Language input method and system | |
Adesina et al. | A query-based SMS translation in information access system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11789109 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2013512737 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC DATED 25.03.2013 |
122 | Ep: pct application non-entry in european phase |
Ref document number: 11789109 Country of ref document: EP Kind code of ref document: A1 |