WO2005038777A1 - Reconnaissance vocale intelligente à interfaces utilisateurs (Intelligent speech recognition with user interfaces) - Google Patents
- Publication number
- WO2005038777A1 (PCT/IB2004/052074)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- suggestion
- modification
- user
- automatic
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Definitions
- the present invention relates to the field of automatic transformation of speech to text and especially to automatic text modifications of text which has been automatically converted from speech.
- the automatic text modification detects text portions according to modification rules, generates intelligent modification suggestions, and interacts with a user, who has the final decision on the text modification.
- Speech recognition systems that transcribe speech to written text are known in the prior art.
- Commercial speech recognition systems are nowadays widely distributed in the medical sector, for example in hospitals, and also in legal practices. Speech recognition for transcription of spoken speech to written text saves time and reduces costs, since the transcription of a dictation no longer has to be performed by a typist.
- a dictation not only contains text to be transcribed but also commands that have to be interpreted by the speech recognition system.
- Punctuation commands such as "colon" or "full stop" should not be transcribed literally. Punctuation, formatting and highlighting commands should instead be recognized and interpreted by an intelligent transcription system. The recognized text in combination with the interpreted commands finally yields a document, which then has to be corrected by a human proofreader or editor.
- Commercial speech recognition systems such as SpeechMagic™ of Philips Electronics N.V. and the ViaVoice™ system of IBM Corporation feature text recognition as well as command interpretation. Both of these commercial speech recognition systems can be integrated into text processing software products for transcribing, editing, correcting and formatting text. Furthermore, these commercial systems provide voice-controlled interaction between a user and a personal computer: interpreted voice commands activate menu options and other customized software functions, such as browsing the Internet.
- a dictation inherently features ambiguous text portions, such as numbers that have to be interpreted either as a number or literally as a written word depending on the context of the spoken dictation.
- ambiguous text portions are easily misinterpreted by an automatic speech recognition system.
- system-based interpretations of text formatting or text highlighting commands might be erroneous.
- Such inevitable system-generated misinterpretations have to be corrected manually by a human proofreader, which reduces the efficiency of the entire speech recognition system.
- a system-supported modification or correction of potentially ambiguous or misinterpreted text portions is therefore highly desirable in order to facilitate the proofreading.
- WO 97/49043 describes a method and a system for verifying accuracy of spelling and grammatical composition of a document.
- a sentence is extracted and the words of the extracted sentence are checked for misspellings.
- an indication is displayed in a combined spelling and grammar dialogue box.
- the word as well as the entire sentence in which the spelling error occurred is displayed.
- a spell checker program module provides suggestions, which are displayed in a suggestion list box within the combined spelling and grammar dialogue box.
- a user then inputs a command by selecting one of the command buttons of the combined spelling and grammar dialogue box. In response to the user selecting one of these command buttons the method performs the appropriate steps.
- US Pat. No. 6,047,300 describes a system and method for automatically correcting a misspelled word.
- a correctly spelled alternate word is generated if a word is detected as a misspelled word.
- the misspelled word and the correctly spelled alternate word are compared according to a set of different criteria. If the results of the various criteria comparisons satisfy a selection criterion, the misspelled word is replaced by the correctly spelled alternate word.
- the user may, however, have intended the word to appear as entered. To maintain the word as entered, the automatic replacement of the misspelled word must be overridden.
- the document discloses a spelling embodiment including an exception list of exception words.
- An exception word has to be defined by the user and is not subject to replacement. The user may edit the exception list to add and to remove exception words.
- US Pat. No. 6,047,300 also discloses a spelling embodiment according to which the user may or may not receive notice when a misspelled word is replaced by a correctly spelled word. If the user receives a replacement notice, then the user is aware of the replacement and may confirm or reject the replacement.
- the above cited documents only refer to listings of misspellings or improper grammatical compositions within electronic text documents. Ambiguous text portions that may arise from a speech to text transcription cannot be identified by the above mentioned methods because the ambiguous text portions are correctly spelled. In the same way, text formatting or text highlighting commands included in a dictation and literally transcribed by an automatic speech recognition system are typically not detectable by means of the above mentioned correction and verification systems.
- any kind of number is associated with an ambiguous text portion. Since a number can be interpreted as a number (which has to be written in digits), as an enumeration, or literally as a word, the speech to text recognition system requires human expertise. The decision whether a number has to be written in digits, as an enumeration, or as a word is context dependent. Such ambiguous text portions are recognized automatically by the system and highlighted in the generated text.
- the system gives intelligent hints to the proofreader about potential misinterpretations that may have occurred in the speech to text transformation step. Not only numbers but also certain phrases or words can be subject to misinterpretation.
- the word "colon" for example may be written as "colon" (e.g. in medical reports) or as the typographical sign ":" depending on the context.
- the system features several rules to identify text portions within the recognized text that might be subject to a modification.
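As an illustration of how such rules might look (the rule names, patterns and suggestion lists below are assumptions made for this sketch, not taken from the patent), a modification rule can be modelled as a pattern that yields alternative renderings for each matched text portion:

```python
import re

# Illustrative modification rules, not taken from the patent: each rule
# pairs a pattern with a function producing alternative renderings.
NUMBER_WORDS = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5"}

MODIFICATION_RULES = [
    # A spoken number may stand for a digit or for the literal word.
    ("number", re.compile(r"\b(one|two|three|four|five)\b", re.IGNORECASE),
     lambda m: [m.group(0), NUMBER_WORDS[m.group(0).lower()]]),
    # "colon" may be the anatomical word or the typographical sign ":".
    ("colon", re.compile(r"\bcolon\b", re.IGNORECASE),
     lambda m: [m.group(0), ":"]),
]

def match_rules(text):
    """Return (rule name, character span, suggestions) for every match."""
    hits = []
    for name, pattern, suggest in MODIFICATION_RULES:
        for m in pattern.finditer(text):
            hits.append((name, m.span(), suggest(m)))
    return hits
```

Each hit marks a text portion to be highlighted, together with the suggestions offered to the proofreader.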
- the generated text is displayed on a user interface for proofreading purposes.
- potential text modifications are highlighted within the text. Highlighting can be performed by any means of accentuation, e.g. a different colour, size, font or typeface of the text to be modified.
- text portions being matched by at least one of the rules are automatically modified by the system and highlighted in the text.
- the proofreader can immediately identify those text portions that have been modified by the system.
- the system provides an undo function enabling the proofreader to correct automatically performed modifications of the text.
- the rules provide a confidence value indicating the likelihood that a matched text portion is subject to modification. A text modification is automatically performed when the confidence value is above a first predefined threshold. In this case the text modification is performed without any annotation or further suggestion.
- when the confidence value lies below the first threshold but above a second, lower threshold, the automatic modification is performed together with an indication for the user and with appropriate undo information enabling the user to cancel the performed modification.
- when the confidence value is below the second threshold, a modification is not performed automatically; instead a suggestion is indicated to the user and the system requests a decision by the user whether the matched text portion has to be modified or not.
- the threshold values for the confidence value can be adapted to the proofreader's or user's preference.
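The two-threshold behaviour described above can be sketched as a small decision function; the concrete threshold defaults and action names are illustrative assumptions, adjustable to the user's preference:

```python
def decide_action(confidence, t_silent=0.95, t_auto=0.8):
    """Map a rule's confidence value to one of three behaviours.

    Above t_silent the modification is performed without annotation;
    between the two thresholds it is performed but highlighted and
    supplied with undo information; below t_auto the system only makes
    a suggestion and asks the user. Threshold defaults are illustrative.
    """
    if confidence >= t_silent:
        return "modify_silently"
    if confidence >= t_auto:
        return "modify_with_undo"
    return "suggest_only"
```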
- the text portions matched by the rules are not automatically modified by the system. Instead, the proofreader's or user's expertise is requested in order to decide whether a modification should be performed or not.
- Text portions matched by the rules are therefore highlighted in the text.
- the highlighted text portions can then easily be discovered by the proofreader.
- the highlighted text is typically associated with one or several suggestions for the text modification.
- the user has the possibility to accept or to reject the suggestions generated by the system.
- the text modification is finally performed in response to the user's decision.
- different context based rule modules can be applied in order to detect ambiguous or problematic text portions.
- the context based rule modules are for example specified for a legal practice, or a medical report.
- the rules not only detect ambiguous text portions but also refer to some unclear commands contained in the dictation.
- commands such as "quote unquote" may be interpreted as a quoting of the next word only or as the beginning of a quoted region of unknown length.
- suggestions or hints are generated and highlighted in the text.
- the single rules may also be specified to detect inconsistencies in documents containing enumeration symbols such as "1., 2., 3., ..." or "a), b), c), ...". Since speakers are often not consistent in dictating all enumeration symbols, the rules are designed to detect missing items in a series of enumerations. In this case a hint or a suggestion is generated for the proofreader.
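A sketch of such an inconsistency rule for numeric enumerations (the pattern and the return format are assumptions; real rules would also cover symbols like "a), b), c)"):

```python
import re

def missing_enumeration_items(text):
    """Find gaps in a numeric enumeration such as "1., 2., 4.".

    Returns the missing numbers so that a hint can be generated for
    the proofreader. The pattern is an illustrative assumption.
    """
    numbers = sorted(int(n) for n in re.findall(r"\b(\d+)\.", text))
    if not numbers:
        return []
    return [n for n in range(numbers[0], numbers[-1] + 1) if n not in numbers]
```

For the dictation "1. anamnesis 2. findings 4. therapy" the sketch would flag the missing item 3, for which a hint could then be shown.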
- references to other text sections such as "the same" or "as above" may be transcribed literally, or it may be common to resolve these references and to insert the corresponding text.
- the system here provides some hint to the human proofreader if certain reference terms or phrases are detected.
- a suggestion is always generated and the appropriate text portion is always highlighted when two or more conflicting suggestions are provided for a text modification related to a distinct text portion.
- the human expertise is definitely required.
- the method provides a ranking or a list of suggestions from which the user or proofreader can make a selection.
- an automatic text modification is only performed when the automatic text modification comprises a number of editing operations which is below a predefined threshold value.
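Counting the editing operations of a candidate modification can be sketched with Python's standard difflib; the threshold default is an illustrative assumption, not a value from the patent:

```python
import difflib

def auto_modification_allowed(original, modified, max_ops=3):
    """Allow an automatic modification only if it involves few edits.

    Counts the replace/insert/delete opcodes needed to turn the
    recognized text into the proposed text; max_ops is an illustrative
    threshold value.
    """
    opcodes = difflib.SequenceMatcher(None, original, modified).get_opcodes()
    n_edits = sum(1 for tag, *_ in opcodes if tag != "equal")
    return n_edits <= max_ops
```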
- the recognized text as well as the generated suggestions according to the different correction rules are outputted to a graphical user interface.
- the graphical user interface is designed to display the recognized text as well as to display the suggestions for potential text modification operations.
- a suggestion can be displayed in many different ways.
- the suggestion can appear in the form of a suggestion menu which is positioned directly next to the highlighted text portion to which the suggestion relates.
- the different suggestions may appear in a separate window within the graphical user interface.
- a plurality of suggestions for various text portions are only displayed in response to the user's request. Otherwise the graphical user interface may be overcrowded by a plurality of suggestions or suggestion lists.
- a user's request can be issued in many different ways, e.g. by clicking a mouse button, by shifting the mouse pointer onto a highlighted text portion, by touching the appropriate position on the graphical user interface with a finger, or simply by entering a universal key shortcut on a keyboard connected to the system.
- the appearance of various suggestions for a single highlighted text portion can also be configured in many different ways.
- the single suggestions can appear in a specified order (e.g. sorted by confidence value) as entries of a menu or of a list, as well as in a completely unordered way.
- the way of appearance of the suggestions may be further specified by the user.
- a decision requested from the user can be made in different ways.
- the user can either select one of the suggestions, which is then performed by the system, or manually enter an alternative modification to be performed by the system.
- the selection of a distinct suggestion can for example be realized with the help of a mouse pointer and a mouse click or with a universal keyboard shortcut.
- the selection of a distinct suggestion triggers associated side effects.
- when the system for example detects a missing enumeration item, it suggests inserting this item.
- the system automatically gives a hint that a following letter might become subject to capitalization.
- the execution of an automatic modification according to a first rule may invoke a second potential modification according to another rule.
- the user may further decide about the triggering of such side effects locally or globally in the document.
- the triggering of side effects due to a performed modification can further be controlled by means of the previously described confidence values and associated threshold values.
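The side-effect mechanism can be sketched as a loop that re-applies the whole rule set after every performed modification until no rule fires any more; both rules below are illustrative assumptions, not rules from the patent:

```python
import re

def insert_full_stop(text):
    # Illustrative first rule: a spoken "full stop" becomes the sign ".".
    new = re.sub(r"\s*full stop\s*", ". ", text, count=1)
    return new if new != text else None

def capitalize_after_period(text):
    # Rule triggered as a side effect: capitalize the word after ". ".
    new = re.sub(r"(\. )([a-z])", lambda m: m.group(1) + m.group(2).upper(),
                 text, count=1)
    return new if new != text else None

SIDE_EFFECT_RULES = [insert_full_stop, capitalize_after_period]

def apply_with_side_effects(text, max_rounds=10):
    """Re-run all rules until none fires, modelling rule side effects."""
    for _ in range(max_rounds):
        fired = False
        for rule in SIDE_EFFECT_RULES:
            new = rule(text)
            if new is not None:
                text, fired = new, True
        if not fired:
            break
    return text
```

Here the punctuation rule fires first, which in turn triggers the capitalization rule on the following word, mirroring the cascade described above.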
- Fig. 1 is illustrative of a flow chart for performing a method of the invention
- Fig. 2 illustrates a flow chart for performing a second method of the invention
- Fig. 3 shows a block diagram of a preferred embodiment of the invention
- Fig. 4 shows a block diagram of a graphical user interface
- Fig. 5 is illustrative of a flow chart for triggering a modification rule.
- Figure 1 illustrates a flow chart for performing the method according to the invention.
- in a first step, speech is transformed to text.
- in step 102 it is checked which text regions are matched by one or several modification or inconsistency rules.
- in step 104, problematic text regions are detected by means of conflicting applicable modification rules or by a match of some inconsistency rule.
- in step 106, the identified and detected text portions are highlighted within the text.
- in step 108, the method creates several suggestions for each highlighted text portion and provides a list of suggestions.
- the created list of suggestions is displayed on the graphical user interface if requested by the user.
- in step 112, the user selects one of the suggestions or manually enters a text modification, which is then inserted in the text.
- Figure 2 illustrates a flow chart of a method of the invention in which automatic text modifications are performed. Similarly to figure 1, in step 200 the speech is transformed to text. In the next step 202 it is checked which regions of the recognized text are matched by one or several modification or inconsistency rules. According to the various rules, text portions potentially subject to modification are detected by the method in step 204. In step 206 the method automatically performs text modifications according to the rules. Since these automatic text modifications can be erroneous, they are highlighted in the text in step 208 and provided with undo information for the user. In this way the method performs an automatic text modification but also indicates to the user that an automatic, hence potentially erroneous, modification has been performed in the text.
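The automatic-modification path of figure 2 (steps 202-208) can be sketched as follows; the rule interface and the record layout of the highlighting/undo information are illustrative assumptions:

```python
def apply_automatic_modifications(recognized_text, rules):
    """Match rules, modify the text and keep highlighting/undo records.

    Each rule is a (find, replace) pair of callables: find returns the
    span of a matched text portion or None, replace maps the matched
    portion to its modification. The record layout is an assumption.
    """
    text = recognized_text
    records = []
    for find, replace in rules:
        span = find(text)
        if span is None:
            continue
        start, end = span
        original = text[start:end]
        new = replace(original)
        text = text[:start] + new + text[end:]
        # Highlight the modified span and store undo information (step 208).
        records.append({"span": (start, start + len(new)), "undo": original})
    return text, records
```

Each returned record carries enough information to highlight the modified span and to undo the automatic modification on the user's request.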
- FIG. 3 shows a block diagram of a preferred embodiment of the invention based on a speech to text transformation system 302.
- Natural speech 300 is entered into the speech to text transformation system 302.
- the speech to text transformation system 302 interacts with a user 304 and generates modified text 316.
- the speech to text transformation system 302 comprises a speech to text transformation module 306, a rule match detector module 308, a rule execution module 309 as well as a graphical user interface 310.
- the speech to text transformation system 302 further comprises context based rule modules 312, 314.
- Each of the context based rule modules 312, 314 comprises a database 318, 324, a first rule 320, 326, a second rule 322, 328 as well as additional rules not further specified here.
- Incoming speech 300 is processed in the speech to text transformation module 306 providing a recognized text.
- the rule match detector module 308 then applies one or several of the context based rule modules 312, 314 to the recognized text.
- the databases 318, 324 as well as the single rules 320, 322, 326, 328 are specified for a distinct text scope.
- the databases 318, 324 are for example specified for legal practice or medical reports.
- the rules 320, 322, 326, 328 are specified for different fields of application.
- the rule match detector module 308 detects text portions within the recognized text that might become subject to modification. Modifications of the detected text portions are performed by the rule execution module 309. According to the user's preferences, a modification may be performed directly and automatically by the rule execution module 309 or only after a user's decision. Depending on the predefined threshold and confidence values, a performed modification may or may not be indicated to the user together with undo information. The need for a user decision is indicated to the user via the graphical user interface 310. The interaction between the speech to text transformation system 302 and the user 304 is handled via the graphical user interface 310. When the system has performed an automatic text modification, the appropriate text portion is highlighted on the graphical user interface 310.
- Text portions whose modification requires a user's decision are also highlighted on the graphical user interface 310.
- when the system generates suggestions for an automatic modification according to the rules 320, 322, 326, 328, the suggestions are also displayed via the graphical user interface 310. Execution of the user's decisions as well as of the automatic text modifications in the recognized text finally gives the modified text 316, which is outputted from the speech to text transformation system 302.
- a warning icon indicating a text inconsistency is generated on the graphical user interface 310.
- Figure 4 shows a block diagram of a graphical user interface 400 of the present invention.
- the graphical user interface 400 comprises a text window 402 as well as a suggestion window 404.
- the text window 402 typically contains several highlighted text portions 406 indicating a potential modification, or a warning icon for a text inconsistency.
- the highlighting of the text can be performed in different ways, e.g. by a different color, a different font or other, preferably visual, indications.
- Various suggestions for the modification of a highlighted text portion can be displayed by means of a suggestion list 410 appearing within the text window 402 or within the suggestion window 404.
- the suggestion window 404 as well as any list of suggestions 410, 412 may be permanently present inside the graphical user interface 400 but may also appear only on a user's demand.
- Figure 5 is illustrative of a flow chart representing the execution of text modifications with respect to the triggering of rules as side effects of performed text modifications.
- step 514 checks whether the performed modification of the text triggers any other of the text modification rules. For example, when the first modification inserts a missing punctuation sign such as ".", the first word of the next sentence has to be capitalized according to another rule.
- if the performed modification triggers such another rule, the other rule is applied to the text portion in step 516.
- the method returns to step 506 and performs the same suggestion and interaction procedure for the selected rule.
- the index j is increased by 1 and the method returns to step 504.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE602004015491T DE602004015491D1 (de) | 2003-10-21 | 2004-10-13 | Intelligente spracherkennung mit benutzeroberflächen |
US10/576,329 US7483833B2 (en) | 2003-10-21 | 2004-10-13 | Intelligent speech recognition with user interfaces |
CN2004800308924A CN1871638B (zh) | 2003-10-21 | 2004-10-13 | 采用用户接口的智能语音识别 |
EP04770243A EP1678707B1 (fr) | 2003-10-21 | 2004-10-13 | Reconnaissance vocale intelligente a interfaces utilisateurs |
JP2006536231A JP4864712B2 (ja) | 2003-10-21 | 2004-10-13 | ユーザインタフェースを有するインテリジェント音声認識 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03103885.4 | 2003-10-21 | ||
EP03103885 | 2003-10-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005038777A1 true WO2005038777A1 (fr) | 2005-04-28 |
Family
ID=34443045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2004/052074 WO2005038777A1 (fr) | 2003-10-21 | 2004-10-13 | Reconnaissance vocale intelligente a interfaces utilisateurs |
Country Status (7)
Country | Link |
---|---|
US (1) | US7483833B2 (fr) |
EP (1) | EP1678707B1 (fr) |
JP (1) | JP4864712B2 (fr) |
CN (1) | CN1871638B (fr) |
AT (1) | ATE403215T1 (fr) |
DE (1) | DE602004015491D1 (fr) |
WO (1) | WO2005038777A1 (fr) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6047300A (en) * | 1997-05-15 | 2000-04-04 | Microsoft Corporation | System and method for automatically correcting a misspelled word |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3285954B2 (ja) * | 1992-09-25 | 2002-05-27 | 株式会社東芝 | 音声認識装置 |
US6064959A (en) * | 1997-03-28 | 2000-05-16 | Dragon Systems, Inc. | Error correction in speech recognition |
US6098034A (en) * | 1996-03-18 | 2000-08-01 | Expert Ease Development, Ltd. | Method for standardizing phrasing in a document |
US6085206A (en) | 1996-06-20 | 2000-07-04 | Microsoft Corporation | Method and system for verifying accuracy of spelling and grammatical composition of a document |
US5829000A (en) * | 1996-10-31 | 1998-10-27 | Microsoft Corporation | Method and system for correcting misrecognized spoken words or phrases |
JP3082746B2 (ja) * | 1998-05-11 | 2000-08-28 | NEC Corp. | Speech recognition system |
CN1119760C (zh) * | 1998-08-31 | 2003-08-27 | Sony Corp. | Natural language processing device and method |
JP2000089786A (ja) * | 1998-09-08 | 2000-03-31 | Nippon Hoso Kyokai <NHK> | Method and device for correcting speech recognition results |
US6256315B1 (en) * | 1998-10-27 | 2001-07-03 | Fujitsu Network Communications, Inc. | Network to network priority frame dequeuing |
JP2000259178A (ja) * | 1999-03-08 | 2000-09-22 | Fujitsu Ten Ltd | Speech recognition device |
JP2000293193A (ja) * | 1999-04-08 | 2000-10-20 | Canon Inc | Speech input device, speech input method, and storage medium |
US6611802B2 (en) * | 1999-06-11 | 2003-08-26 | International Business Machines Corporation | Method and system for proofreading and correcting dictated text |
US6347296B1 (en) * | 1999-06-23 | 2002-02-12 | International Business Machines Corp. | Correcting speech recognition without first presenting alternatives |
JP2001313720A (ja) * | 2000-02-24 | 2001-11-09 | Jintetsuku:Kk | Personal information confirmation method in an electronic commerce system |
US7149970B1 (en) * | 2000-06-23 | 2006-12-12 | Microsoft Corporation | Method and system for filtering and selecting from a candidate list generated by a stochastic input method |
US6856956B2 (en) * | 2000-07-20 | 2005-02-15 | Microsoft Corporation | Method and apparatus for generating and displaying N-best alternatives in a speech recognition system |
AU2002346116A1 (en) * | 2001-07-20 | 2003-03-03 | Gracenote, Inc. | Automatic identification of sound recordings |
DE10138408A1 (de) * | 2001-08-04 | 2003-02-20 | Philips Corp Intellectual Pty | Method for supporting the proofreading of a speech-recognized text with a playback-speed profile adapted to the recognition reliability |
US20040030540A1 (en) * | 2002-08-07 | 2004-02-12 | Joel Ovil | Method and apparatus for language processing |
2004
- 2004-10-13 EP EP04770243A patent/EP1678707B1/fr active Active
- 2004-10-13 AT AT04770243T patent/ATE403215T1/de not_active IP Right Cessation
- 2004-10-13 DE DE602004015491T patent/DE602004015491D1/de active Active
- 2004-10-13 US US10/576,329 patent/US7483833B2/en active Active
- 2004-10-13 WO PCT/IB2004/052074 patent/WO2005038777A1/fr active IP Right Grant
- 2004-10-13 JP JP2006536231A patent/JP4864712B2/ja not_active Expired - Fee Related
- 2004-10-13 CN CN2004800308924A patent/CN1871638B/zh active Active
Non-Patent Citations (2)
Title |
---|
"DRAGON NATURALLY SPEAKING DELUXE", COMPUTERS IN HUMAN SERVICES, NEW YORK, NY, US, vol. 15, no. 1, 1998, pages 79 - 84, XP000827992, ISSN: 0740-445X * |
MANKOFF J ET AL: "OOPS: a toolkit supporting mediation techniques for resolving ambiguity in recognition-based interfaces", COMPUTERS AND GRAPHICS, PERGAMON PRESS LTD. OXFORD, GB, vol. 24, no. 6, December 2000 (2000-12-01), pages 819 - 834, XP004225280, ISSN: 0097-8493 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100536532C (zh) * | 2005-05-23 | 2009-09-02 | Peking University | Method and system for automatically adding subtitles |
JP2014029554A (ja) * | 2006-05-25 | 2014-02-13 | Multimodal Technologies Inc | Speech recognition method |
US10938886B2 (en) | 2007-08-16 | 2021-03-02 | Ivanti, Inc. | Scripting support for data identifiers, voice recognition and speech in a telnet session |
US10873621B1 (en) | 2014-08-20 | 2020-12-22 | Ivanti, Inc. | Terminal emulation over html |
US11100278B2 (en) | 2016-07-28 | 2021-08-24 | Ivanti, Inc. | Systems and methods for presentation of a terminal application screen |
WO2018044887A1 (fr) * | 2016-08-31 | 2018-03-08 | Nuance Communications, Inc. | User interface for a dictation application employing automatic speech recognition |
US10706210B2 (en) | 2016-08-31 | 2020-07-07 | Nuance Communications, Inc. | User interface for dictation application employing automatic speech recognition |
Also Published As
Publication number | Publication date |
---|---|
US7483833B2 (en) | 2009-01-27 |
JP2007509377A (ja) | 2007-04-12 |
US20070083366A1 (en) | 2007-04-12 |
CN1871638B (zh) | 2012-01-25 |
JP4864712B2 (ja) | 2012-02-01 |
DE602004015491D1 (de) | 2008-09-11 |
CN1871638A (zh) | 2006-11-29 |
ATE403215T1 (de) | 2008-08-15 |
EP1678707A1 (fr) | 2006-07-12 |
EP1678707B1 (fr) | 2008-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7483833B2 (en) | Intelligent speech recognition with user interfaces | |
US7149970B1 (en) | Method and system for filtering and selecting from a candidate list generated by a stochastic input method | |
US4914704A (en) | Text editor for speech input | |
US5761689A (en) | Autocorrecting text typed into a word processing document | |
CN100593167C (zh) | Language input user interface |
US5940847A (en) | System and method for automatically correcting multi-word data entry errors | |
US5909667A (en) | Method and apparatus for fast voice selection of error words in dictated text | |
US6510412B1 (en) | Method and apparatus for information processing, and medium for provision of information | |
JP4829901B2 (ja) | Method and apparatus for confirming uncertain manually entered text input using speech input |
US6356866B1 (en) | Method for converting a phonetic character string into the text of an Asian language | |
US7165019B1 (en) | Language input architecture for converting one text form to another text form with modeless entry | |
US6195637B1 (en) | Marking and deferring correction of misrecognition errors | |
US6334102B1 (en) | Method of adding vocabulary to a speech recognition system | |
US4876665A (en) | Document processing system deciding apparatus provided with selection functions | |
JPH07325824A (ja) | Grammar checking system |
JP2003514304A (ja) | Language input architecture for converting one text form to another text form, tolerant of spelling, typing, and conversion errors |
JPH07325828A (ja) | Grammar checking system |
KR19990078364A (ko) | Document processing apparatus and method thereof |
US20070288240A1 (en) | User interface for text-to-phone conversion and method for correcting the same | |
US20070277118A1 (en) | Providing suggestion lists for phonetic input | |
ZA200706059B (en) | Language information system | |
JPH09325787A (ja) | Speech synthesis method, speech synthesis device, and method and device for embedding voice commands into text |
JP2570681B2 (ja) | Word processor |
JPH0916597A (ja) | Text revising device and method |
JP3387421B2 (ja) | Word input support device and word input support method |
Legal Events
Code | Title | Description |
---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 200480030892.4; Country of ref document: CN |
AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 2004770243; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2006536231; Country of ref document: JP |
WWE | Wipo information: entry into national phase | Ref document number: 2007083366; Country of ref document: US; Ref document number: 10576329; Country of ref document: US |
WWP | Wipo information: published in national office | Ref document number: 2004770243; Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 10576329; Country of ref document: US |
WWG | Wipo information: grant in national office | Ref document number: 2004770243; Country of ref document: EP |