WO2003036930A1 - Web server controls for web enabled recognition and/or audible prompting - Google Patents
- Publication number
- WO2003036930A1 (PCT/US2002/033245)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- controls
- control
- readable medium
- computer readable
- recognition
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/487—Arrangements for providing information services, e.g. recorded voice services or time announcements
- H04M3/493—Interactive information services, e.g. directory enquiries ; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
- H04M3/4938—Interactive information services, e.g. directory enquiries ; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals comprising a voice browser which renders and interprets, e.g. VoiceXML
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72445—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting Internet browser applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/74—Details of telephonic subscriber devices with voice recognition means
Definitions
- the present invention relates to access of information over a network such as the Internet. More particularly, the present invention relates to controls for a server that generates client side markup enabled with recognition and/or audible prompting.
- Small computing devices such as personal digital assistants (PDAs) and portable phones are used with ever increasing frequency by people in their day-to-day activities.
- the functionality of these devices is increasing, and in some cases, merging.
- many portable phones can now be used to access and browse the Internet as well as to store personal information such as addresses, phone numbers and the like.
- a document server processes requests from a client through a VoiceXML interpreter.
- the web server can produce VoiceXML documents in reply, which are processed by the VoiceXML interpreter and rendered audibly to the user.
- through voice commands and voice recognition, the user can navigate the web.
- the mechanisms are commonly referred to as "voice dialogs", which also must address errors when incorrect information or no information is provided by the user, for example, in response to an audible question. Since the mechanisms are not commonly based on the visual content of the web page, they cannot be generated automatically, and therefore typically require extensive development time by the application developer.
- a second approach to speech enabling web content includes writing specific voice pages in a new language.
- An advantage of this approach is that the speech-enabled page contains all the mechanisms needed for aural dialog such as repairs and navigational help.
- the application pages must then be adapted to include the application logic as found in the visual content pages.
- the application logic of the visual content pages must be rewritten in the form of the speech-enabling language.
- although this process can be automated by the use of tools creating visual and aural pages from the same specification, maintenance of the visual and speech-enabled pages is usually difficult to synchronize.
- this approach does not easily allow multimodal applications, for example where both visual and speech interaction is provided on the web page.
- Web server controls are provided for generating client side markups with recognition and/or audible prompting. Three approaches are disclosed for implementation of the controls.
- in a first approach, controls commonly related to visual rendering are extended to include attributes related to recognition and/or audible prompting.
- controls such as "label" use a library having markup information, which provides a visual prompt on a display.
- "textbox” provides an input field on a visual display.
- an additional library is provided for recognition and/or audible prompting, wherein the controls include attributes or parameters to use both libraries.
- in a second approach, the controls access the current, existing library for visual markup information, but include attributes and mechanisms to perform recognition and/or audible prompting.
- the controls use the library, but only when visual rendering is desired.
- in a third approach, a set of companion controls having attributes related to recognition and/or audible prompting are formed.
- the companion controls use a library having recognition and audible prompting markup information.
- the companion controls are selectively associated with visual controls. In this manner, application logic remains with the visual controls, wherein the companion controls provide recognized results to the visual controls.
- the companion controls follow a dialog in that controls are provided for prompting a question, obtaining an answer, confirming a result, providing a command, or making a statement.
- a question/answer control can also be formed from one or more of these controls in order to form a dialog or sub-dialog pertaining to a specific topic.
- FIG. 1 is a plan view of a first embodiment of a computing device operating environment.
- FIG. 2 is a block diagram of the computing device of FIG. 1.
- FIG. 3 is a block diagram of a general purpose computer.
- FIG. 4 is a block diagram of an architecture for a client/server system.
- FIG. 5 is a display for obtaining credit card information.
- FIG. 6 is an exemplary page of markup language executable on a client having a display and voice recognition capabilities.
- FIG. 7 is a block diagram illustrating a first approach for providing recognition and audible prompting in client side markups.
- FIG. 8 is a block diagram illustrating a second approach for providing recognition and audible prompting in client side markups.
- FIG. 9 is a block diagram illustrating a third approach for providing recognition and audible prompting in client side markups.
- FIG. 10 is a block diagram illustrating companion controls.
- FIG. 11 is a detailed block diagram illustrating companion controls.
- Referring to FIG. 1, an exemplary form of a data management device (PIM, PDA or the like) is illustrated at 30.
- the present invention can also be practiced using other computing devices discussed below, and in particular, those computing devices having limited surface areas for input buttons or the like.
- phones and/or data management devices will also benefit from the present invention.
- Such devices will have an enhanced utility compared to existing portable personal information management devices and other portable electronic devices, and the functions and compact size of such devices will more likely encourage the user to carry the device at all times. Accordingly, it is not intended that the scope of the architecture herein described be limited by the disclosure of an exemplary data management or PIM device, phone or computer herein illustrated.
- FIG. 1 An exemplary form of a data management mobile device 30 is illustrated in FIG. 1.
- the mobile device 30 includes a housing 32 and has a user interface including a display 34, which uses a contact sensitive display screen in conjunction with a stylus 33.
- the stylus 33 is used to press or contact the display 34 at designated coordinates to select a field, to selectively move a starting position of a cursor, or to otherwise provide command information such as through gestures or handwriting.
- one or more buttons 35 can be included on the device 30 for navigation.
- other input mechanisms such as rotatable wheels, rollers or the like can also be provided.
- another form of input can include a visual input such as through computer vision.
- Referring to FIG. 2, a block diagram illustrates the functional components comprising the mobile device 30.
- a central processing unit (CPU) 50 implements the software control functions.
- CPU 50 is coupled to display 34 so that text and graphic icons generated in accordance with the controlling software appear on the display 34.
- a speaker 43 can be coupled to CPU 50 typically with a digital-to-analog converter 59 to provide an audible output.
- Data that is downloaded or entered by the user into the mobile device 30 is stored in a non-volatile read/write random access memory store 54 bi-directionally coupled to the CPU 50.
- Random access memory (RAM) 54 provides volatile storage for instructions that are executed by CPU 50, and storage for temporary data, such as register values.
- ROM 58 can also be used to store the operating system software for the device that controls the basic functionality of the mobile device 30 and other operating system kernel functions (e.g., the loading of software components into RAM 54).
- RAM 54 also serves as storage for the code in a manner analogous to the function of a hard drive on a PC that is used to store application programs. It should be noted that although nonvolatile memory is used for storing the code, it alternatively can be stored in volatile memory that is not used for execution of the code.
- Wireless signals can be transmitted/received by the mobile device through a wireless transceiver 52, which is coupled to CPU 50.
- An optional communication interface 60 can also be provided for downloading data directly from a computer (e.g., desktop computer), or from a wired network, if desired. Accordingly, interface 60 can comprise various forms of communication devices, for example, an infrared link, modem, a network card, or the like.
- Mobile device 30 includes a microphone 29, an analog-to-digital (A/D) converter 37, and an optional recognition program (speech, DTMF, handwriting, gesture or computer vision) stored in store 54.
- store 54 stores speech signals, which are digitized by A/D converter 37.
- the speech recognition program can perform normalization and/or feature extraction functions on the digitized speech signals to obtain intermediate speech recognition results.
- speech data is transmitted to a remote recognition server 204 discussed below and illustrated in the architecture of FIG. 5. Recognition results are then returned to mobile device 30 for rendering (e.g. visual and/or audible), and eventual transmission to a web server 202 (FIG. 5), wherein the web server 202 and mobile device 30 operate in a client/server relationship.
- Similar processing can be used for other forms of input.
- handwriting input can be digitized with or without pre-processing on device 30.
- this form of input can be transmitted to the recognition server 204 for recognition wherein the recognition results are returned to at least one of the device 30 and/or web server 202.
- DTMF data, gesture data and visual data can be processed similarly.
- device 30 (and the other forms of clients discussed below) would include necessary hardware such as a camera for visual input.
- the present invention can be used with numerous other computing devices such as a general desktop computer.
- the present invention will allow a user with limited physical abilities to input or enter text into a computer or other computing device when other conventional input devices, such as a full alpha-numeric keyboard, are too difficult to operate.
- the invention is also operational with numerous other general purpose or special purpose computing systems, environments or configurations.
- Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, wireless or cellular telephones, regular telephones (without any screen), personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- The following is a brief description of a general purpose computer 120 illustrated in FIG. 3.
- the computer 120 is again only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computer 120 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated therein.
- the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in both local and remote computer storage media including memory storage devices. Tasks performed by the programs and modules are described below and with the aid of figures.
- processor executable instructions which can be written on any form of a computer readable medium.
- components of computer 120 may include, but are not limited to, a processing unit 140, a system memory 150, and a system bus 141 that couples various system components including the system memory to the processing unit 140.
- the system bus 141 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- bus architectures include Industry Standard Architecture (ISA) bus, Universal Serial Bus (USB) , Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
- Computer 120 typically includes a variety of computer readable mediums.
- Computer readable mediums can be any available media that can be accessed by computer 120 and includes both volatile and nonvolatile media, removable and non-removable media.
- Computer readable mediums may comprise computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 120.
- Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
- the system memory 150 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 151 and random access memory (RAM) 152.
- a basic input/output system 153 (BIOS), containing the basic routines that help to transfer information between elements within computer 120, such as during startup, is typically stored in ROM 151.
- RAM 152 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 140.
- FIG. 3 illustrates operating system 154, application programs 155, other program modules 156, and program data 157.
- the computer 120 may also include other removable/non-removable volatile/nonvolatile computer storage media.
- FIG. 3 illustrates a hard disk drive 161 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 171 that reads from or writes to a removable, nonvolatile magnetic disk 172, and an optical disk drive 175 that reads from or writes to a removable, nonvolatile optical disk 176 such as a CD ROM or other optical media.
- removable/nonremovable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
- the hard disk drive 161 is typically connected to the system bus 141 through a non-removable memory interface such as interface 160, and magnetic disk drive 171 and optical disk drive 175 are typically connected to the system bus 141 by a removable memory interface, such as interface 170.
- hard disk drive 161 is illustrated as storing operating system 164, application programs 165, other program modules 166, and program data 167. Note that these components can either be the same as or different from operating system 154, application programs 155, other program modules 156, and program data 157. Operating system 164, application programs 165, other program modules 166, and program data 167 are given different numbers here to illustrate that, at a minimum, they are different copies.
- a user may enter commands and information into the computer 120 through input devices such as a keyboard 182, a microphone 183, and a pointing device 181, such as a mouse, trackball or touch pad.
- Other input devices may include a joystick, game pad, satellite dish, scanner, or the like.
- a monitor 184 or other type of display device is also connected to the system bus 141 via an interface, such as a video interface 185.
- computers may also include other peripheral output devices such as speakers 187 and printer 186, which may be connected through an output peripheral interface 188.
- the computer 120 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 194.
- the remote computer 194 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 120.
- the logical connections depicted in FIG. 3 include a local area network (LAN) 191 and a wide area network (WAN) 193, but may also include other networks.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
- the computer 120 When used in a LAN networking environment, the computer 120 is connected to the LAN 191 through a network interface or adapter 190. When used in a WAN networking environment, the computer 120 typically includes a modem 192 or other means for establishing communications over the WAN 193, such as the Internet.
- the modem 192 which may be internal or external, may be connected to the system bus 141 via the user input interface 180, or other appropriate mechanism.
- program modules depicted relative to the computer 120, or portions thereof may be stored in the remote memory storage device.
- FIG. 3 illustrates remote application programs 195 as residing on remote computer 194. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- FIG. 4 illustrates architecture 200 for web based recognition as can be used with the present invention.
- information stored in a web server 202 can be accessed through mobile device 30 (which herein also represents other forms of computing devices having a display screen, a microphone, a camera, a touch sensitive panel, etc., as required based on the form of input), or through phone 80 wherein information is requested audibly or through tones generated by phone 80 in response to keys depressed and wherein information from web server 202 is provided only audibly back to the user.
- Architecture 200 is unified in that whether information is obtained through device 30 or phone 80 using speech recognition, a single recognition server 204 can support either mode of operation.
- architecture 200 operates using an extension of well-known markup languages (e.g. HTML, XHTML, dHTML, XML, WML, and the like).
- information stored on web server 202 can also be accessed using well-known GUI methods found in these markup languages.
- authoring on the web server 202 is easier, and legacy applications currently existing can be also easily modified to include voice or other forms of recognition.
- device 30 executes HTML+ scripts, or the like, provided by web server 202.
- when voice recognition is required, by way of example, speech data, which can be digitized audio signals or speech features wherein the audio signals have been preprocessed by device 30 as discussed above, is provided to recognition server 204 with an indication of a grammar or language model to use during speech recognition.
- the implementation of the recognition server 204 can take many forms, one of which is illustrated, but generally includes a recognizer 211.
- the results of recognition are provided back to device 30 for local rendering if desired or appropriate.
- Upon compilation of information through recognition and any graphical user interface if used, device 30 sends the information to web server 202 for further processing and receipt of further HTML scripts, if necessary.
- web server 202 and recognition server 204 are commonly connected, and separately addressable, through a network 205, herein a wide area network such as the Internet. It therefore is not necessary that any of these devices be physically located adjacent to each other.
- in particular, it is not necessary that web server 202 include recognition server 204. In this manner, authoring at web server 202 can be focused on the application to which it is intended without the authors needing to know the intricacies of recognition server 204. Rather, recognition server 204 can be independently designed and connected to the network 205, and thereby, be updated and improved without further changes required at web server 202.
- web server 202 can also include an authoring mechanism that can dynamically generate client-side markups and scripts.
- the web server 202, recognition server 204 and client 30 may be combined depending on the capabilities of the implementing machines. For instance, if the client comprises a general purpose computer, e.g. a personal computer, the client may include the recognition server 204. Likewise, if desired, the web server 202 and recognition server 204 can be incorporated into a single machine.
- Access to web server 202 through phone 80 includes connection of phone 80 to a wired or wireless telephone network 208, that in turn, connects phone 80 to a third party gateway 210. Gateway 210 connects phone 80 to a telephony voice browser 212.
- Telephony voice browser 212 includes a media server 214 that provides a telephony interface and a voice browser 216. Like device 30, telephony voice browser 212 receives HTML scripts or the like from web server 202. In one embodiment, the HTML scripts are of the form similar to HTML scripts provided to device 30. In this manner, web server 202 need not support device 30 and phone 80 separately, or even support standard GUI clients separately. Rather, a common markup language can be used. In addition, like device 30, voice recognition from audible signals transmitted by phone 80 is provided from voice browser 216 to recognition server 204, either through the network 205, or through a dedicated line 207, for example, using TCP/IP. Web server 202, recognition server 204 and telephony voice browser 212 can be embodied in any suitable computing environment such as the general purpose desktop computer illustrated in FIG. 3.
- web server 202 can include a server side plug-in authoring tool or module 209 (e.g. ASP, ASP+, ASP.Net by Microsoft Corporation, JSP, Javabeans, or the like).
- Server side plug-in module 209 can dynamically generate client-side markups and even a specific form of markup for the type of client accessing the web server 202.
- the client information can be provided to the web server 202 upon initial establishment of the client/server relationship, or the web server 202 can include modules or routines to detect the capabilities of the client device.
- server side plug-in module 209 can generate a client side markup for each of the voice recognition scenarios, i.e. voice only through phone 80 or multimodal for device 30.
- high-level dialog modules can be implemented as a server-side control stored in store 211 for use by developers in application authoring.
- the high-level dialog modules 211 would generate dynamically client-side markup and script in both voice-only and multimodal scenarios based on parameters specified by developers.
- the high-level dialog modules 211 can include parameters to generate client-side markups to fit the developers' needs.
- controls and/or objects can include one or more of the following functions: recognizer controls and/or objects for recognizer configuration, recognizer execution and/or post-processing; synthesizer controls and/or objects for synthesizer configuration and prompt playing; grammar controls and/or objects for specifying input grammar resources; and/or binding controls and/or objects for processing recognition results.
- the extensions are designed to be a lightweight markup layer, which adds the power of an audible, visual, handwriting, etc. interface to existing markup languages.
- the extensions can remain independent of: the high-level page in which they are contained, e.g. HTML; the low-level formats which the extensions use to refer to linguistic resources, e.g. the text-to-speech and grammar formats; and the individual properties of the recognition and speech-synthesis platforms used in the recognition server 204.
- the techniques, tags and server side controls described hereinafter can be similarly applied in handwriting recognition, gesture recognition and image recognition.
- the extensions are a small set of XML elements, with associated attributes and DOM object properties, events, and methods, which may be used in conjunction with a source markup document to apply a recognition and/or audible prompting interface, DTMF or call control to a source page.
- the extensions' formalities and semantics are independent of the nature of the source document, so the extensions can be used equally effectively within HTML, XHTML, cHTML, XML, WML, or with any other SGML-derived markup.
- the extensions follow the document object model wherein new functional objects or elements, which can be hierarchical, are provided. Each of the elements are discussed in detail in the Appendix, but generally the elements can include attributes, properties, methods, events and/or other "child" elements.
- the extensions may be interpreted in two different "modes" according to the capabilities of the device upon which the browser is being executed.
- in "object mode", the full capabilities are available.
- the programmatic manipulation of the extensions by an application is performed by whatever mechanisms are enabled by the browser on the device, e.g. a JScript interpreter in an XHTML browser, or a WMLScript interpreter in a WML browser. For this reason, only a small set of core properties and methods of the extensions need to be defined, and these manipulated by whatever programmatic mechanisms exist on the device or client side.
- the object mode provides eventing and scripting and can offer greater functionality to give the dialog author a much finer client-side control over speech interactions.
- a browser that supports full event and scripting is called an "uplevel browser”. This form of a browser will support all the attributes, properties, methods and events of the extensions. Uplevel browsers are commonly found on devices with greater processing capabilities.
- the extensions can also be supported in a "declarative mode".
- a browser operating in a declarative mode is called a “downlevel browser” and does not support full eventing and scripting capabilities. Rather, this form of browser will support the declarative aspects of a given extension (i.e. the core element and attributes), but not all the DOM (document object model) object properties, methods and events.
- This mode employs exclusively declarative syntax, and may further be used in conjunction with declarative multimedia synchronization and coordination mechanisms (synchronized markup language) such as SMIL (Synchronized Multimedia Integration Language) 2.0.
- a particular mode of entry should be discussed.
- use of speech recognition in conjunction with at least a display and, in a further embodiment, a pointing device as well, which enables the coordination of multiple modes of input (e.g. to indicate the fields for data entry), is particularly useful.
- the user is generally able to coordinate the actions of the pointing device with the speech input, so for example the user is in control of when to select a field and provide corresponding information relevant to the field.
- Referring to FIG. 6, an HTML markup language code is illustrated.
- the HTML code includes a body portion 270 and a script portion 272. Entry of information in each of the fields 250, 252 and 254 is controlled or executed by code portions 280, 282 and 284, respectively.
- Referring to code portion 280, on selection of field 250, for example by use of stylus 33 of device 30, the event "onClick" is initiated, which calls or executes the function "talk" in script portion 272. This action activates a grammar used for speech recognition that is associated with the type of data generally expected in field 250. This type of interaction, which involves more than one technique of input (e.g. voice and pen-click/roller), is referred to as "multimodal".
- the grammar is a syntactic grammar such as, but not limited to, a context-free grammar, an N-grammar or a hybrid grammar.
- a "grammar” includes information for performing recognition, and in a further embodiment, information corresponding to expected input to be entered, for example, in a specific field.
- a control 290 (herein identified as "reco") includes various elements, two of which are illustrated, namely a grammar element "grammar” and a "bind" element.
- the grammars can originate at web server 202 and be downloaded to the client and/or forwarded to a remote server for speech processing.
- the grammars can then be stored locally thereon in a cache.
- the grammars are provided to the recognition server 204 for use in recognition.
- the grammar element is used to specify grammars, either inline or referenced using an attribute.
- syntax of reco control 290 is provided to receive the corresponding results and associate them with the corresponding field, which can include rendering of the text therein on display 34.
- upon completion of speech recognition, with the result sent back to the client, the client deactivates the reco object and associates the recognized text with the corresponding field.
- Portions 282 and 284 operate similarly wherein unique reco objects and grammars are called for each of the fields 252 and 254 and, upon receipt, the recognized text is associated with each of the fields 252 and 254.
- the function "handle" checks the length of the card number with respect to the card type.
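- The FIG. 6 listing itself is not reproduced in this text. The following is a hypothetical sketch of the kind of page described above: input fields whose "onClick" events call the "talk" function, reco controls holding grammar and bind elements, and a "handle" function for post-processing. The field names, grammar locations and the reco object's Start() method are illustrative assumptions rather than the patent's actual code.

```html
<!-- Hypothetical sketch only: ids, grammar URLs and Start() are assumptions -->
<html>
  <body>
    <form id="credit_card_form">
      <!-- field for card type; tapping it activates the associated grammar -->
      <input name="txtCardType" type="text" onClick="talk(recoCardType)" />
      <!-- field for card number -->
      <input name="txtCardNum" type="text" onClick="talk(recoCardNum)" />

      <!-- reco control with a grammar element and a bind element -->
      <reco id="recoCardType" onReco="handle()">
        <grammar src="./grammars/cardTypes.grxml" />
        <bind value="//card_type" targetElement="txtCardType" />
      </reco>
      <reco id="recoCardNum" onReco="handle()">
        <grammar src="./grammars/cardNumbers.grxml" />
        <bind value="//card_number" targetElement="txtCardNum" />
      </reco>
    </form>

    <script>
      // script portion: start recognition for the reco object of the selected field
      function talk(recoObj) {
        recoObj.Start(); // assumed activation method
      }
      // post-processing: check the card number length against the card type
      function handle() {
        // validation logic would go here
      }
    </script>
  </body>
</html>
```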
- server side plug-in module 209 outputs client side markups when a request has been made from the client device 30.
- the server side plug-in module 209 allows the website, and thus, the application and services provided by the application to be defined or constructed.
- the instructions in the server side plug-in module 209 are made of compiled code. The code is run when a web request reaches the web server 202.
- the server side plug-in module 209 then outputs a new client side markup page that is sent to the client device 30. As is well known, this process is commonly referred to as rendering.
- the server side plug-in module 209 operates on "controls” that abstract and encapsulate the markup language, and thus, the code of the client side markup page.
- Such controls that abstract and encapsulate the markup language and operate on the web server 202 include or are equivalent to "Servlets" or "Server-side plug-ins", to name a few.
- server side plug-in modules of the prior art can generate client side markup for visual rendering and interaction with the client device 30.
- Three different approaches are provided herein for extending the server side plug-in module 209 to include recognition and audible prompting extensions such as the exemplary client side extensions discussed above.
- the current, visual, server side controls (which include parameters for visual display such as location for rendering, font, foreground color, background color, etc.) are extended to include parameters or attributes for recognition and audibly prompting for related recognition.
- the attributes generally pertain to audible prompting parameters such as whether the prompt comprises inline text for text-to-speech conversion, playing of a prerecorded audio file (e.g. a wave file), the location of the data (text for text-to-speech conversion or a prerecorded audio file) for audible rendering, etc.
- the parameters or attributes can include the location of the grammar to be used during recognition, confidence level thresholds, etc. Since the server side plug-in module 209 generates client side markup, the parameters and attributes for the controls for the server side plug-in module 209 relate to the extensions provided in the client side markup for recognition and/or audible prompting.
- the controls indicated at 300A in Fig. 7 are controls which are well-known in website application development or authoring tools such as ASP, ASP+, ASP.Net, JSP, Javabeans, or the like. Such controls are commonly formed in a library and used by controls 302 to perform a particular visual task.
- Library 300A includes methods for generating the desired client markup, event handlers, etc.
- Examples of visual controls 302 include a "Label" control that provides a selected text label on a visual display such as the label "Credit Card submission" 304 in Fig. 5.
- Another example of a higher level visual control 302 is a "Textbox", which allows data to be entered in a data field such as is indicated at 250 in Fig. 5.
- the existing visual controls 302 are also well-known.
- each of the visual controls 302 would include further parameters or attributes related to recognition or audible prompting.
- further attributes may include whether an audio data file will be rendered or text-to-speech conversion will be employed as well as the location of this data file.
- a library 300B similar to library 300A, includes further markup information for performing recognition and/or audible prompting.
- Each of the visual controls 302 is coded so as to provide this information to the controls 300B as appropriate to perform the particular task related to recognition or audible prompting.
- the "Textbox" control, which generates an input field on a visual display and allows the user of the client device 30 to enter information, would also include appropriate recognition or audible prompting parameters or attributes such as the grammar to be used for recognition. It should be noted that the recognition or audible prompting parameters are optional and need not be used if recognition or audible prompting is not otherwise desired. In general, if a control at level 302 includes parameters that pertain to visual aspects, the control will access and use the library 300A. Likewise, if the control includes parameters pertaining to recognition and/or audible prompting the control will access or use the library 300B. It should be noted that libraries 300A and 300B have been illustrated separately in order to emphasize the additional information present in library 300B and that a single library having the information of libraries 300A and 300B can be implemented.
- each of the current or prior art visual controls 302 are extended to include appropriate recognition/audible prompting attributes.
- the controls 302 can be formed in a library.
- the server side plug-in module 209 accesses the library for markup information. Execution of the controls generates a client side markup page, or a portion thereof, with the provided parameters.
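- As a hypothetical illustration of this first approach (not taken from the patent), a visual control could simply carry additional, optional attributes alongside its usual visual parameters; the attribute names PromptText, PromptAudioUrl, GrammarUrl and ConfidenceThreshold below are assumptions.

```aspx
<!-- Hypothetical sketch: existing visual controls extended with optional
     recognition/audible prompting attributes (attribute names assumed) -->
<asp:Label id="lblCardNum" Text="Card number"
    PromptText="Please say your card number"
    runat="server" />

<asp:TextBox id="txtCardNum"
    GrammarUrl="./grammars/cardNumber.grxml"
    PromptAudioUrl="./prompts/cardNumber.wav"
    ConfidenceThreshold="0.6"
    runat="server" />
```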
- new visual, recognition/audible prompting controls 304 are provided such that the controls 304 are a subclass relative to visual controls 302, wherein recognition/audible prompting functionality or markup information is provided at controls 304.
- a new set of controls 304 are provided for recognition/audible prompting and include appropriate parameters or attributes to perform the desired recognition or an audible prompting related to a recognition task on the client device 30.
- the controls 304 use the existing visual controls 302 to the extent that visual information is rendered or obtained through a display. For instance, a control "SpeechLabel" at level 304 uses the "Label" control at level 302 to provide an audible rendering and/or visual text rendering.
- a "SpeechTextbox" control would associate a grammar and related recognition resources and processing with an input field.
- the attributes for controls 304 include where the grammar is located for recognition, the inline text for text-to-speech conversion, or the location of a prerecorded audio data file that will be rendered directly or a text file through text-to-speech conversion.
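- A hypothetical sketch of the second approach follows; the "SpeechLabel" and "SpeechTextbox" names come from the description above, while the namespace prefix and attribute names are assumptions.

```aspx
<!-- Hypothetical sketch: subclassed controls that reuse the visual Label/Textbox
     controls and add recognition/audible prompting (prefix and attributes assumed) -->
<speech:SpeechLabel id="lblTitle"
    Text="Credit Card submission"
    PromptText="Welcome to credit card submission"
    runat="server" />

<speech:SpeechTextbox id="txtCardType"
    GrammarUrl="./grammars/cardType.grxml"
    PromptAudioUrl="./prompts/whichCardType.wav"
    runat="server" />
```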
- the second approach is advantageous in that interactions of the recognition controls 304 with the visual controls 302 are through parameters or attributes, and thus, changes in the visual controls 302 may not require any changes in the recognition controls 304 provided the parameters or attributes interfacing between the controls 304 and 302 are still appropriate.
- a corresponding recognition/audible prompting control at level 304 may also have to be written.
- a third approach is illustrated in Fig. 9.
- controls 306 of the third approach are separate from the visual controls 302, but are associated selectively therewith as discussed below. In this manner, the controls 306 do not directly build upon the visual controls 302, but rather provide recognition/audible prompting enablement without having to rewrite the visual controls 302.
- the controls 306, like the controls 302, use a library 300.
- library 300 includes both visual and recognition/audible prompting markup information and as such is a combination of libraries 300A and 300B of Fig. 7.
- the visual controls 302 do not need to be changed in content.
- the controls 306 can form a single module which is consistent and does not need to change according to the nature of the speech-enabled control 302.
- the process of speech enablement, that is, the explicit association of the controls 306 with the visual controls 302, is fully under the developer's control at design time, since it is an explicit and selective process.
- This also makes it possible for the markup language of the visual controls to receive input values from multiple sources such as through recognition provided by the markup language generated by controls 306, or through a conventional input device such as a keyboard.
- the controls 306 can be added to an existing application authoring page of a visual authoring page of the server side plug-in module 209.
- the controls 306 provide a new modality of interaction (i.e. recognition and/or audible prompting) for the user of the client device 30, while reusing the visual controls' application logic and visual input/output capabilities.
- controls 306 can be associated with the visual controls 302 whereat the application logic can be coded.
- controls 306 may be hereinafter referred to as "companion controls 306" and the visual controls 302 be referred to as "primary controls 302".
- these references are provided for purposes of distinguishing controls 302 and 306 and are not intended to be limiting.
- the companion controls 306 could be used to develop or author a website that does not include visual renderings such as a voice-only website. In such a case, certain application logic could be embodied in the companion control logic.
- FIG. 10 An exemplary set of companion controls 306 are further illustrated in Fig. 10.
- the set of companion controls 306 can be grouped as output controls 308 and input controls 310.
- Output controls 308 provide "prompting" client side markups, which typically involves the playing of a prerecorded audio file, or text for text-to-speech conversion, the data included in the markup directly or referenced via a URL.
- Although a single output control can be defined with parameters to handle all audible prompting, and thus should be considered as a further aspect of the present invention, in the exemplary embodiment the forms or types of audible prompting in a human dialog are formed as separate controls.
- the output controls 308 can include a "Question” control 308A, a "Confirmation” control 308B and a “Statement” control 308C, which will be discussed in detail below.
- the input controls 310 can also form or follow human dialog and include an "Answer" control 310A and a "Command" control 310B. The input controls 310 are discussed below, but generally the input controls 310 associate a grammar with expected or possible input from the user of the client device 30.
- At least one of the output controls 308 or one of the input controls 310 is associated with a primary or visual control 302.
- the output controls 308 and input controls 310 are arranged or organized under a "Question/Answer" (hereinafter also "QA") control 320.
- QA control 320 is executed on the web server 202, which means it is defined on the application development web page held on the web server using the server-side markup formalism (ASP, JSP or the like), but is output as a different form of markup to the client device 30.
- QA control 320 could comprise a single question control 308A and an answer control 310A.
- the question control 308A contains one or more prompt objects or controls 322, while the answer control 310A can define a grammar through grammar object or control 324 for recognition of the input data and related processing on that input.
- Line 326 represents the association of the QA control 320 with the corresponding primary control 302, if used.
- an audible prompt may not be necessary.
- a corresponding QA control 320 may or may not have a corresponding prompt such as an audio playback or a text-to-speech conversion, but would have a grammar corresponding to the expected value for recognition, and event handlers 328 to process the input, or process other recognizer events such as no speech detected, speech not recognized, or events fired on timeouts (as illustrated in "Eventing" below).
- the QA control through the output controls 308 and input controls 310 and additional logic can perform one or more of the following: provide output audible prompting, collect input data, perform confidence validation of the input result, allow additional types of input such as "help" commands, or commands that allow the user of the client device to navigate to other selected areas of the website, allow confirmation of input data and control of dialog flow at the website, to name a few.
- the QA control 320 contains all the controls related to a specific topic. In this manner, a dialog is created through use of the controls with respect to the topic in order to inform, to obtain information, to confirm validity, or to repair a dialog or change the topic of conversation.
- the application developer can define the visual layout of the application using the visual controls 302.
- the application developer can then define the spoken interface of the application using companion controls 306 (embodied as QA control 320, or output controls 308 and input control 310).
- each of the companion controls 306 are then linked or otherwise associated with the corresponding primary or visual control 302 to provide recognition and audible prompting.
- the application developer can define or encode the application by switching between visual controls 302 and companion controls 306, forming the links therebetween, until the application is completely defined or encoded.
- the question controls 308A and answer controls 310A in a QA control 320 hold the prompt and grammar resources relevant to the primary control 302, and related binding (associating recognition results with input fields of the client-side markup page) and processing logic.
- the presence, or not, of question controls 308A and answer controls 310A determines whether speech output or recognition input is enabled on activation.
- Command controls 310B and user initiative answers are activated by specification of the Scope property on the answer controls 310A and command controls 310B.
- a QA control 320 will typically hold one question control or object 308A and one answer control or object 310A.
- command controls 310B may also be specified, e.g. Help.
- a typical 'regular' QA control for voice-only dialog is as follows:
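- The original listing is not reproduced in this text; the sketch below is a hypothetical reconstruction that uses only the parameters discussed in the following paragraphs (id, ControlsToSpeechEnable, runat) and the question/answer structure described above. The element spellings, the prompt text and the grammar location are assumptions.

```aspx
<!-- Hypothetical sketch of a 'regular' voice-only QA control (spellings assumed) -->
<Speech:QA id="QA_DepCity"
    ControlsToSpeechEnable="txtDepCity"
    runat="server">
  <Question id="Q_DepCity">
    <Prompt InlineText="Which city are you departing from?" />
  </Question>
  <Answer id="AnsDepCity">
    <Grammar src="./grammars/cities.grxml" />
    <Bind Value="//dep_city" TargetElement="txtDepCity" />
  </Answer>
</Speech:QA>
```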
- the QA control can be identified by its "id”, while the association of the QA control with the desired primary or visual control is obtained through the parameter "ControlsToSpeechEnable", which identifies one or more primary controls by their respective identifiers.
- other well-known techniques can be used to form the association. For instance, direct, implicit associations are available through the first and second approaches described above, or separate tables can be created and used to maintain the associations.
- the parameter "runat" instructs the web server that this code should be executed at the web server 202 to generate the correct markup.
- a QA control might also hold only a statement control 308C, in which case it is a prompt-only control without active grammars (e.g. for a welcome prompt).
- a QA control might hold only an answer control 310A, in which case it may be a multimodal control, whose answer control 310A activates its grammars directly as the result of an event from the GUI, or a scoped mechanism (discussed below) for user initiative.
- a QA control 320 may also hold multiple output controls 308 and input controls 310 such as multiple question controls 308A and multiple answer controls 310A. This allows an author to describe interactional flow about the same entity within the same QA control. This is particularly useful for more complex voice-only dialogs. So a mini-dialog which may involve different kinds of question and answer (e.g. asking, confirming, giving help, etc.), can be specified within the wrapper of the QA control associated with the visual control which represents the dialog entity.
- a complex QA control is illustrated in Fig. 11. The foregoing represent the main features of the QA control. Each feature is described from a functional perspective below.
- the answer control 310A abstracts the notion of grammars, binding and other recognition processing into a single object or control. Answer controls 310A can be used to specify a set of possible grammars relevant to a question, along with binding declarations and relevant scripts. Answer controls for multimodal applications such as "Tap-and-Talk" are activated and deactivated by GUI browser events.
- the following example illustrates an answer control 310A used in a multimodal application to select a departure city on the "mouseDown" event of the textbox "txtDepCity", and write its value into the primary textbox control:
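- The listing itself is not reproduced here; a hypothetical sketch of such an answer control follows. The textbox name "txtDepCity" and the mouseDown activation come from the description above, while the StartEvent and Bind attribute names and the grammar location are assumptions.

```aspx
<!-- Hypothetical sketch: answer control activated by a GUI event (attribute names assumed) -->
<Answer id="AnsDepCity"
    StartEvent="txtDepCity.onMouseDown"
    runat="server">
  <Grammar src="./grammars/departureCities.grxml" />
  <!-- write the recognized value into the primary textbox control -->
  <Bind Value="//dep_city" TargetElement="txtDepCity" />
</Answer>
```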
- Typical answer controls 310A in voice-only applications are activated directly by question controls 308A as described below.
- the answer control further includes a mechanism to associate a received result with the primary controls.
- binding places the values in the primary controls; however, in another embodiment the association mechanism may allow the primary control to look at or otherwise access the recognized results.
- Question controls 308A abstract the notion of the prompt tags (Appendix A) into an object which contains a selection of possible prompts and the answer controls 310A which are considered responses to the question. Each question control 308A is able to specify which answer control 310A it activates on its execution. This permits appropriate response grammars to be bundled into answer controls 310A, which reflect relevant question controls 308A.
- the following question control 308A might be used in a voice-only application to ask for a departure city:
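- A hypothetical sketch of such a question control follows; the Answers attribute naming which answer control it activates, the prompt syntax and the prompt wording are assumptions.

```aspx
<!-- Hypothetical sketch: a question control that activates a specific answer control -->
<Question id="QDepCity"
    Answers="AnsDepCity"
    runat="server">
  <Prompt InlineText="Which city will you be departing from?" />
</Question>
```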
- the following example illustrates how to determine whether or not to activate a QA control based upon information known to the application.
- the example is a portion of a survey application.
- the survey is gathering information from employees regarding the mode of transportation they use to get to work.
- the portion of the survey first asks whether or not the user rides the bus to work. If the answer is affirmative, a follow-up QA control with parameters such as the following is activated (see the sketch below):
- ControlsToSpeechEnable="lstDaysRodeBus"
- ClientTest="RideBusCheck"
- runat="server"
- Question id="Q_DaysRodeBus"
- the QA control "QA_DaysRodeBus" is executed based on a boolean parameter "ClientTest", which in this example is set based on the function RideBusCheck(). If the function returns a false condition, the QA control is not activated, whereas if a true condition is returned the QA control is activated.
- the use of an activation mechanism allows increased flexibility and improved dialog flow in the client side markup page produced. As indicated in Appendix B many of the controls and objects include an activation mechanism.
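- A hypothetical sketch of the survey example follows, combining the parameters listed above with an assumed client-side RideBusCheck() function. The element spellings, the prompt wording, the grammar location and the chkRideBus control are assumptions.

```aspx
<!-- Hypothetical sketch: QA control activated only when ClientTest returns true -->
<Speech:QA id="QA_DaysRodeBus"
    ControlsToSpeechEnable="lstDaysRodeBus"
    ClientTest="RideBusCheck"
    runat="server">
  <Question id="Q_DaysRodeBus">
    <Prompt InlineText="On which days did you ride the bus to work?" />
  </Question>
  <Answer id="A_DaysRodeBus">
    <Grammar src="./grammars/weekdays.grxml" />
    <Bind Value="//days" TargetElement="lstDaysRodeBus" />
  </Answer>
</Speech:QA>

<script>
  // Assumed client-side activation test: only ask about days if the user rides the bus
  function RideBusCheck() {
    return document.getElementById("chkRideBus").checked; // chkRideBus is hypothetical
  }
</script>
```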
- Command controls 310B handle user utterances common in voice-only dialogs which typically have little semantic import in terms of the question asked, but rather seek assistance or effect navigation, e.g. help, cancel, repeat, etc.
- the Command control 310B within a QA control 306 can be used to specify not only the grammar and associated processing on recognition (rather like an answer control 310A without binding of the result to an input field), but also a 'scope' of context and a type. This allows for the authoring of both global and context-sensitive behavior on the client side markup.
- controls 306 can be organized in a tree structure similar to that used in visual controls 302. Since each of the controls 306 is also associated with selected visual controls 302, the organization of the controls 306 can be related to the structure of the controls 302.
- the QA controls 306 may be used to speech-enable both atomic controls (textbox, label, etc.) and container controls (form, panel, etc.). This provides a way of scoping behaviour and of obtaining modularity of subdialog controls. For example, the scope will allow the user of the client device to navigate to other portions of the client side markup page without completing a dialog.
- "Scope” is determined as a node of the primary controls tree.
- the following is' an example "help” command, scoped at the level of the "Pnll” container control, which contains two textboxes .
- help grammar
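- A sketch of such a command control is given below. The element naming, the grammar source and the use of OnClientReco to wire up the GlobalGiveHelp subroutine are exemplary assumptions; the Scope and Type properties follow the command control description given later in this specification.

  <QA id="QA_HelpPnl1" runat="server">
    <Command id="Cmd_HelpPnl1"
        Scope="Pnl1"
        Type="Help"
        OnClientReco="GlobalGiveHelp">
      <grammar src="./help.xml" />
    </Command>
  </QA>

  <script>
    function GlobalGiveHelp() {
      // Play or queue a generic help prompt for the controls inside Pnl1.
    }
  </script>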
- the GlobalGiveHelp subroutine will execute every time "help" is recognized.
- the same typed command can be scoped to the required level of context:
- the QA control 320 can also include a method for simplifying the authoring of common confirmation subdialogs.
- with a confirm control 308, a user response to 'which city?' which matches the AnsDepCity grammar but whose confidence level does not exceed the confirmThreshold value will trigger the confirm control 308. More flexible methods of confirmation available to the author include mechanisms using multiple question controls and multiple answer controls.
- additional input controls related to the confirmation control include an accept control, a deny control and a correct control.
- Each of these controls could be activated (in a manner similar to the other controls) by the corresponding confirmation control and include grammars to accept, deny or correct results, respectively. For instance, users are likely to deny by saying "no", to accept by saying "yes" or "yes + current value" (e.g., "Do you want to go to Seattle?" "Yes, to Seattle"), and to correct by saying "no + new value" (e.g., "Do you want to go to Seattle?" "No, Pittsburgh").
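- One possible shape for such a confirmation subdialog is sketched below. The element naming and grammar sources are exemplary assumptions; the specification does not fix a markup syntax for the accept, deny and correct controls, so they are shown here simply as illustrative child controls activated by the confirm control.

  <QA id="QA_DepCity" ControlsToSpeechEnable="txtDepCity" runat="server">
    <Confirm id="C_DepCity">
      <prompt>Did you say <value targetElement="txtDepCity" targetAttribute="value" />?</prompt>
      <Accept id="Acc_DepCity"> <grammar src="./yes.xml" /> </Accept>
      <Deny id="Den_DepCity"> <grammar src="./no.xml" /> </Deny>
      <Correct id="Cor_DepCity"> <grammar src="./city.xml" /> </Correct>
    </Confirm>
  </QA>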
- the statement control allows the application developer to provide an output upon execution of the client side markup when a response is not required from the user of the client device 30.
- An example could be a "Welcome" prompt played at the beginning of execution of a client side markup page.
- An attribute can be provided in the statement control to distinguish different types of information to be provided to the user of the client device. For instance, attributes can be provided to denote a warning message or a help message. These types could have different built-in properties such as different voices. If desired, different forms of statement controls can be provided, i.e. a help control, warning control, etc. Whether provided as separate controls or attributes of the statement control, the different types of statements have different roles in the dialog created, but share the fundamental role of providing information to the user of the client device without expecting an answer back.
Eventing
- Event handlers as indicated in FIG. 11 are provided in the QA control 320, the output controls 308 and the input controls 310 for actions/inactions of the user of the client device 30 and for operation of the recognition server 204, to name a few; other events are specified in Appendix B. For instance, mumbling, where the speech recognizer detects that the user has spoken but is unable to recognize the words, and silence, where speech is not detected at all, are specified in the QA control 320. These events reference client-side script functions defined by the author. In the multimodal application specified earlier, a simple mumble handler that puts an error message in the text box could be written as follows:
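- A sketch of such a handler is given below. The function and element names are exemplary assumptions; the handler would be referenced from the mumble (NoReco) event property of the relevant control, for example OnClientNoReco as described later in this specification.

  <script>
    function OnMumble() {
      // Place a short error message in the speech-enabled textbox.
      document.getElementById("txtDepCity").value = "...recognition error...";
    }
  </script>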
- a client-side script or module (herein referred to as "RunSpeech") is provided to the client device.
- the purpose of this script is to execute dialog flow via logic, which is specified in the script when executed on the client device 30, i.e. when the markup pertaining to the controls is activated for execution on the client due to values contained therein.
- the script allows multiple dialog turns between page requests, and therefore, is particularly helpful for control of voice-only dialogs such as through telephony browser 216.
- the client-side script RunSpeech is executed in a loop manner on the client device 30 until a completed form is submitted, or a new page is otherwise requested from the client device 30.
- the controls can activate each other (e.g. question control activating a selected answer control) due to values when executed on the client.
- the controls can "activate" each other in order to generate appropriate markup, in which case server-side processing may be implemented.
- the algorithm generates a dialog turn by outputting speech and recognizing user input.
- the overall logic of the algorithm is as follows for a voice-only scenario:
- 1. Find the next active output companion control;
- 2. If it is a statement, play the statement and go back to 1; if it is a question or a confirm, go to 3;
- the algorithm is relatively simple because, as noted above, controls contain built-in information about when they can be activated.
- the algorithm also makes use of the role of the controls in the dialogue. For example, statements are played immediately, while questions and confirmations are only played once the expected answers have been collected.
- implicit confirmation can be provided whereby the system confirms a piece of information and asks a question at the same time. For example, the system could confirm the arrival city of a flight and ask for the travel date in one utterance: "When do you want to go to Seattle?" (i.e. asking 'when' and implicitly confirming 'destination: Seattle'). If the user gives a date, then the city is considered implicitly accepted since, if the city was wrong, users would have immediately challenged it. In this scenario, it becomes clear that the knowledge of what a user is trying to achieve is vitally important: are they answering the question, or are they correcting the value, or are they asking for help? By using the role of the user input in the dialogue the system can know when to implicitly accept a value.
- a dialog is created due to the role of the control in the dialog and the relationship with other controls, wherein the algorithm executes the controls and thus manages the dialog.
- Each control contains information based on its type which is used by the execution algorithm to select (i.e. make active for execution) a given control according to whether or not it serves a useful purpose at that point in the dialog on the client. For example, confirmation controls are only active when there is a value to confirm and the system does not have sufficient confidence in that value to proceed. In a further implementation, most of these built-in pieces of information can be overridden or otherwise adapted by application developers .
- the following table summarizes the controls, their corresponding role in the dialog and the relationship with other controls.
- Turn 1 is a statement on the part of the System. Since a statement control activates no answer controls in response, the system does not expect input. The system goes on to activate a question control at turn 2. This in turn activates a set of possible answer controls, including one which holds a grammar containing the cities available through the service, including "San Francisco", "Seattle", etc., which permits the user to provide such a city in turn 3. The user's turn 3 is misrecognized by the system. Although the system believes it has a value from an answer control for the city, its confidence in that value is low (rightly so, since it has recognized incorrectly).
- This low confidence value in a just-received answer control is sufficient information for RunSpeech to trigger a confirmation control on the part of the system, as generated at turn 4.
- the confirmation control in turn activates a deny control, a correct control and an accept control and makes their respective grammars available to recognize the user's next turn.
- User turns 5, 9 and 11 illustrate example responses for these controls. Turn 5 of the user simply denies the value ("no"). This has the effect of removing the value from the system, so the next action of RunSpeech is to ask the question again to re-obtain the value (turn 6).
- User turn 9 is a correct control, which has again been activated as a possible response to the confirmation control.
- a correct control not only denies the value undergoing confirmation, it also provides a new value. So user turn 9 is recognized by the system as a correct control with a new value which, correctly this time, is recognized as "San Francisco".
- the client-side script RunSpeech examines the values inside each of the primary controls and an attribute of the QA control, and any selection test of the QA controls on the current page, and selects a single QA control for execution. For example, within the selected QA control, a single question and its corresponding prompt are selected for output, and then a grammar is activated related to typical answers to the corresponding question. Additional grammars may also be activated, in parallel, allowing other commands (or other answers), which are indicated as being allowable. Assuming recognition has been made and any further processing on the input data is complete, the client-side script RunSpeech will begin again to ascertain which QA control should be executed next. An exemplary implementation and algorithm of RunSpeech is provided in Appendix B.
- controls and the RunSpeech algorithm or module is not limited to the client/server application described above, but rather can be adapted for use with other application abstractions.
- an application such as VoiceXML, which runs only on the client device 30, could conceivably include further elements or controls such as the question and answer controls provided above as part of the VoiceXML browser, operating in the same manner.
- the mechanisms of the RunSpeech algorithm described above could be executed by default by the browser without the necessity for extra script.
- other platforms such as finite state machines can be adapted to include the controls and RunSpeech algorithm or module herein described.
- the companion controls 306 are associated with the primary controls 302 (the existing controls on the page). As such the companion controls 306 can re-use the business logic and presentation capabilities of the primary controls 302. This is done in two ways: storing values in the primary controls 302 and notifying the primary controls 302 of changes.
- the companion controls 306 synchronize or associate their values with the primary controls 302 via the mechanism called binding. Binding puts values retrieved from the recognizer into the primary controls 302, for example putting text into a textbox, herein exemplified with the answer control. Since primary controls 302 are responsible for visual presentation, this provides visual feedback to the users in multimodal scenarios.
- the companion controls 306 also offer a mechanism to notify the primary controls 302 that they have received an input via the recognizer. This allows the primary controls 302 to take actions, such as invoking the business logic. (Since the notification amounts to a commitment of the companion controls 306 to the values which they write into the primary controls 302, the implementation provides a mechanism to control this notification with a fine degree of control. This control is provided by the RejectThreshold and ConfirmThreshold properties on the answer control, which specify numerical acoustic confidence values below which the system should respectively reject or attempt to confirm a value.)
- the following tags are a set of markup elements that allows a document to use speech as an input or output medium.
- the tags are designed to be self-contained XML that can be embedded into any SGML-derived markup languages such as HTML, XHTML, cHTML, SMIL, WML and the like.
- the tags used herein are similar to SAPI 5.0, which is known and available from Microsoft Corporation of Redmond, Washington.
- the tags, elements, events, attributes, properties, return values, etc. are merely exemplary and should not be considered limiting. Although exemplified herein for speech and DTMF recognition, similar tags can be provided for other forms of recognition.
- the Reco element is used to specify possible user inputs and a means for dealing with the input results.
- its main elements are <grammar> and <bind>, and it contains resources for configuring recognizer properties.
- Reco elements are activated programmatically in uplevel browsers via Start and Stop methods, or in SMIL-enabled browsers by using SMIL commands. They are considered active declaratively in downlevel browsers (i.e. non script-supporting browsers) by their presence on the page. In order to permit the activation of multiple grammars in parallel, multiple Reco elements may be considered active simultaneously.
- Recos may also take a particular mode - 'automatic', 'single' or 'multiple' - to distinguish the kind of recognition scenarios which they enable and the behaviour of the recognition platform.
- the Reco element contains one or more grammars and optionally a set of bind elements which inspect the results of recognition and copy the relevant portions to values in the containing page.
- Reco supports the programmatic activation and deactivation of individual grammar rules. Note also that all top-level rules in a grammar are active by default for a recognition context.
2.1.1 <grammar> element
- the grammar element is used to specify grammars, either inline or referenced using the src attribute. At least one grammar (either inline or referenced) is typically specified. Inline grammars can be text-based grammar formats, while referenced grammars can be text-based or binary type. Multiple grammar elements may be specified. If more than one grammar element is specified, the rules within grammars are added as extra rules within the same grammar. Any rules with the same name will be overwritten.
- langID Optional. String indicating which language the speech engine should use.
- langID follows a precedence order from the lowest scope - remote grammar file (i.e. the language id is specified within the grammar file), followed by the grammar element, followed by the reco element.
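- By way of illustration, a reco element holding a referenced grammar might look as follows; the element id, grammar source and langID value are exemplary assumptions.

  <reco id="recoDepCity">
    <grammar src="./cities.xml" langID="en-US" />
  </reco>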
- the bind element is used to bind values from the recognition results into the page.
- the recognition results consumed by the bind element can be an XML document containing a semantic markup language (SML) for specifying recognition results. Its contents include semantic values, actual words spoken, and confidence scores. SML could also include alternate recognition choices (as in an N-best recognition result).
- targetElement The element to which the value content from the SML will be assigned (as in W3C SMIL 2.0) .
- targetAttribute Optional.
- the attribute of the target element to which the value content from the SML will be assigned (as with the attributeName attribute in SMIL 2.0). If unspecified, defaults to "value".
- test Optional.
- An XML Pattern (as in the W3C XML specification) string indicating the condition under which the bind will be carried out.
- This binding may be conditional, as in the following example, where a test is made on the confidence attribute of the dest_city result as a pre-condition to the bind operation:
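- A sketch of such a conditional bind is given below. The grammar source, the SML paths and the confidence threshold are exemplary assumptions; the test, value and targetElement attributes follow the bind element description above.

  <reco id="recoTravel">
    <grammar src="./flight.xml" />
    <bind test="/sml/dest_city[@confidence &gt; 40]"
        value="/sml/dest_city"
        targetElement="txtBoxDest" />
  </reco>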
- the bind element is a simple declarative means of processing recognition results on downlevel or uplevel browsers.
- the reco DOM object supported by uplevel browsers implements the onReco event handler to permit programmatic script analysis and post-processing of the recognition return.
- Reco: The following attributes of Reco are used to configure the speech recognizer for a dialog turn.
- initialTimeout Optional. The time in milliseconds between start of recognition and the detection of speech. This value is passed to the recognition platform, and if exceeded, an onSilence event will be provided from the recognition platform (see 2.4.2). If not specified, the speech platform will use a default value.
- babbleTimeout Optional. The period of time in milliseconds in which the recognizer must return a result after detection of speech. For recos in automatic and single mode, this applies to the period between speech detection and the stop call. For recos in 'multiple' mode, this timeout applies to the period between speech detection and each recognition return - i.e. the period is restarted after each return of results or other event. If exceeded, different events are thrown according to whether an error has occurred or not. If the recognizer is still processing audio - e.g. in the case of an exceptionally long utterance - the onNoReco event is thrown, with status code 13 (see 2.4.4). If the timeout is exceeded for any other reason, however, a recognizer error is more likely, and the onTimeout event is thrown. If not specified, the speech platform will default to an internal value.
- maxTimeout Optional. The period of time in milliseconds between recognition start and results returned to the browser. If exceeded, the onTimeout event is thrown by the browser - this caters for network or recognizer failure in distributed environments. For recos in 'multiple' mode, as with babbleTimeout, the period is restarted after the return of each recognition or other event. Note that the maxTimeout attribute should be greater than or equal to the sum of initialTimeout and babbleTimeout. If not specified, the value will be a browser default.
- endSilence Optional. For Recos in automatic mode, the period of silence in milliseconds after the end of an utterance which must be free of speech after which the recognition results are returned. Ignored for recos of modes other than automatic. If unspecified, defaults to platform internal value.
- reject Optional. The recognition rejection threshold, below which the platform will throw the 'no reco' event. If not specified, the speech platform will use a default value. Confidence scores range between 0 and 100 (integer). Reject values lie in between.
- server Optional. URI of speech platform (for use when the tag interpreter and recognition platform are not co-located).
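- The following exemplary reco element illustrates these attributes together; the element id, grammar source and the specific timeout and threshold values are assumptions chosen only for illustration (note that maxTimeout is at least initialTimeout plus babbleTimeout).

  <reco id="recoDepCity"
      mode="automatic"
      initialTimeout="3000"
      babbleTimeout="10000"
      maxTimeout="15000"
      endSilence="800"
      reject="40"
      server="//speechserver/reco">
    <grammar src="./cities.xml" />
  </reco>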
- the following properties contain the results returned by the recognition process (these are supported by uplevel browsers) .
- Reco activation and grammar activation may be controlled using the following methods in the Reco's DOM object. With these methods, uplevel browsers can start and stop Reco objects, cancel recognitions in progress, and activate and deactivate individual grammar top-level rules (uplevel browsers only).
- the Start method starts the recognition process, using as active grammars all the top-level rules for the recognition context which have not been explicitly deactivated.
- the method sets a non-zero status code and fires an onNoReco event when it fails.
- the Stop method is a call to end the recognition process.
- the Reco object stops recording audio, and the recognizer returns recognition results on the audio received up to the point where recording was stopped. All the recognition resources used by Reco are released, and its grammars deactivated. (Note that this method need not be used explicitly for typical recognitions in automatic mode, since the recognizer itself will stop the reco object on endpoint detection after recognizing a complete sentence.) If the Reco has not been started, the call has no effect.
- the Cancel method stops the audio feed to the recognizer, deactivates the grammar, and releases the recognizer and discards any recognition results.
- the browser will disregard a recognition result for canceled recognition. If the recognizer has not been started, the call has no effect.
- the Activate method activates a top-level rule in the context free grammar (CFG). Activation must be called before recognition begins, since it will have no effect during a 'Started' recognition process. Note that all the grammar top-level rules for the recognition context which have not been explicitly deactivated are already treated as active.
- the Deactivate method deactivates a top-level rule in the grammar. If the rule does not exist, the method has no effect.
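- Illustrative use of these methods from client-side script is sketched below; the element id, rule name and function names are assumptions.

  <script>
    function beginDepCityTurn() {
      recoDepCity.Activate("DepCityRule");  // enable one top-level rule
      recoDepCity.Start();                  // begin recognition
    }
    function abandonDepCityTurn() {
      recoDepCity.Cancel();                 // stop the audio feed and discard any result
    }
  </script>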
- the Reco DOM object supports the following events, whose handlers may be specified as attributes of the reco element .
- This event gets fired when the recognizer has a recognition result available for the browser. For recos in automatic mode, this event stops the recognition process automatically and clears resources (see 2.3.2). OnReco is typically used for programmatic analysis of the recognition result and processing of the result into the page.
- the handler can query the event object for data (see the use of the event object in the example below) .
- the following XHTML fragment uses onReco to call a script to parse the recognition outcome and assign the values to the proper fields.
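- A sketch of such a fragment is given below. The field names, grammar source, handler name and the recoResult/SML paths are exemplary assumptions; the intent is only to show an onReco handler copying values from the returned SML into form fields.

  <input name="txtBoxOrigin" type="text" />
  <input name="txtBoxDest" type="text" />

  <reco id="recoTravel" onReco="processTravel()">
    <grammar src="./flight.xml" />
  </reco>

  <script>
    function processTravel() {
      // recoResult is assumed here to expose the SML return as an XML DOM.
      var sml = recoTravel.recoResult;
      txtBoxOrigin.value = sml.selectSingleNode("//origin_city").text;
      txtBoxDest.value = sml.selectSingleNode("//dest_city").text;
    }
  </script>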
- onSilence handles the event of no speech detected by the recognition platform before the duration of time specified in the initialTimeout attribute on the Reco (see 2.2.1). This event cancels the recognition process automatically for the automatic recognition mode.
- Event Object Info
- the handler can query the event object for data.
- onNoReco is a handler for the event thrown by the speech recognition platform when it is unable to return valid recognition results. The different cases in which this may happen are distinguished by status code. The event stops the recognition process automatically.
- Event Object Info
- the handler can query the event object for data.
- the prompt element is used to specify system output. Its content may be one or more of the following: • inline or referenced text, which may be marked up with prosodic or other speech output information;
- Prompt elements may be interpreted declaratively by downlevel browsers (or activated by SMIL commands) , or by object methods on uplevel browsers.
- the prompt element contains the resources for system output, either as text or references to audio files, or both.
- Simple prompts need specify only the text required for output, e.g.:
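- For example (the prompt wording is illustrative only):

  <prompt id="pmtWelcome">
    Thank you for calling. Which city are you leaving from?
  </prompt>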
- This simple text may also contain further markup of any of the kinds described below.
- Speech Synthesis markup: Any format of speech synthesis markup language can be used inside the prompt element. (This format may be specified in the 'tts' attribute described in 3.2.1.) The following example shows text with an instruction to emphasize certain words within it:
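- A sketch of such a prompt is given below; the emphasis element shown belongs to whichever synthesis markup format is in use and is exemplary only.

  <prompt id="pmtConfirmCity">
    Please say <emph>yes</emph> or <emph>no</emph> to confirm the city.
  </prompt>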
- the actual content of the prompt may need to be computed on the client just before the prompt is output.
- the value needs to be dereferenced in a variable.
- the value element may be used for this purpose.
- Value Element
- value Optional. Retrieves the value of an element in the document.
- targetElement Optional. Either href or targetElement must be specified. The id of the element containing the value to be retrieved.
- targetAttribute Optional. The attribute of the element from which the value will be retrieved.
- href Optional. The URI of an audio segment. href will override targetElement if both are present.
- the targetElement attribute is used to reference an element within the containing document.
- the content of the element whose id is specified by targetElement is inserted into the text to be synthesized. If the desired content is held in an attribute of the element, the targetAttribute attribute may be used to specify the necessary attribute on the targetElement. This is useful for dereferencing the values in HTML form controls, for example.
- the "value" attributes of the "txtBoxOrigin" and "txtBoxDest" elements are inserted into the text before the prompt is output
- the value element may also be used to refer to a pre- recorded audio file for playing instead of, or within, a synthesized prompt.
- the following example plays a beep at the end of the prompt:
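- For example (the prompt wording and the audio file location are exemplary):

  <prompt>
    Thank you. Please wait for the tone.
    <value href="/wav/beep.wav" />
  </prompt>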
- the src attribute may be used with an empty element to reference external content via URI, as in:
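- For example (the URI is exemplary):

  <prompt id="pmtRemote" src="/prompts/welcome.xml" />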
- the target of the src attribute can hold any or all of the above content specified for inline prompts.
- the prompt element holds the following attributes (downlevel browsers) and properties (downlevel and uplevel browsers) .
- prefetch Optional.
- a Boolean flag indicating whether the prompt should be immediately synthesized and cached at browser when the page is loaded. Default is false.
- Uplevel browsers support the following properties in the prompt's DOM object.
- bookmark Read-only. A string object recording the text of the last synthesis bookmark encountered.
- status Read-only. Status code returned by the speech platform.
- Prompt playing may be controlled using the following methods in the prompt's DOM object. In this way, uplevel browsers can start and stop prompt objects, pause and resume prompts in progress, and change the speed and volume of the synthesized speech.
- Start: Starts playback of the prompt. Unless an argument is given, the method plays the contents of the object. Only a single prompt object is considered 'started' at a given time, so if Start is called in succession, all playbacks are played in sequence.
- the prompt DOM object supports the following events, whose handlers may be specified as attributes of the prompt element.
- Event Object Info
- the handler can query the event object for data.
- Event Object Info
- the handler can query the event object for data.
- the following example shows how bookmark events can be used to determine the semantics of a user response - either a correction to a departure city or the provision of a destination city - in terms of when bargein happened during the prompt output.
- the onBargein handler calls a script which sets a global 'mark' variable to the last bookmark encountered in the prompt, and the value of this 'mark' is used in the reco's postprocessing function ('heard') to set the correct value.
- DTMF can cause the prompt object to fire a barge-in event.
- the tags and eventing discussed below with respect to DTMF recognition, and the call control discussed in Section 5, generally pertain to interaction between the voice browser 216 and the media server 214.
- bind: assigns the DTMF conversion result to the proper field.
- targetElement Required. The element to which a partial recognition result will be assigned.
- targetAttribute The attribute of the target element to which the recognition result will be assigned (same as in SMIL 2.0). Default is "value".
- This example demonstrates how to allow users to enter values into multiple fields.
- Example 3 shows how to allow both speech and DTMF inputs and to disable speech when the user starts DTMF.
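- A minimal sketch of the speech-plus-DTMF case is given below. The element names, the DTMF element's event attribute and the grammar/bind details are assumptions made only for illustration; the intent is that a keypress cancels the parallel speech recognition so that keypad entry completes the field.

  <input type="text" name="txtBoxPin" />

  <reco id="recoPin">
    <grammar src="./digits.xml" />
    <bind value="/sml/pin" targetElement="txtBoxPin" />
  </reco>

  <dtmf id="dtmfPin" onkeypress="recoPin.Cancel()">
    <bind value="/dtmf/pin" targetElement="txtBoxPin" />
  </dtmf>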
- An XML DOM Node object representing the DTMF-to-string conversion matrix, also called the DTMF grammar.
- the default grammar is
- The escape key is one key.
- Read-only. A string storing a white-space-separated token string, where each token is converted according to the DTMF grammar.
- Read-Write. The timeout period for adjacent DTMF keystrokes, in milliseconds. If unspecified, defaults to the telephony platform's internal setting.
- Event Object Info
- the handler can query the event obj ect for data .
- Event Object Info
- the handler can query the event object for data.
- Event Object Info
- the handler- can query the event object for data.
- This object is as native as window object in a GUI browser.
- the lifetime of the telephone object is the same as the browser instance itself.
- a voice browser for telephony instantiates the telephone object, one for each call. Users don't instantiate or dispose the object.
- address Read-only. XML DOM node object. Implementation specific. This is the address of the caller. For PSTN, it may be a combination of ANI and ALI. For VoIP, this is the caller's IP address.
- Syntax: telephone.Transfer(strText, strUID, [imaxTime]);
- the browser may release resources allocated for the call. It is up to the application to recover the session state when the transferred call returns using strUID.
- the underlying telephony platform may route the returning call to a different browser. The call can return only when the recipient terminates the call.
- o strText Required. The address of the intended receiver.
- o strUID Required. The session ID uniquely identifying the current call. When the transferred call is routed back, the strUID will appear in the address attribute.
- o imaxTime Optional. Maximum duration in seconds of the transferred call. If unspecified, defaults to a platform-internal value. Return value: None.
- Application developers using the telephony voice browser may implement the following event handlers.
- This example shows scripting wired to the call control events to manipulate the telephony session.
- dialog flow: This example shows how to implement a simple dialog flow which seeks values for input boxes and offers context-sensitive help for the input. It uses the title attribute on the HTML input mechanisms (used in a visual browser as a "tooltip" mechanism) to help form the content of the help prompt.
- the following example shows activation of prompt and reco elements using SMIL mechanisms.
- the QA control adds speech functionality to the primary control to which it is attached.
- Its object model is an abstraction of the content model of the exemplary tags in Appendix A.
- ControlsToSpeechEnable specifies the list of IDs of the primary controls to speech enable. IDs are comma delimited.
- SpeechIndex specifies the ordering information of the QA control - this is used by RunSpeech. Note: If more than one QA control has the same SpeechIndex,
- RunSpeech will execute them in source order. In situations where some QA controls have SpeechIndex specified and some QA controls do not, RunSpeech will order the QA controls first by SpeechIndex, then by source order.
- ClientTest specifies a client-side script function which returns a boolean value to determine when the QA control is considered available for selection by the RunSpeech algorithm.
- the system strategy can therefore be changed by using this as a condition to activate or de-activate QA controls more sensitively than SpeechIndex. If not specified, the QA control is considered available for activation.
- QA control contains an array of question objects or controls, defined by the dialog author. Each question control will typically relate to a function of the system, e.g. asking for a value, etc. Each question control may specify an activation function using the ClientTest attribute, so an active QA control may ask different kinds of questions about its primary control under different circumstances. For example, the activation condition for main question Q_Main may be that the corresponding primary control has no value, and the activation condition for a Q_GiveHelp may be that the user has just requested help. Each question may specify answer controls from within the QA control which are activated when the question control is outputted.
- QA control contains an -array of statement objects or controls. Statements are used to provide information to the listener, such as welcome prompts.
- QA control contains an array of answer objects or controls.
- An answer control is activated directly by a question control within the QA control, or by a StartEvent from the primary control. Where multiple answers are used, they will typically reflect answers to the system functions, e.g. A_Main might provide a value in response to Q_Main, and A_Confirm might provide a yes/no + correction to a confirm.
- QA control may contain a confirm object or control.
- This object is a mechanism provided to the dialog authors which simplifies the authoring of common confirmation subdialogs.
- a Command array holds a set of command controls. Command controls can be thought of as answer controls without question controls, whose behavior on recognition can be scoped down the control tree.
- the question control is used for the speech output relating to a given primary control. It contains a set of prompts for presenting information or asking a question, and a list of ids of the answer controls, which may provide an answer to that question. If multiple answer controls are specified, these grammars are loaded in parallel when the question is activated. An exception will be thrown if no answer control is specified in the question control.
- ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances a question control is considered active within its QA control (the QA control itself must be active for the question to be evaluated) .
- the first question control with a true condition is selected for output.
- the function may be used to determine whether to output a question which asks for a value ("Which city do you want?") or which attempts to confirm it ("Did you say London?") . If not specified, the question condition is considered true.
- Prompt[] Prompts
- the prompt array specifies a list of prompt objects, discussed below. Prompts are also able to specify conditions of selection (via client functions), and during RunSpeech execution only the first prompt with a true condition is selected for playback.
- Answers is an array of references by ID to controls that are possible answers to the question.
- the behavior is to activate the grammar from each valid answer control in response to the prompt asked by the question control.
- Integer initialTimeout The time in milliseconds between start of recognition and the detection of speech. This value is passed to the recognition platform, and if exceeded, an onSilence event will be thrown from the recognition platform. If not specified, the speech platform will use a default value.
- the speech platform will default to an internal value.
- PromptFunction specifies a client-side function that will be called once the question has been selected but before the prompt is played. This gives a chance to the application developer to perform last minute modifications to the prompt that may be required. PromptFunction takes the ID of the target prompt as a required parameter.
- OnClientNoReco specifies the name of the client-side function to call when the NoReco (mumble) event is received.
- the prompt object contains information on how to play prompts. All the properties defined are read/write properties .
- Count specifies an integer which is used for prompt selection. When the value of the count specified on a prompt matches the value of the count of its question control, the prompt is selected for playback. Legal values are 0 - 100.
- ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances a prompt within an active question control will be selected for output.
- the first prompt with a true condition is selected.
- the function may be used to implement prompt tapering, e.g. "Which city would you like to depart from?" for a function returning true if the user is a first-timer, or "Which city?" for an old hand. If not specified, the prompt's condition is considered true.
- the prompt property contains the text of the prompt to play. This is defined as the content of the prompt element. It may contain further markup, as in TTS rendering information, or <value> elements. As with all parts of the page, it may also be specified as script code within <script> tags, for dynamic rendering of prompt output.
- Source specifies the URL from which to retrieve the text of the prompt to play. If an inline prompt is specified, this property is ignored.
- Bargein is used to specify whether or not barge-in (wherein the user of the client device begins speaking when a prompt is being played) is allowed on the prompt. The default is true.
- OnClientBookmark accesses the name of the client-side function to call when a bookmark is encountered.
- ClientTest and the count attribute of each prompt are evaluated in order.
- the first prompt with both ClientTest and count true is played.
- a missing count is considered true.
- a missing ClientTest is considered true.
- the playOnce attribute specifies whether or not a statement control may be activated more than once per page.
- playOnce is a Boolean attribute with a default (if not specified) of TRUE, i.e., the statement control is executed only once.
- ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances a statement control will be selected for output. RunSpeech will activate the first Statement with ClientTest equal to true. If not specified, the ClientTest condition is considered true.
- PromptFunction specifies a client-side function that will be called once the statement control has been selected but before the prompt is played. This gives a chance to the authors to do last minute modifications to the prompt that may be required.
- the prompt array specifies a list of prompt objects. Prompts are also able to specify conditions of selection (via client functions), and during RunSpeech execution only the first prompt with a true condition is selected for playback.
- Confirm controls are special types of question controls. They may hold all the properties and objects of other question controls, but they are activated differently.
- the RunSpeech algorithm will check the confidence score returned for the answer control of the ControlsToSpeechEnable against the confirmThreshold of that answer control. If it is too low, the confirm control is activated. If the confidence score of the answer control is below the confirmThreshold, then the binding is done but the onClientReco method is not called.
- the dialog author may specify more than one confirm control per QA control. RunSpeech will determine which confirm control to activate based on the function specified by ClientTest.
- the answer control is used to specify speech input resources and features. It contains a set of grammars related to the primary control. Note that an answer may be used independently of a question, in multimodal applications without prompts, for example, or in telephony applications where user initiative may be enabled by extra answers. Answer controls are activated directly by question controls, by a triggering event, or by virtue of explicit scope. An exception will be thrown if no grammar object is specified in the answer control.
- Scope holds the id of any named element on the page. Scope is used in answer control for scoping the availability of user initiative (mixed task initiative: i.e. service jump digressions) grammars. If scope is specified in an answer control, then it will be activated whenever a QA control corresponding to a primary control within the subtree of the contextual control is activated.
- StartEvent specifies the name of the event from the primary control that will activate the answer control (start the Reco object) . This will be typically used in multi-modal applications, eg onMouseDown, for tap-and-talk.
- StopEvent specifies the name of the event from the primary control that will de-activate the answer control (stop the Reco object) . This will be typically used in multi-modal applications, eg onMouseUp, for tap-and-talk.
- ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances an answer control otherwise selected by scope or by a question control will be considered active.
- the test could be used during confirmation for a 'correct' answer control to disable itself when activated by a question control but mixed initiative is not desired (leaving only accept/deny answer controls active).
- a scoped answer control which permits a service jump can determine more flexible means of activation by specifying a test which is true or false depending on another part of the dialog. If not specified, the answer control's condition is considered true.
- Grammars accesses a list of grammar objects.
- DTMFs holds an array of DTMF objects.
- Binds holds a list of the bind objects necessary to map the answer control grammar results (dtmf or spoken) into control values. All binds specified for an answer will be executed when the relevant output is recognized. If no bind is specified, the SML output returned by recognition will be bound to the control specified in the ControlsToSpeechEnable of the QA control.
- OnClientReco specifies the name of the client-side function to call when spoken recognition results become available.
- OnClientDTMF OnClientDTMF holds the name of the client-side function to call when DTMF recognition results become available.
- the value of autobind determines whether or not the system default bindings are implemented for a recognition return from the answer control. If unspecified, the default is true. Setting autobind to false is an instruction to the system not to perform the automatic binding.
- the server attribute is an optional attribute specifying the URI of the speech server to perform the recognition. This attribute over-rides the URI of the global speech server attribute.
- RejectThreshold specifies the minimum confidence score to consider returning a recognized utterance. If overall confidence is below this level, a NoReco event will be thrown. Legal values are 0-100.
- the grammar object contains information on the selection and content of grammars, and the means for processing recognition results. All the properties defined are read/write properties.
- the ClientTest property references a client-side boolean function which determines under which conditions a grammar is active. If multiple grammars are specified within an answer control (e.g. to implement a system/mixed initiative strategy, or to reduce the perplexity of possible answers when the dialog is going badly) , only the first grammar with a true ClientTest function will be selected for activation during RunSpeech execution. If this property is unspecified, true is assumed.
- Source Source accesses the URI of the grammar to load, if specified.
- InlineGrammar accesses the text of the grammar if specified inline. If that property is not empty, the Source attribute is ignored.
- Binds may be specified both for spoken grammar and for DTMF recognition returns in a single answer control.
- Value specifies the text that will be bound into the target element. It is specified as an XPath on the SML output from recognition.
- TargetElement specifies the id of the primary control to which the bind statement applies. If not specified, this is assumed to be the ControlsToSpeechEnable of the relevant QA control.
- TargetAttribute specifies the attribute on the TargetElement control in which to bind the value. If not specified, this is assumed to be the Text property of the target element.
- the Test attribute specifies a condition which must evaluate to true for the binding to be carried out. This is specified as an XML Pattern on the SML output from recognition.
1.5.2.1 Automatic binding
- the default behavior on the recognition return to a speech- enabled primary control is to bind certain properties into that primary control. This is useful for the dialog controls to examine the recognition results from the primary controls across turns (and even pages) . Answer controls will perform the following actions upon receiving recognition results:
- DTMF may be used by answer controls in telephony applications.
- the DTMF object essentially applies a different modality of grammar (a keypad input grammar rather than a speech input grammar) to the same answer.
- the DTMF content model closely matches that of the client side output Tags DTMF element. Binding mechanisms for DTMF returns are specified using the targetAttribute attribute of DTMF object.
- a flag which states whether or not to flush the telephony server's DTMF buffer before recognition begins. Setting flush to false permits DTMF key input to be stored between recognition/page calls, which permits the user to 'type ahead'.
- TargetAttribute specifies the property on the primary control in which to bind the value. If not specified, this is assumed to be the Text property of the primary control.
- the ClientTest property references a client-side boolean function which determines under which conditions a DTMF grammar is active. If multiple grammars are specified within a DTMF object, only the first grammar with a true ClientTest function will be selected for activation during RunSpeech execution. If this property is unspecified, true is assumed.
- DTMFGrammar maps a key to an output value associated with the key.
- the following sample shows how to map the "1" and "2" keys to text output values.
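- For example, an exemplary mapping is sketched below; the element names inside the grammar and the output values are assumptions, since the specification only requires that the DTMFGrammar map a key to an output value.

  <dtmfgrammar>
    <key value="1">yes</key>
    <key value="2">no</key>
  </dtmfgrammar>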
- the command control is a special variation of answer control which can be defined in any QA control.
- Command controls are forms of user input which are not answers to the question at hand (e.g., Help, Repeat, Cancel), and which do not need to bind recognition results into primary controls.
- the command grammar is active for every QA control within that scope. Hence a command does not need to be activated directly by a question control or an event, and its grammars are activated in parallel, independently of the answer control building process.
- Command controls of the same type at QA controls lower in scope can override superior commands with context-sensitive behavior (and even different / extended grammars if necessary) .
- Scope holds the id of a primary control. Scope is used in command controls for scoping the availability of the command grammars. If scope is specified for a command control, the command's grammars will be activated whenever a QA control corresponding to a primary control within the subtree of the contextual control is activated.
- Type specifies the type of command (e.g. 'Help', 'Cancel', etc.) in order to allow the overriding of identically typed commands at lower levels of the scope tree. Any string value is possible in this attribute, so it is up to the author to ensure that types are used correctly.
- RejectThreshold specifies the minimum confidence level of recognition that is necessary to trigger the command in recognition (this is likely to be used when higher than usual confidence is required, e.g. before executing the result of a 'Cancel' command). Legal values are 0-100.
- Mixed initiative dialogs provide the capability of accepting input for multiple controls with the asking of a single question.
- the answer to the question "what are your travel plans" may provide values for an origin city textbox control, a destination city textbox control and a calendar control ("Fly from Puyallup to Yakima on September 30th").
- a robust way to encode mixed initiative dialogs is to handwrite the mixed initiative grammar and relevant binding statements, and apply these to a single control.
- the following example shows a single page used for a simple mixed initiative voice interaction about travel.
- the first QA control specifies the mixed initiative grammar and binding, and a relevant prompt asking for two items.
- the second and third QA controls are not mixed initiative, and so bind directly to their respective primary control by default (so no bind statements are required) .
- the RunSpeech algorithm will select the QA controls based on an attribute "SpeechIndex" and whether or not their primary controls hold valid values.
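- A sketch of such a page is given below. The element naming, prompt wording, grammar sources and SML paths are exemplary assumptions; the structure follows the description above, with a mixed initiative QA control carrying its own binds and two directed QA controls that bind to their primary controls by default.

  <QA id="QA_Travel" SpeechIndex="1"
      ControlsToSpeechEnable="txtDepCity, txtArrCity" runat="server">
    <Question id="Q_Travel" Answers="A_Travel">
      <prompt>What are your travel plans?</prompt>
    </Question>
    <Answer id="A_Travel">
      <grammar src="./travel.xml" />
      <bind value="/sml/dep_city" targetElement="txtDepCity" />
      <bind value="/sml/arr_city" targetElement="txtArrCity" />
    </Answer>
  </QA>

  <QA id="QA_DepCity" SpeechIndex="2"
      ControlsToSpeechEnable="txtDepCity" runat="server">
    <Question id="Q_DepCity" Answers="A_DepCity">
      <prompt>Which city are you leaving from?</prompt>
    </Question>
    <Answer id="A_DepCity"> <grammar src="./city.xml" /> </Answer>
  </QA>

  <QA id="QA_ArrCity" SpeechIndex="3"
      ControlsToSpeechEnable="txtArrCity" runat="server">
    <Question id="Q_ArrCity" Answers="A_ArrCity">
      <prompt>Which city are you going to?</prompt>
    </Question>
    <Answer id="A_ArrCity"> <grammar src="./city.xml" /> </Answer>
  </QA>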
- Application developers can also specify several question controls in a QA control. Some question controls can allow a mixed initiative style of answer, whilst others are more directed. By authoring conditions on these question controls, application developer can select between the questions depending on the dialogue situation.
- the mixed initiative question asks for the value of the two textboxes at the same time (e.g., 'what are your travel plans?') and calls the mixed initiative answer (e.g., 'from London to Seattle'). If this fails, then the value of each textbox is asked for separately.
- the mixed-initiative grammar may still be activated, thus allowing users to provide both values .
- a standard QA control can specify a scope for the activation of its grammars. Like a command control, this QA control will activate the grammar from a relevant answer control whenever another QA control is activated within the scope of this context . Note that its question control will only be asked if the QA control itself is activated.
- the PromptFunction script is called after a question control is selected but before a prompt is chosen and played. This lets application developers build or modify the prompt at the last minute. In the example below, this is used to change the prompt depending on the level of experience of the users.
  Prompt1.Text = "Please choose between e-mail, calendar and news"; return;
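- Placed in context, such a PromptFunction might be sketched as follows. The experience flag, prompt id and alternative prompt wording are assumptions; per the PromptFunction description, the function receives the id of the target prompt as a required parameter.

  <script>
    function SetMainPrompt(promptId) {
      if (userIsFirstTimer) {
        Prompt1.Text = "Please choose between e-mail, calendar and news";
        return;
      }
      Prompt1.Text = "Main menu?";
    }
  </script>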
- a mechanism is needed to provide voice-only clients with the information necessary to properly render speech-enabled pages. Such a mechanism must provide the execution of dialog logic and maintain state of user prompting and grammar activation as specified by the application developer.
- the page containing speech-enabled controls is visible to the user of the client device.
- the user of the client device may provide speech input into any visible speech-enabled control in any desired order using a multimodal paradigm.
- the mechanism used by voice-only clients to render speech- enabled pages is the RunSpeech script or algorithm.
- the RunSpeech script relies upon the SpeechIndex attribute of the QA control and the SpeechGroup control discussed below.
- the system parses a control script or webpage having the server controls and creates a tree structure of server controls. Normally the root of the tree is the Page control. If the control script uses a custom or user control, the children tree of this custom or user control is expanded. Every node in the tree has an ID and it is easy to have name conflicts in the tree when it expands. To deal with possible name conflicts, the system includes a concept of NamingContainer. Any node in the tree can implement NamingContainer and its children live within that namespace.
- the QA controls can appear anywhere in the server control tree.
- a SpeechGroup control is provided.
- the SpeechGroup control is hidden from the application developer.
- One SpeechGroup control is created and logically attached to every NamingContainer node that contains QA controls in its children tree. QA and SpeechGroup controls are considered members of their direct NamingContainer's SpeechGroup. The top level SpeechGroup control is attached to the Page object. This membership logically constructs a tree - a logical speech tree - of QA controls and SpeechGroup controls.
- SpeechGroup control: For simple speech-enabled pages or script (i.e., pages that do not contain other NamingContainers), only the root SpeechGroup control is generated and placed in the page's server control tree before the page is sent to the voice-only client.
- the SpeechGroup control maintains information regarding the number and rendering order of QA controls on the page.
- SpeechGroup controls For pages containing a combination of QA control (s) and NamingContainer (s) , multiple SpeechGroup controls are generated: one SpeechGroup control for the page (as described above) and a SpeechGroup control for each NamingContainer.
- the page-level SpeechGroup control maintains QA control information as described above as well as number and rendering order of composite controls.
- the SpeechGroup control associated with each NamingContainer maintains the number and rendering order of QAs within each composite.
- the main job of the SpeechGroup control is to maintain the list of QA controls and SpeechGroups on each page and/or the list of QA controls comprising a composite control.
- When rendering the client side markup script (e.g. HTML), each SpeechGroup writes out a QACollection object on the client side.
- a QACollection has a list of QA controls and QACollections . This corresponds to the logical server side speech tree.
- the RunSpeech script will query the page-level QACollection object for the next QA control to invoke during voice-only dialog processing.
- the page level SpeechGroup control located on each page is also responsible for:
- SpeechGroup controls: When the first SpeechGroup control renders, it queries the System.Web.UI.Page.Request.Browser property for the browser string. This property is then passed to the RenderSpeechHTML and RenderSpeechScript methods for each QA control on the page. The QA control will then render for the appropriate client (multimodal or voice-only).
- the onLoad event is sent to each control on the page.
- the page-level SpeechGroup control is created by the first QA control receiving the onLoad event.
- the creation of SpeechGroup controls is done in the following manner: (assume a page containing composite controls)
- the Render event is sent to the speech-enabled page.
- When the page-level SpeechGroup control receives the Render event, it generates client side script to include RunSpeech.js and inserts it into the page that is eventually sent to the client device. It also calls all its direct children to render speech related HTML and scripts. If a child is a SpeechGroup, the child in turn calls its children again. In this manner, the server rendering happens along the server side logical speech tree.
- SpeechGroup: When a SpeechGroup renders, it lets its children (which can be either QA or SpeechGroup) render speech HTML and scripts in the order of their SpeechIndex. But a SpeechGroup is hidden and doesn't naturally have a SpeechIndex. In fact, a SpeechGroup will have the same SpeechIndex as its NamingContainer, the one it attaches to.
- the NamingContainer is usually a UserControl or other visible control, and an author can set SpeechIndex on it.
- RunSpeech The purpose of RunSpeech is to permit dialog flow via logic which is specified in script or logic on the client.
- RunSpeech is specified in an external script file, and loaded by a single line generated by the server-side rendering of the SpeechGroup control, e.g.:
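- For example (the script path and file name casing are exemplary):

  <script src="/scripts/RunSpeech.js"></script>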
- the RunSpeech.js script file should expose a means for validating on the client that the script has loaded correctly and has the right version id, etc.
- the actual validation script will be automatically generated by the page class as inline functions that are executed after the attempt to load the file.
- Linking to an external script is functionally equivalent to specifying it inline, yet it is both more efficient, since browsers are able to cache the file, and cleaner, since the page is not cluttered with generic functions.
- Tap-and-talk multimodality can be enabled by coordinating the activation of grammars with the onMouseDown event.
- the wiring script to do this will be generated by the Page based on the relationship between controls (as specified in the ControlsToSpeechEnable property of the QA control).
- the ⁇ input> and ⁇ reco> elements are output by each control's Render method.
- the wiring mechanism to add the grammar activation command is performed by client-side script generated by the Page, which changes the attribute of the primary control to add the activation command before any existing handler for the activation event:
  <script>
    TextBox1.onMouseDown = "Reco1.Start();" + TextBox1.onMouseDown;
  </script>
- Page Class properties The Page also contains the following properties which are available to the script at runtime:
- SML - a name/value pair for the ID of the control and its associated SML returned by recognition.
- SpokenText - a name/value pair for the ID of the control and its associated recognized utterance.
- the RunSpeech algorithm is used to drive dialog flow on the client device. This may involve system prompting and dialog management (typically for voice-only dialogs) , and/or processing of speech input (voice-only and multimodal dialogs) . It is specified as a script file referenced by URI from every relevant speech-enabled page (equivalent to inline embedded script) .
- Rendering of the page for voice only browsers is done in the following manner:
- RunSpeech module or function works as follows (RunSpeech is called in response to document.onreadystate becoming "complete"): (1) Find the first active QA control in speech index order (determining whether a QA control is active is explained below; a client-side sketch of this activation logic is given at the end of this list).
- a QA control is considered active if and only if:
- the QA control's ClientTest either is not present or returns true, AND (2)
- the QA control contains an active question control or statement control (tested in source order), AND (3) Either: a.
- the QA control contains only statement controls, OR b. At least one of the controls referenced by the QA control's ControlsToSpeechEnable has an empty or default value.
- a question control is considered active if and only if:
- the question control contains an active prompt object.
- a prompt object is considered active if and only if:
- the prompt object's ClientTest either is not present or returns true, AND (2)
- the prompt object's Count is either not present, or is less than or equal to the Count of the parent question control.
- a QA control is run as follows:
- An answer control is considered active if and only if:
- a command control is considered active if and only if:
- RunSpeech relies on events to continue driving the dialog - as described so far it would stop after running a single QA control.
- Event handlers are included for Prompt.OnComplete, Reco.OnReco, Reco.OnSilence, Reco.OnMaxTimeout, and Reco.OnNoReco. Each of these will be described in turn.
- RunSpeechOnComplete works as follows:
- If the active Prompt object was contained within a statement control, or a question control which had no active answer controls, RunSpeech is called.
- RunSpeechOnReco is responsible for creating and setting the SML, SpokenText and Confidence properties of the ControlsToSpeechEnable.
- the SML, SpokenText and Confidence properties are then available to scripts at runtime.
- RunSpeechOnSilence, RunSpeechOnMaxTimeout, and RunSpeechOnNoReco all work the same way: (1) The appropriate OnClientXXX function is called, if specified.
- the first active confirm control is found (the activation of a confirm control is determined in exactly the same way as the activation of a question control) .
- RunSpeech is called.
- Else the QA control is run, with the selected confirm control as the active question control.
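- The following client-side script sketches the activation test and dialog loop described above. It is a non-authoritative illustration only: the object shapes (QAControls, Questions, Statements, Prompts, ClientTest, Count, run) are assumptions made for the sketch, not the actual RunSpeech.js object model, and error handling is omitted.

    <script language="javascript">
    // Sketch only: QAControls is assumed to be an array of QA objects, already
    // sorted by SpeechIndex, populated by server-rendered script.
    var QAControls = [];

    function promptIsActive(prompt, question) {
        // Active iff ClientTest is absent or returns true, and Count (if present)
        // does not exceed the parent question's Count.
        return (!prompt.ClientTest || prompt.ClientTest()) &&
               (prompt.Count == null || prompt.Count <= question.Count);
    }

    function questionIsActive(question) {
        // Active iff the question contains an active prompt object.
        for (var i = 0; i < question.Prompts.length; i++) {
            if (promptIsActive(question.Prompts[i], question)) return true;
        }
        return false;
    }

    function qaIsActive(qa) {
        // (1) The QA control's ClientTest is absent or returns true.
        if (qa.ClientTest && !qa.ClientTest()) return false;
        // (2) It contains an active question or statement control (source order);
        // statement activation details are simplified here.
        var hasActiveOutput = qa.Statements.length > 0;
        for (var i = 0; i < qa.Questions.length && !hasActiveOutput; i++) {
            hasActiveOutput = questionIsActive(qa.Questions[i]);
        }
        if (!hasActiveOutput) return false;
        // (3) Either the QA holds only statement controls, or at least one
        // referenced control still has an empty or default value.
        if (qa.Questions.length == 0) return true;
        for (var j = 0; j < qa.ControlsToSpeechEnable.length; j++) {
            var ctl = document.getElementById(qa.ControlsToSpeechEnable[j]);
            if (ctl && (ctl.value == null || ctl.value == "")) return true;
        }
        return false;
    }

    function RunSpeech() {
        // Find the first active QA control in speech index order and run it;
        // qa.run() (prompt playback, grammar start) is assumed and not shown.
        for (var i = 0; i < QAControls.length; i++) {
            if (qaIsActive(QAControls[i])) {
                QAControls[i].run();
                return;
            }
        }
    }

    // Continuation is event driven, as described above; for example:
    function RunSpeechOnComplete() {
        // Simplified: the full test (statement control, or question with no
        // active answer controls) is given in the text.
        RunSpeech();
    }
    </script>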
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Web server controls (302, 304, 306) are provided for generating client side markups with recognition and/or audible prompting. Three approaches are disclosed for implementation of the controls (302, 304, 306).
Description
WEB SERVER CONTROLS FOR WEB ENABLED RECOGNITION AND/OR AUDIBLE PROMPTING
BACKGROUND OF THE INVENTION
The present invention relates to access of information over a network such as the Internet. More particularly, the present invention relates to controls for a server that generates client side markup enabled with recognition and/or audible prompting.
Small computing devices such as personal digital assistants (PDAs), devices and portable phones are used with ever increasing frequency by people in their day-to-day activities. With the increase in processing power now available for microprocessors used to run these devices, the functionality of these devices is increasing, and in some cases, merging. For instance, many portable phones can now be used to access and browse the Internet as well as to store personal information such as addresses, phone numbers and the like.
In view that these computing devices are being used for browsing the Internet, or are used in other server/client architectures, it is therefore necessary to enter information into the computing device. Unfortunately, due to the desire to keep these devices as small as possible in order that they are easily carried, conventional keyboards having all the letters of the alphabet as isolated buttons are usually not possible due to the limited surface area available on the housings of the computing devices.
To address this problem, there has been increased interest and adoption of using voice or speech to access information over a wide area network such as the Internet. For example, voice portals such as through the use of VoiceXML (voice extensible markup language) have been advanced to allow Internet content to be accessed using only a telephone. In this architecture, a document server (for example, a web server) processes requests from a client through a VoiceXML interpreter. The web server can produce VoiceXML documents in reply, which are processed by the VoiceXML interpreter and rendered audibly to the user. Using voice commands through voice recognition, the user can navigate the web.
Generally, there are two techniques of "speech enabling" information or web content. In the first technique, existing visual markup language pages typically visually rendered by a device having a display are interpreted and rendered aurally. However, this approach often yields poor results because markup meant for visual interaction usually does not have enough information to create a sensible aural dialog automatically. In addition, voice interaction is prone to error, especially over noisy channels such as a telephone. Without visual or other forms of persistent feedback, navigation through the web server application can be extremely difficult for the user. This approach thus requires mechanisms such as help messages, which are also rendered audibly to the user in order to help them navigate through the
website. The mechanisms are commonly referred to as "voice dialogs", which also must address errors when incorrect information or no information is provided by the user, for example, in response to an audible question. Since the mechanisms are not commonly based on the visual content of the web page, they cannot be generated automatically, and therefore typically require extensive development time by the application developer.
A second approach to speech enabling web content includes writing specific voice pages in a new language. An advantage of this approach is that the speech-enabled page contains all the mechanisms needed for aural dialog such as repairs and navigational help. However, a significant disadvantage is that the application pages must then be adapted to include the application logic as found in the visual content pages. In other words, the application logic of the visual content pages must be rewritten in the form of the speech-enabling language. Even when this process can be automated by the use of tools creating visual and aural pages from the same specification, the visual and speech-enabled pages are usually difficult to keep synchronized as they are maintained. In addition, this approach does not easily allow multimodal applications, for example where both visual and speech interaction are provided on the web page. Since the visual and speech-enabled pages are unrelated, the input and output logic is not easily coordinated to work with each other.
To date, speech interaction authoring is also cumbersome due to the organization or format currently used as the interface. Generally, the speech interface either tends to be tied too closely to the business logic of the application, which inhibits re-use of the elements of the speech interface in other applications, or the speech interface is too restricted by a simplistic dialog model (e.g. forms and fields).
There is thus an ongoing need to improve upon the architecture and methods used to provide speech recognition in a server/client architecture such as the Internet. In particular, a method, system or authoring tool that addresses one, several or all of the foregoing disadvantages and thus provides generation of speech-enabled recognition and/or speech-enabled prompting of client markup from a web server is needed.
SUMMARY OF THE INVENTION Web server controls are provided for generating client side markups with recognition and/or audible prompting. Three approaches are disclosed for implementation of the controls.
In a first approach, controls commonly related to visual rendering are extended to include attributes related to recognition and/or audible prompting. Typically, controls such as "label" use a library having markup information, which provides a visual prompt on a display. Similarly, "textbox" provides an input field on a visual display. In the first approach, an additional library is provided for recognition and/or audible prompting, wherein the controls include attributes or parameters to use both libraries.
In a second approach, the controls access the current, existing library for visual markup information, but include attributes and mechanisms to perform recognition and/or audible prompting. In other words, the controls use the library, but only when visual rendering is desired.
In a third approach, a set of companion controls having attributes related to recognition and/or audible prompting are formed. The companion controls use a library having recognition and audible prompting markup information. The companion controls are selectively associated with visual controls. In this manner, application logic remains with the visual controls, wherein the companion controls provide recognized results to the visual controls. The companion controls follow a dialog in that controls are provided for prompting a question, obtaining an answer, confirming a result, providing a command, or making a statement. A question/answer control can also be formed from one or more of these controls in order to form a dialog or sub-dialog pertaining to a specific topic.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a plan view of a first embodiment of a computing device operating environment.
FIG. 2 is a block diagram of the computing device of FIG. 1.
FIG. 3 is a block diagram of a general purpose computer.
FIG. 4 is a block diagram of an architecture for a client/server system.
FIG. 5 is a display for obtaining credit card information.
FIG. 6 is an exemplary page of markup language executable on a client having a display and voice recognition capabilities.
FIG. 7 is a block diagram illustrating a first approach for providing recognition and audible prompting in client side markups.
FIG. 8 is a block diagram illustrating a second approach for providing recognition and audible prompting in client side markups.
FIG. 9 is a block diagram illustrating a third approach for providing recognition and audible prompting in client side markups.
FIG. 10 is a block diagram illustrating companion controls.
FIG. 11 is a detailed block diagram illustrating companion controls.
DETAILED DESCRIPTION OF THE ILLUSTRATIVE EMBODIMENTS
Before describing architecture of web based recognition and methods for implementing the same, it may be useful to describe generally computing devices that can function in the architecture. Referring now to FIG. 1, an exemplary form of a data management
device (PIM, PDA or the like) is illustrated at 30. However, it is contemplated that the present invention can also be practiced using other computing devices discussed below, and in particular, those computing devices having limited surface areas for input buttons or the like. For example, phones and/or data management devices will also benefit from the present invention. Such devices will have an enhanced utility compared to existing portable personal information management devices and other portable electronic devices, and the functions and compact size of such devices will more likely encourage the user to carry the device at all times. Accordingly, it is not intended that the scope of the architecture herein described be limited by the disclosure of an exemplary data management or PIM device, phone or computer herein illustrated.
An exemplary form of a data management mobile device 30 is illustrated in FIG. 1. The mobile device 30 includes a housing 32 and has a user interface including a display 34, which uses a contact sensitive display screen in conjunction with a stylus 33. The stylus 33 is used to press or contact the display 34 at designated coordinates to select a field, to selectively move a starting position of a cursor, or to otherwise provide command information such as through gestures or handwriting. Alternatively, or in addition, one or more buttons 35 can be included on the device 30 for navigation. In addition, other input mechanisms such as rotatable
wheels, rollers or the like can also be provided. However, it should be noted that the invention is not intended to be limited by these forms of input mechanisms. For instance, another form of input can include a visual input such as through computer vision.
Referring now to FIG. 2, a block diagram illustrates the functional components comprising the mobile device 30. A central processing unit (CPU) 50 implements the software control functions. CPU 50 is coupled to display 34 so that text and graphic icons generated in accordance with the controlling software appear on the display 34. A speaker 43 can be coupled to CPU 50 typically with a digital-to-analog converter 59 to provide an audible output. Data that is downloaded or entered by the user into the mobile device 30 is stored in a non-volatile read/write random access memory store 54 bi-directionally coupled to the CPU 50. Random access memory (RAM) 54 provides volatile storage for instructions that are executed by CPU 50, and storage for temporary data, such as register values. Default values for configuration options and other variables are stored in a read only memory (ROM) 58. ROM 58 can also be used to store the operating system software for the device that controls the basic functionality of the mobile device 30 and other operating system kernel functions (e.g., the loading of software components into RAM 54).
RAM 54 also serves as a storage for the code in the manner analogous to the function of a hard drive on a PC that is used to store application programs. It should be noted that although nonvolatile memory is used for storing the code, it alternatively can be stored in volatile memory that is not used for execution of the code.
Wireless signals can be transmitted/received by the mobile device through a wireless transceiver 52, which is coupled to CPU 50. An optional communication interface 60 can also be provided for downloading data directly from a computer (e.g., desktop computer), or from a wired network, if desired. Accordingly, interface 60 can comprise various forms of communication devices, for example, an infrared link, modem, a network card, or the like.
Mobile device 30 includes a microphone 29, an analog-to-digital (A/D) converter 37, and an optional recognition program (speech, DTMF, handwriting, gesture or computer vision) stored in store 54. By way of example, in response to audible information, instructions or commands from a user of device 30, microphone 29 provides speech signals, which are digitized by A/D converter 37. The speech recognition program can perform normalization and/or feature extraction functions on the digitized speech signals to obtain intermediate speech recognition results. Using wireless transceiver 52 or communication interface 60, speech data is
transmitted to a remote recognition server 204 discussed below and illustrated in the architecture of FIG. 5. Recognition results are then returned to mobile device 30 for rendering (e.g. visual and/or audible) thereon, and eventual transmission to a web server 202 (FIG. 5), wherein the web server 202 and mobile device 30 operate in a client/server relationship. Similar processing can be used for other forms of input. For example, handwriting input can be digitized with or without pre-processing on device 30. Like the speech data, this form of input can be transmitted to the recognition server 204 for recognition wherein the recognition results are returned to at least one of the device 30 and/or web server 202. Likewise, DTMF data, gesture data and visual data can be processed similarly. Depending on the form of input, device 30 (and the other forms of clients discussed below) would include necessary hardware such as a camera for visual input.
In addition to the portable or mobile computing devices described above, it should also be understood that the present invention can be used with numerous other computing devices such as a general desktop computer. For instance, the present invention will allow a user with limited physical abilities to input or enter text into a computer or other computing device when other conventional input devices, such as a full alpha-numeric keyboard, are too difficult to operate.
The invention is also operational with numerous other general purpose or special purpose computing systems, environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, wireless or cellular telephones, regular telephones (without any screen), personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The following is a brief description of a general purpose computer 120 illustrated in FIG. 3. However, the computer 120 is again only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computer 120 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated therein.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or
implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. Tasks performed by the programs and modules are described below and with the aid of figures. Those skilled in the art can implement the description and figures as processor executable instructions, which can be written on any form of a computer readable medium.
With reference to FIG. 3, components of computer 120 may include, but are not limited to, a processing unit 140, a system memory 150, and a system bus 141 that couples various system components including the system memory to the processing unit 140. The system bus 141 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Universal Serial Bus (USB) , Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus. Computer 120 typically includes a variety of computer readable mediums.
Computer readable mediums can be any available media that can be accessed by computer 120 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable mediums may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 120.
Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired
connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 150 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 151 and random access memory (RAM) 152. A basic input/output system 153 (BIOS), containing the basic routines that help to transfer information between elements within computer 120, such as during startup, is typically stored in ROM 151. RAM 152 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 140. By way of example, and not limitation, FIG. 3 illustrates operating system 154, application programs 155, other program modules 156, and program data 157.
The computer 120 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only, FIG. 3 illustrates a hard disk drive 161 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 171 that reads from or writes to a removable, nonvolatile magnetic disk 172, and an optical disk drive 175 that reads from or writes to a removable, nonvolatile optical disk 176 such as a CD ROM or other optical media. Other removable/nonremovable, volatile/nonvolatile computer storage media that can be used in the exemplary operating
environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 161 is typically connected to the system bus 141 through a non-removable memory interface such as interface 160, and magnetic disk drive 171 and optical disk drive 175 are typically connected to the system bus 141 by a removable memory interface, such as interface 170.
The drives and their associated computer storage media discussed above and illustrated in FIG. 3, provide storage of computer readable instructions, data structures, program modules and other data for the computer 120. In FIG. 3, for example, hard disk drive 161 is illustrated as storing operating system 164, application programs 165, other program modules 166, and program data 167. Note that these components can either be the same as or different from operating system 154, application programs 155, other program modules 156, and program data 157. Operating system 164, application programs 165, other program modules 166, and program data 167 are given different numbers here to illustrate that, at a minimum, they are different copies.
A user may enter commands and information into the computer 120 through input devices such as a keyboard 182, a microphone 183, and a pointing device 181, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like.
These and other input devices are often connected to the processing unit 140 through a user input interface 180 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB) . A monitor 184 or other type of display device is also connected to the system bus 141 via an interface, such as a video interface 185. In addition to the monitor, computers may also include other peripheral output devices such as speakers 187 and printer 186, which may be connected through an output peripheral interface 188.
The computer 120 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 194. The remote computer 194 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 120. The logical connections depicted in FIG. 3 include a local area network (LAN) 191 and a wide area network (WAN) 193, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 120 is connected to the LAN 191 through a network interface or adapter 190. When used in a WAN networking environment, the computer 120
typically includes a modem 192 or other means for establishing communications over the WAN 193, such as the Internet. The modem 192, which may be internal or external, may be connected to the system bus 141 via the user input interface 180, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 120, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 3 illustrates remote application programs 195 as residing on remote computer 194. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
EXEMPLARY ARCHITECTURE FIG. 4 illustrates architecture 200 for web based recognition as can be used with the present invention. Generally, information stored in a web server 202 can be accessed through mobile device 30 (which herein also represents other forms of computing devices having a display screen, a microphone, a camera, a touch sensitive panel, etc., as required based on the form of input), or through phone 80 wherein information is requested audibly or through tones generated by phone 80 in response to keys depressed and wherein information from web server 202 is provided only audibly back to the user.
In this exemplary embodiment, architecture 200 is unified in that whether information is obtained through device 30 or phone 80 using speech recognition, a single recognition server 204 can support either mode of operation. In addition, architecture 200 operates using an extension of well-known markup languages (e.g. HTML, XHTML, dHTML, XML, WML, and the like). Thus, information stored on web server 202 can also be accessed using well-known GUI methods found in these markup languages. By using an extension of well-known markup languages, authoring on the web server 202 is easier, and legacy applications currently existing can be also easily modified to include voice or other forms of recognition.
Generally, device 30 executes HTML+ scripts, or the like, provided by web server 202. When voice recognition is required, by way of example, speech data, which can be digitized audio signals or speech features wherein the audio signals have been preprocessed by device 30 as discussed above, are provided to recognition server 204 with an indication of a grammar or language model to use during speech recognition. The implementation of the recognition server 204 can take many forms, one of which is illustrated, but generally includes a recognizer 211. The results of recognition are provided back to device 30 for local rendering if desired or appropriate. Upon compilation of information through recognition and any graphical
user interface if used, device 30 sends the information to web server 202 for further processing and receipt of further HTML scripts, if necessary.
As illustrated in FIG. 4, device 30, web server 202 and recognition server 204 are commonly connected, and separately addressable, through a network 205, herein a wide area network such as the Internet. It therefore is not necessary that any of these devices be physically located adjacent to each other. In particular, it is not necessary that web server 202 includes recognition server 204. In this manner, authoring at web server 202 can be focused on the application to which it is intended without the authors needing to know the intricacies of recognition server 204. Rather, recognition server 204 can be independently designed and connected to the network 205, and thereby, be updated and improved without further changes required at web server 202. As discussed below, web server 202 can also include an authoring mechanism that can dynamically generate client-side markups and scripts. In a further embodiment, the web server 202, recognition server 204 and client 30 may be combined depending on the capabilities of the implementing machines. For instance, if the client comprises a general purpose computer, e.g. a personal computer, the client may include the recognition server 204. Likewise, if desired, the web server 202 and recognition server 204 can be incorporated into a single machine.
Access to web server 202 through phone 80 includes connection of phone 80 to a wired or wireless telephone network 208, that in turn, connects phone 80 to a third party gateway 210. Gateway 210 connects phone 80 to a telephony voice browser 212. Telephony voice browser 212 includes a media server 214 that provides a telephony interface and a voice browser 216. Like device 30, telephony voice browser 212 receives HTML scripts or the like from web server 202. In one embodiment, the HTML scripts are of the form similar to HTML scripts provided to device 30. In this manner, web server 202 need not support device 30 and phone 80 separately, or even support standard GUI clients separately. Rather, a common markup language can be used. In addition, like device 30, voice recognition from audible signals transmitted by phone 80 is provided from voice browser 216 to recognition server 204, either through the network 205, or through a dedicated line 207, for example, using TCP/IP. Web server 202, recognition server 204 and telephony voice browser 212 can be embodied in any suitable computing environment such as the general purpose desktop computer illustrated in FIG. 3.
However, it should be noted that if DTMF recognition is employed, this form of recognition would generally be performed at the media server 214, rather than at the recognition server 204. In other words, the DTMF grammar would be used by the media server 214.
Referring back to FIG. 4, web server 202 can include a server side plug-in authoring tool or module 209 (e.g. ASP, ASP+, ASP.Net by Microsoft Corporation, JSP, Javabeans, or the like). Server side plug-in module 209 can dynamically generate client-side markups and even a specific form of markup for the type of client accessing the web server 202. The client information can be provided to the web server 202 upon initial establishment of the client/server relationship, or the web server 202 can include modules or routines to detect the capabilities of the client device. In this manner, server side plug-in module 209 can generate a client side markup for each of the voice recognition scenarios, i.e. voice only through phone 80 or multimodal for device 30. By using a consistent client side model, application authoring for many different clients is significantly easier.
In addition to dynamically generating client side markups, high-level dialog modules, discussed below, can be implemented as a server-side control stored in store 211 for use by developers in application authoring. In general, the high-level dialog modules 211 would generate dynamically client-side markup and script in both voice-only and multimodal scenarios based on parameters specified by developers. The high-level dialog modules 211 can include parameters to generate client-side markups to fit the developers' needs.
EXEMPLARY CLIENT SIDE EXTENSIONS Before describing dynamic generation of client-side markups to which the present invention is directed, it may be helpful to first discuss an exemplary form of extensions to the markup language for use in web based recognition.
As indicated above, the markup languages such as HTML, XHTML, cHTML, XML, WML or any other SGML-derived markup, which are used for interaction between the web server 202 and the client device 30, are extended to include controls and/or objects that provide recognition in a client/server architecture. Generally, controls and/or objects can include one or more of the following functions: recognizer controls and/or objects for recognizer configuration, recognizer execution and/or post-processing; synthesizer controls and/or objects for synthesizer configuration and prompt playing; grammar controls and/or objects for specifying input grammar resources; and/or binding controls and/or objects for processing recognition results. The extensions are designed to be a lightweight markup layer, which adds the power of an audible, visual, handwriting, etc. interface to existing markup languages. As such, the extensions can remain independent of: the high-level page in which they are contained, e.g. HTML; the low-level formats which the extensions use to refer to linguistic resources, e.g. the text-to-speech and grammar formats; and the individual properties of the recognition and speech-synthesis platforms used in
the recognition server 204. Although speech recognition will be discussed below, it should be understood that the techniques, tags and server side controls described hereinafter can be similarly applied in handwriting recognition, gesture recognition and image recognition.
In the exemplary embodiment, the extensions (also commonly known as "tags") are a small set of XML elements, with associated attributes and DOM object properties, events, and methods, which may be used in conjunction with a source markup document to apply a recognition and/or audible prompting interface, DTMF or call control to a source page. The extensions' formalities and semantics are independent of the nature of the source document, so the extensions can be used equally effectively within HTML, XHTML, cHTML, XML, WML, or with any other SGML-derived markup. The extensions follow the document object model wherein new functional objects or elements, which can be hierarchical, are provided. Each of the elements are discussed in detail in the Appendix, but generally the elements can include attributes, properties, methods, events and/or other "child" elements.
At this point, it should also be noted that the extensions may be interpreted in two different "modes" according to the capabilities of the device upon which the browser is being executed. In a first mode, "object mode", the full capabilities are available. The programmatic manipulation of the
extensions by an application is performed by whatever mechanisms are enabled by the browser on the device, e.g. a JScript interpreter in an XHTML browser, or a WMLScript interpreter in a WML browser. For this reason, only a small set of core properties and methods of the extensions need to be defined, and these manipulated by whatever programmatic mechanisms exist on the device or client side. The object mode provides eventing and scripting and can offer greater functionality to give the dialog author a much finer client-side control over speech interactions. As used herein, a browser that supports full eventing and scripting is called an "uplevel browser". This form of a browser will support all the attributes, properties, methods and events of the extensions. Uplevel browsers are commonly found on devices with greater processing capabilities.
The extensions can also be supported in a "declarative mode". As used herein, a browser operating in a declarative mode is called a "downlevel browser" and does not support full eventing and scripting capabilities. Rather, this form of browser will support the declarative aspects of a given extension (i.e. the core element and attributes), but not all the DOM (document object model) object properties, methods and events. This mode employs exclusively declarative syntax, and may further be used in conjunction with declarative multimedia synchronization and coordination mechanisms (synchronized markup language) such as
SMIL (Synchronized Multimedia Integration Language) 2.0. Downlevel browsers will typically be found on devices with limited processing capabilities.
At this point though, a particular mode of entry should be discussed. In particular, use of speech recognition in conjunction with at least a display and, in a further embodiment, a pointing device as well which enables the coordination of multiple modes of input, e.g. to indicate the fields for data entry, is particularly useful. Specifically, in this mode of data entry, the user is generally able to coordinate the actions of the pointing device with the speech input, so for example the user is under control of when to select a field and provide corresponding information relevant to the field. For instance, in the credit card submission graphical user interface (GUI) illustrated in FIG. 5, a user could first decide to enter the credit card number in field 252 and then enter the type of credit card in field 250 followed by the expiration date in field 254. Likewise, the user could return back to field 252 and correct an errant entry, if desired. When combined with speech recognition, an easy and natural form of navigation is provided. As used herein, this form of entry using both a screen display allowing free form actions of the pointing device on the screen, e.g. the selection of fields, and recognition is called "multimodal".
Referring to FIG. 6, an HTML markup language code is illustrated. The HTML code includes a body portion 270 and a script portion 272. Entry of information in each of the fields 250, 252 and 254 is controlled or executed by code portions 280, 282 and 284, respectively. Referring first to code portion 280, on selection of field 250, for example, by use of stylus 33 of device 30, the event "onClick" is initiated which calls or executes function "talk" in script portion 272. This action activates a grammar used for speech recognition that is associated with the type of data generally expected in field 250. This type of interaction, which involves more than one technique of input (e.g. voice and pen-click/roller), is referred to as "multimodal".
Referring now back to the grammar, the grammar is a syntactic grammar such as but not limited to a context-free grammar, an N-grammar or a hybrid grammar. (Of course, DTMF grammars, handwriting grammars, gesture grammars and image grammars would be used when corresponding forms of recognition are employed. As used herein, a "grammar" includes information for performing recognition, and in a further embodiment, information corresponding to expected input to be entered, for example, in a specific field.) A control 290 (herein identified as "reco") includes various elements, two of which are illustrated, namely a grammar element "grammar" and a "bind" element. Generally, like the code downloaded to a client from web server 202, the grammars can
originate at web server 202 and be downloaded to the client and/or forwarded to a remote server for speech processing. The grammars can then be stored locally thereon in a cache. Eventually, the grammars are provided to the recognition server 204 for use in recognition. The grammar element is used to specify grammars, either inline or referenced using an attribute.
Upon receipt of recognition results from recognition server 204 corresponding to the recognized speech, handwriting, gesture, image, etc., the syntax of reco control 290 is provided to receive the corresponding results and associate them with the corresponding field, which can include rendering of the text therein on display 34. In the illustrated embodiment, upon completion of speech recognition with the result sent back to the client, the reco object is deactivated and the recognized text is associated with the corresponding field. Portions 282 and 284 operate similarly wherein unique reco objects and grammars are called for each of the fields 252 and 254 and, upon receipt, the recognized text is associated with each of the fields 252 and 254. With respect to receipt of the card number field 252, the function "handle" checks the length of the card number with respect to the card type.
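The following is a simplified sketch of the kind of multimodal markup and script described for FIG. 6. The reco, grammar and bind element names follow the description above; the specific IDs, attribute names, grammar URI and the bodies of "talk" and "handle" are assumptions made for illustration and are not the actual code of the figure.

    <html>
    <body>
        <!-- primary input field for the card number; selecting it starts recognition -->
        <input name="txtCardNum" type="text" onClick="talk(recoCardNum)" />

        <!-- reco control with grammar and bind child elements (names illustrative) -->
        <reco id="recoCardNum" onReco="handle()">
            <grammar src="./cardNumber.xml" />
            <bind targetElement="txtCardNum" value="//cardNumber" />
        </reco>

        <script language="javascript">
            function talk(recoObj) {
                // activate the grammar associated with the selected field
                recoObj.Start();
            }
            function handle() {
                // e.g. check the length of the card number against the card type
            }
        </script>
    </body>
    </html>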
GENERATION OF CLIENT SIDE MARKUPS As indicated above, server side plug-in module 209 outputs client side markups when a request has been made from the client device 30. In short,
the server side plug-in module 209 allows the website, and thus, the application and services provided by the application to be defined or constructed. The instructions in the server side plug-in module 209 are made of compiled code. The code is run when a web request reaches the web server 202. The server side plug-in module 209 then outputs a new client side markup page that is sent to the client device 30. As is well known, this process is commonly referred to as rendering. The server side plug-in module 209 operates on "controls" that abstract and encapsulate the markup language, and thus, the code of the client side markup page. Such controls that abstract and encapsulate the markup language and operate on the webserver 202 include or are equivalent to "Servlets" or "Server-side plug-ins", to name a few.
As is known, server side plug-in modules of the prior art can generate client side markup for visual rendering and interaction with the client device 30. Three different approaches are provided herein for extending the server side plug-in module 209 to include recognition and audible prompting extensions such as the exemplary client side extensions discussed above. In a first approach illustrated schematically in Fig. 7, the current, visual, server side controls (which include parameters for visual display such as location for rendering, font, foreground color, background color, etc.) are extended to include parameters or
attributes for recognition and audible prompting related to recognition. Using speech recognition and associated audible prompting by way of example, the attributes generally pertain to audible prompting parameters such as whether the prompt comprises inline text for text-to-speech conversion, playing of a prerecorded audio file (e.g. a wave file), the location of the data (text for text-to-speech conversion or a prerecorded audio file) for audible rendering, etc. For recognition, the parameters or attributes can include the location of the grammar to be used during recognition, confidence level thresholds, etc. Since the server side plug-in module 209 generates client side markup, the parameters and attributes for the controls for the server side plug-in module 209 relate to the extensions provided in the client side markup for recognition and/or audible prompting.
The controls indicated at 300A in Fig. 7 are controls, which are well-known in website application development or authoring tools such as ASP, ASP+, ASP.Net, JSP, Javabeans, or the like. Such controls are commonly formed in a library and used by controls 302 to perform a particular visual task. Library 300A includes methods for generating the desired client markup, event handlers, etc. Examples of visual controls 302 include a "Label" control that provides a selected text label on a visual display such as the label "Credit Card Submission" 304 in Fig. 5. Another example of a higher level visual
control 302 is a "Textbox", which allows data to be entered in a data field such as is indicated at 250 in Fig. 5. The existing visual controls 302 are also well-known. In the first approach for extending server side plug-in module controls to include recognition and/or audible prompting, each of the visual controls 302 would include further parameters or attributes related to recognition or audible prompting. In the case of the "label" control, which otherwise provides selected text on a visual display, further attributes may include whether an audio data file will be rendered or text-to-speech conversion will be employed as well as the location of this data file. A library 300B, similar to library 300A, includes further markup information for performing recognition and/or audible prompting. Each of the visual controls 302 is coded so as to provide this information to the controls 300B as appropriate to perform the particular task related to recognition or audible prompting.
As another example, the "Textbox" control, which generates an input field on a visual display and allows the user of the client device 30 to enter information, would also include appropriate recognition or audible prompting parameters or attributes such as the grammar to be used for recognition. It should be noted that the recognition or audible prompting parameters are optional and need not be used if recognition or audible prompting is not otherwise desired.
In general, if a control at level 302 includes parameters that pertain to visual aspects, the control will access and use the library 300A. Likewise, if the control includes parameters pertaining to recognition and/or audible prompting the control will access or use the library 300B. It should be noted that libraries 300A and 300B have been illustrated separately in order to emphasize the additional information present in library 300B and that a single library having the information of libraries 300A and 300B can be implemented.
In this approach, each of the current or prior art visual controls 302 are extended to include appropriate recognition/audible prompting attributes. The controls 302 can be formed in a library. The server side plug-in module 209 accesses the library for markup information. Execution of the controls generates a client side markup page, or a portion thereof, with the provided parameters.
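As a purely hypothetical illustration of this first approach, a server-side textbox control might carry recognition and prompting attributes alongside its visual ones. The attribute names below (PromptText, GrammarSrc) are invented for illustration and are not attribute names taken from the document:

    <asp:TextBox id="txtCardNum" runat="server"
        BackColor="White" Font-Name="Arial"
        PromptText="Please say your credit card number"
        GrammarSrc="./cardNumber.xml" />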
In a second approach illustrated in Fig. 8, new visual, recognition/audible prompting controls 304 are provided such that the controls 304 are a subclass relative to visual controls 302, wherein recognition/audible prompting functionality or markup information is provided at controls 304. In other words, a new set of controls 304 are provided for recognition/audible prompting and include appropriate parameters or attributes to perform the desired recognition or an audible prompting related to a recognition task on the client device 30. The
controls 304 use the existing visual controls 302 to the extent that visual information is rendered or obtained through a display. For instance, a control "SpeechLabel" at level 304 uses the "Label" control at level 302 to provide an audible rendering and/or visual text rendering. Likewise, a "SpeechTextbox" control would associate a grammar and related recognition resources and processing with an input field. Like the first approach, the attributes for controls 304 include where the grammar is located for recognition, the inline text for text-to-speech conversion, or the location of a prerecorded audio data file that will be rendered directly or a text file through text-to-speech conversion. The second approach is advantageous in that interactions of the recognition controls 304 with the visual controls 302 are through parameters or attributes, and thus, changes in the visual controls 302 may not require any changes in the recognition controls 304 provided the parameters or attributes interfacing between the controls 304 and 302 are still appropriate. However, with the creation of further visual controls 302, a corresponding recognition/audible prompting control at level 304 may also have to be written.
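By comparison, a hypothetical sketch of the second approach uses a subclassed control that adds the recognition attributes while reusing the visual Textbox rendering; the tag prefix and attribute names are again illustrative assumptions only:

    <speech:SpeechTextBox id="txtCardNum" runat="server"
        BackColor="White" Font-Name="Arial"
        PromptText="Please say your credit card number"
        GrammarSrc="./cardNumber.xml" />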
A third approach is illustrated in Fig. 9. Generally, controls 306 of the third approach are separate from the visual controls 302, but are associated selectively therewith as discussed below. In this manner, the controls 306 do not directly build upon the visual controls 302, but rather
provide recognition/audible prompting enablement without having to rewrite the visual controls 302. The controls 306, like the controls 302, use a library 300. In this embodiment, library 300 includes both visual and recognition/audible prompting markup information and as such is a combination of libraries 300A and 300B of Fig. 7.
There are significant advantages to this third approach. Firstly, the visual controls 302 do not need to be changed in content. Secondly, the controls 306 can form a single module which is consistent and does not need to change according to the nature of the speech-enabled control 302. Thirdly, the process of speech enablement, that is, the explicit association of the controls 306 with the visual controls 302 is fully under the developer's control at design time, since it is an explicit and selective process. This also makes it possible for the markup language of the visual controls to receive input values from multiple sources such as through recognition provided by the markup language generated by controls 306, or through a conventional input device such as a keyboard. In short, the controls 306 can be added to an existing application authoring page of a visual authoring page of the server side plug-in module 209. The controls 306 provide a new modality of interaction (i.e. recognition and/or audible prompting) for the user of the client device 30, while reusing the visual controls' application logic and visual input/output capabilities. In view
that the controls 306 can be associated with the visual controls 302 whereat the application logic can be coded, controls 306 may be hereinafter referred to as "companion controls 306" and the visual controls 302 be referred to as "primary controls 302". It should be noted that these references are provided for purposes of distinguishing controls 302 and 306 and are not intended to be limiting. For instance, the companion controls 306 could be used to develop or author a website that does not include visual renderings such as a voice-only website. In such a case, certain application logic could be embodied in the companion control logic.
An exemplary set of companion controls 306 are further illustrated in Fig. 10. The set of companion controls 306 can be grouped as output controls 308 and input controls 310. Output controls 308 provide "prompting" client side markups, which typically involves the playing of a prerecorded audio file, or text for text-to-speech conversion, the data included in the markup directly or referenced via a URL. Although a single output control can be defined with parameters to handle all audible prompting, and thus should be considered as a further aspect of the present invention, in the exemplary embodiment, the forms or types of audible prompting in a human dialog are formed as separate controls. In particular, the output controls 308 can include a "Question" control 308A, a "Confirmation" control 308B and a "Statement" control 308C, which will be discussed in detail
below. Likewise, the input controls 310 can also form or follow human dialog and include an "Answer" control 310A and a "Command" control 310B. The input controls 310 are discussed below, but generally the input controls 310 associate a grammar with expected or possible input from the user of the client device 30.
Although the question control 308A, confirmation control 308B, statement control 308C, answer control 310A, command control 310B, other controls as well as the general structure of these controls, the parameters and event handlers, are specifically discussed with respect to use as companion controls 306, it should be understood that these controls, the general structure, parameters and event handlers can be adapted to provide recognition and/or audible prompting in the other two approaches discussed above with respect to Figs. 7 and 8. For instance, the parameter "ControlsToSpeechEnable", which comprises one exemplary mechanism to form the association between a companion control and a visual control, would not be needed when embodied in the approaches of Figs. 7 and 8.
In a multimodal application, at least one of the output controls 308 or one of the input controls 310 is associated with a primary or visual control 302. In the embodiment illustrated, the output controls 308 and input controls 310 are arranged or organized under a "Question/Answer" (hereinafter also "QA") control 320. QA control 320 is executed on the web server 202, which means it is
defined on the application development web page held on the web server using the server-side markup formalism (ASP, JSP or the like), but is output as a different form of markup to the client device 30. Although illustrated in Fig. 10 where the QA control appears to be formed of all of the output controls 308 and the input controls 310, it should be understood that these are merely options wherein one or more may be included for a QA control.
At this point it may be helpful to explain use of the controls 308 and 310 in terms of application scenarios. Referring to Fig. 11, in a voice-only application QA control 320 could comprise a single question control 308A and an answer control 310A. The question control 308A contains one or more prompt objects or controls 322, while the answer control 310A can define a grammar through grammar object or control 324 for recognition of the input data and related processing on that input. Line 326 represents the association of the QA control 320 with the corresponding primary control 302, if used. In a multimodal scenario, where the user of the client device 30 may touch on the visual textbox, for example with a "TapEvent", an audible prompt may not be necessary. For example, for a primary control comprising a textbox having visual text forming an indication of what the user of client device should enter in the corresponding field, a corresponding QA control 320 may or may not have a corresponding prompt such as an audio playback or a text-to-speech
conversion, but would have a grammar corresponding to the expected value for recognition, and event handlers 328 to process the input, or process other recognizer events such as no speech detected, speech not recognized, or events fired on timeouts (as illustrated in "Eventing" below) .
In general, the QA control through the output controls 308 and input controls 310 and additional logic can perform one or more of the following: provide output audible prompting, collect input data, perform confidence validation of the input result, allow additional types of input such as "help" commands, or commands that allow the user of the client device to navigate to other selected areas of the website, allow confirmation of input data and control of dialog flow at the website, to name a few. In short, the QA control 320 contains all the controls related to a specific topic. In this manner, a dialog is created through use of the controls with respect to the topic in order to inform, to obtain information, to confirm validity, or to repair a dialog or change the topic of conversation.
In one method of development, the application developer can define the visual layout of the application using the visual controls 302. The application developer can then define the spoken interface of the application using companion controls 306 (embodied as QA control 320, or output controls 308 and input control 310). As illustrated in FIGS. 10 and 11, each of the companion controls 306 are
then linked or otherwise associated with the corresponding primary or visual control 302 to provide recognition and audible prompting. Of course if desired, the application developer can define or encode the application by switching between visual controls 302 and companion controls 306, forming the links therebetween, until the application is completely defined or encoded.
At this point, it may be helpful to provide a short description of each of the output controls 308 and input controls 310. Detailed descriptions are provided below in Appendix B.
Questions, Answers and Commands
Generally, as indicated above, the question controls 308A and answer controls 310A in a QA control 320 hold the prompt and grammar resources relevant to the primary control 302, and related binding (associating recognition results with input fields of the client-side markup page) and processing logic. The presence, or not, of question controls 308A and answer controls 310A determines whether speech output or recognition input is enabled on activation. Command controls 310B and user initiative answers are activated by specification of the Scope property on the answer controls 310A and command controls 310B.
In simple voice-only applications, a QA control 320 will typically hold one question control or object 308A and one answer control or object 310A.
Although not shown in the example below, command controls 310B may also be specified, e.g. Help,
Repeat, Cancel, etc., to enable user input which does not directly relate to the answering of a particular question.
A typical 'regular' QA control for voice-only dialog is as follows:
<Speech:QA id="QA_WhichOne"
ControlsToSpeechEnable="textBoxl" runat="server" >
<Question >
<prompt> Which one do you want? </prompt>
</Question> <Answer >
<grammar src="whichOne . gram" /> </Answer> </Speech:QA>
(The examples provided herein are written in the ASP.NET framework by way of example only and should not be considered as limiting the present invention.)
In this example, the QA control can be identified by its "id", while the association of the QA control with the desired primary or visual control is obtained through the parameter "ControlsToSpeechEnable", which identifies one or more primary controls by their respective identifiers. If desired, other well-known techniques can be used to form the association. For instance, direct, implicit associations are available through the first and second approaches described above, or separate tables can be created and used to maintain the associations. The parameter "runat" instructs the web server that this code should be executed at the web server 202 to generate the correct markup.
A QA control might also hold only a statement control 308C, in which case it is a prompt-only control without active grammars (e.g. for a welcome prompt). Similarly a QA control might hold only an answer control 310A, in which case it may be a multimodal control, whose answer control 310A activates its grammars directly as the result of an event from the GUI, or a scoped mechanism (discussed below) for user initiative.
It should also be noted that a QA control 320 may also hold multiple output controls 308 and input controls 310 such as multiple question controls 308A and multiple answer controls 310A. This allows an author to describe interactional flow about the same entity within the same QA control. This is particularly useful for more complex voice-only dialogs. So a mini-dialog, which may involve different kinds of question and answer (e.g. asking, confirming, giving help, etc.), can be specified within the wrapper of the QA control associated with the visual control which represents the dialog entity. A complex QA control is illustrated in Fig. 11.
The foregoing represent the main features of the QA control. Each feature is described from a functional perspective below.
Answer Control
The answer control 310A abstracts the notion of grammars, binding and other recognition processing into a single object or control. Answer controls 310A can be used to specify a set of possible grammars relevant to a question, along with binding declarations and relevant scripts. Answer controls for multimodal applications such as "Tap-and-Talk" are activated and deactivated by GUI browser events. The following example illustrates an answer control 310A used in a multimodal application to select a departure city on the "mouseDown" event of the textbox "txtDepCity", and write its value into the primary textbox control:
<Speech:QA controlsToSpeechEnable="txtDepCity" runat="server">
<Answer id="AnsDepCity"
StartEvent="onMouseDown" StopEvent="onMouseUp" /> <grammar src="/grammars/depCities . gram"/> <bind value="//sml/DepCity" targetElement="txtCity" />
</Answer> </Speech:QA>
Typical answer controls 310A in voice-only applications are activated directly by question controls 308A as described below.
The answer control further includes a mechanism to associate a received result with the primary controls. Herein, binding places the values in the primary controls; however, in another embodiment the association mechanism may allow the primary control to look at or otherwise access the recognized results.
Question Control
Question controls 308A abstract the notion of the prompt tags (Appendix A) into an object which contains a selection of possible prompts and the answer controls 310A which are considered responses to the question. Each question control 308A is able to specify which answer control 310A it activates on its execution. This permits appropriate response grammars to be bundled into answer controls 310A, which reflect relevant question controls 308A.
The following question control 308A might be used in a voice-only application to ask for a
Departure City:
<Speech : QA id="QADepCity" controlsToSpeechEnable="txtDepCity" runat=" server" > <Question id="Ql" Answers="AnsDepCity" > <prompt>
Please give me the departure city . </prompt> </Question>
<Answer id="AnsDepCity" ... /> </Speech : QA>
In the example below, different prompts can be called depending on an internal condition of the question control 308A. The ability to specify conditional tests on the prompts inside a question control 308A means that changes in wording can be accommodated within the same functional unit of the question control 308A.
<Speech:QA id="QADepCity" controlsToSpeechEnable="txtDepCity" runat="server" > <Question id="Ql" Answers="AnsDepCity" > <prompt count="l">
Now I need to get the departure city. Where would you like to fly from? </prompt> <prompt count="2">
Which departure city? </prompt> </Question>
<Answer id="AnsDepCity" ... /> </Speech:QA>
Conditional QA Control
The following example illustrates how to determine whether or not to activate a QA control based upon information known to the application. The example is a portion of a survey application. The survey is gathering information from employees regarding the mode of transportation they use to get to work.
The portion of the survey first asks whether or not the user rides the bus to work. If the answer is:
Yes, the next question asks how many days last week the user rode the bus.
No, the "number of days rode the bus" question is bypassed.
<asp:Label id="lblDisplayl" text="Do you ride the bus to work?" runat="server" />
<asp: DropDownList id="lstRodeBusYN" runat="server">
<asp:ListItem selected="true">No</asp:ListItem>
<asp : ListItem>Yes</as : Listltem> </asp : DropDownList>
<Speech:QA id="QA_RideBus
ControlsToSpeechEnable="lstRodeBusYN" runat="server" >
<SDN: Question id="Q_RideBus" >
<prompt bargeln="False">
Do you ride the bus to work?
</prompt>
</SDN:Question>
<SDN:Answer id="A_RideBus" autobind="False" StartEvent="onMouseDown" StopEvent="onMouseUp" runat="server" onClientReco="ProcessRideBusAnswer"
<grammar src=" ... " /> <— ! "yes/no" grammar —>
</SDN:Answer>
</Speech:QA>
<asp:Label id="lblDisplay2" enabled="False" text="How many days last week did you ride the bus to work?" runat="server" />
<asp: DropDownList id="lstDaysRodeBus" enabled="False' runat="server">
<asp:ListItem selected="true"
>0</asp : Listltem>
<asp:ListItem>K/asp:ListItem> <asp:ListItem>2</asp:ListItem> <asp:ListItem>3</asp:ListItem> <asp:ListItem>4</asp:ListItem> <asp:ListItem>5</asp:ListItem> <asp : Listltem>6</asp : Listltem> <a.sp : Listltem>7</asp : Listltem> </asp : DropDownList>
<Speech:QA id="QA_DaysRodeBus"
ControlsToSpeechEnable="lstDaysRodeBus" ClientTest="RideBusCheck" runat="server" > <Question id="Q_DaysRodeBus" >
<prompt bargeIn="False">
How many days last week did you ride the bus to work?
</prompt>
</SDN:Question>
<SDN:Answer id="A_DaysRodeBus" autobind="False" StartEvent="onMouseDown" StopEvent="onMouseUp" runat="server" onClientReco="ProcessDaysRodeBusAnswer"
<grammar src=" ..." /> <— ! "numbers" grammar —>
</ SDN : Answer >
</Speech:QA>
<script language="jscript"> function ProcessRideBusAnswer ( ) {
<— ! using SML attribute of the Event object, determine yes or no answer --> '
<— ! then select the appropriate item in the dropdown listbox —>
<-- ! and enable the next label and dropdown listbox if answer is "yes" —> if <— ! Answer is "yes" —> { IstRodeBusYN . selectedlndex=2 lblDisplay2. enabled="true" IstDaysRodeBus . enabled="true" } > } function RideBusCheck ( ) { if IstRodeBusYN. selectedlndex="l" <— ! this is no —> then return "False" endif } function ProcessDaysRσdeBusAnswer ( ) {
<— ! case statement to select proper dropdown item —>
} </script>
In the example provided above, the QA control "QA_DaysRodeBus" is executed based on a boolean parameter "ClientTest", which, in this example, is set based on the function RideBusCheck(). If the function returns a false condition, the QA control is not activated, whereas if a true condition
is returned the QA control is activated. The use of an activation mechanism allows increased flexibility and improved dialog flow in the client side markup page produced. As indicated in Appendix B many of the controls and objects include an activation mechanism.
Command Control
Command controls 310B are user utterances common in voice-only dialogs which typically have little semantic import in terms of the question asked, but rather seek assistance or effect navigation, e.g. help, cancel, repeat, etc. The Command control 310B within a QA control 306 can be used to specify not only the grammar and associated processing on recognition (rather like an answer control 310A without binding of the result to an input field), but also a 'scope' of context and a type. This allows for the authoring of both global and context-sensitive behavior on the client side markup.
As appreciated by those skilled in the art from the foregoing description, controls 306 can be organized in a tree structure similar to that used in visual controls 302. Since each of the controls 306 is also associated with selected visual controls 302, the organization of the controls 306 can be related to the structure of the controls 302.
The QA controls 320 may be used to speech-enable both atomic controls (textbox, label, etc.) and container controls (form, panel, etc.). This
provides a way of scoping behaviour and of obtaining modularity of subdialog controls. For example, the scope will allow the user of the client device to navigate to other portions of the client side markup page without completing a dialog.
In one embodiment, "Scope" is determined as a node of the primary controls tree. The following is' an example "help" command, scoped at the level of the "Pnll" container control, which contains two textboxes .
<asp:panel id="Pnll" ...>
<asp: textbox id="tbl" ... />
<asp: textbox id="tb2" ... /> </asp:panel>
<Speech:QA ... > <Command id="HelpCmdl" scope="Pnll" type="help" onClientReco="GlobalGiveHelp ( ) " >
<Grammar src="grammars/help. gram"/> </Command> </Speech:QA>
<script> function GlobalGiveHel ( ) {
} </script>
As specified, the "help" grammar will be active in every QA control relating to "Pnll" and its contents. The GlobalGiveHelp subroutine will execute every time "help" is recognized. To override this and
achieve context-sensitive behavior, the same typed command can be scoped to the required level of context :
<Speech:QA ... > <Command id="HelpCmd2" scope="Tb2" type="help" onClientReco="SpecialGiveHelp ( ) " >
<Grammar src="grammars/help. gram"/> </Command> </Speech:QA>
<script> function SpecialGiveHelp ( ) {
} </script>
Confirmation Control
The QA control 320 can also include a method for simplifying the authoring of common
confirmation subdialogs. The following QA control exemplifies a typical subdialog which asks and then confirms a value:
<Speech:QA id="qaDepCity" controlsToSpeechEnable="txtDepCity" runat="server" >
<! — asking for a value —> <Question id="AskDeρCity" type="ask"
Answers="AnsDepCity" > <prompt> Which city? </prompt> </Question>
<Answer id="AnsDepCity" confirmThreshold="60" > <grammar src="grammars/depCity. gram" /> </Answer>
<! — confirming the value —> <Confirm id="ConfirmDepCity"
Answers="AnsConfDepCity" > ' <prompt>
Did you say <value targetElement="txtDepCity/Text">? </prompt> </Confirm> <Answer id="AnsConfDepCity" >
<grammar src="grammars/YesNoDepCity. gram" /> </Answer>
</Speech:QA>
In this example, a user response to 'which city?' which matches the AnsDepCity grammar but whose confidence level does not exceed the confirmThreshold value will trigger the confirm control 308. More flexible methods of confirmation available to the author include mechanisms using multiple question controls and multiple answer controls.
In a further embodiment, additional input controls related to the confirmation control include an accept control, a deny control and a correct control. Each of these controls could be activated (in a manner similar to the other controls) by the corresponding confirmation control and include grammars to accept, deny or correct results, respectively. For instance, users are likely to deny by saying "no", to accept by saying "yes" or "yes + current value" (e.g., "Do you want to go to Seattle?" "Yes, to Seattle"), and to correct by saying "no" + new value (e.g., "Do you want to go to Seattle?" "No, Pittsburgh").
Statement Control
The statement control allows the application developer to provide an output upon execution of the client side markup when a response is not required from the user of the client device 30. An example could be a "Welcome" prompt played at the beginning of execution of a client side markup page.
An attribute can be provided in the statement control to distinguish different types of information to be provided to the user of the client device. For instance, attributes can be provided to denote a warning message or a help message. These types could have different built-in properties such as different voices. If desired, different forms of statement controls can be provided, i.e. a help control, warning control, etc. Whether provided as separate controls or attributes of the statement control, the different types of statements have different roles in the dialog created, but share the fundamental role of providing information to the user of the client device without expecting an answer back.
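As a minimal sketch (the statement control's exact attributes and element name are given in Appendix B; the ids used here are illustrative assumptions), a prompt-only QA control holding a welcome statement might be authored as follows:

<Speech:QA id="QA_Welcome" runat="server" >
  <Statement id="stWelcome" >
    <prompt> Welcome to the travel booking service. </prompt>
  </Statement>
</Speech:QA>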
Eventing
Event handlers as indicated in FIG. 11 are provided in the QA control 320, the output controls 308 and the input controls 310 for actions/inactions of the user of the client device 30 and for operation of the recognition server 204, to name a few; other events are specified in Appendix B. For instance, mumbling, where the speech recognizer detects that the user has spoken but is unable to recognize the words, and silence, where speech is not detected at all, are specified in the QA control 320. These events reference client-side script functions defined by the author. In a multimodal application specified earlier, a simple mumble handler that puts an error message in the text box could be written as follows:
<Speech:QA controlsToSpeechEnable="txtDepCit y" onClientNoReco="OnMumble ( ) " runat="server"> <Answer id="AnsDepCity"
StartEvent="onMouseDown" StopEvent="onMouseUp" > <grammar src="/grammars/depCities . gram"/> <bind value="//sml/DepCity" targetElement="txtCity" />
</Answer> </Speech:QA>
<script> function OnMumble ( ) { txtDepCity. value="...recognition error... ";
} </script>
Control Execution Algorithm
In one embodiment, a client-side script or module (herein referred to as "RunSpeech") is provided to the client device. The purpose of this script is to execute dialog flow via logic, which is specified in the script when executed on the client device 30, i.e. when the markup pertaining to the controls is activated for execution on the client due to values contained therein. The script allows multiple dialog turns between page requests, and therefore, is particularly helpful for control of voice-only dialogs such as through telephony browser 216. The client-side script RunSpeech is executed in a loop manner on the client device 30 until a completed form is submitted, or a new page is otherwise requested from the client device 30.
It should be noted that in one embodiment, the controls can activate each other (e.g. question control activating a selected answer control) due to values when executed on the client. However, in a further embodiment, the controls can "activate" each other in order to generate appropriate markup, in which case server-side processing may be implemented.
Generally, in one embodiment, the algorithm generates a dialog turn by outputting speech and recognizing user input. The overall logic of the algorithm is as follows for a voice-only scenario:
1. Find the next active output companion control;
2. If it is a statement, play the statement and go back to 1; if it is a question or a confirm, go to 3;
3. Collect expected answers;
4. Collect commands;
5. Play the output control and listen in for input;
6. Activate the recognized Answer or Command object or issue an event if none is recognized;
7. Go back to 1.
In the multimodal case, the logic is simplified to the following algorithm:
1. Wait for triggering event - i.e., user tapping on a control;
2. Collect expected answers;
3. Listen in for input;
4. Activate recognized Answer object or, if none, throw event;
5. Go back to 1.
The algorithm is relatively simple because, as noted above, controls contain built-in information about when they can be activated. The algorithm also makes use of the role of the controls in the dialogue. For example, statements are played immediately, while questions and confirmations are only played once the expected answers have been collected.
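The voice-only loop above can be pictured with the following client-side script sketch. The helper function names are purely illustrative and do not correspond to the actual RunSpeech implementation, which is set out in Appendix B.

<script>
  // Illustrative sketch of the voice-only dialog loop described above.
  function RunSpeech() {
    var qa = findNextActiveQA();                 // step 1: next active control
    if (qa == null) return;                      // nothing left to do: submit the page
    if (qa.isStatement) {
      playStatement(qa);                         // step 2: statements play at once
      RunSpeech();                               // ...and the loop restarts
      return;
    }
    var answers  = collectExpectedAnswers(qa);   // step 3
    var commands = collectActiveCommands(qa);    // step 4
    playPromptAndListen(qa, answers, commands);  // step 5: output and listen
    // steps 6-7: the recognized Answer or Command object is activated in the
    // recognition event handler (or an event is issued if none is recognized),
    // and RunSpeech() is called again for the next dialog turn.
  }
</script>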
In a further embodiment, implicit confirmation can be provided whereby the system confirms a piece of information and asks a question at the same time. For example the system could
confirm the arrival city of a flight and ask for the travel date in one utterance: "When do you want to go to Seattle?" (i.e. asking 'when' and implicitly confirming destination: 'Seattle'). If the user gives a date then the city is considered implicitly accepted since, if the city was wrong, users would have immediately challenged it. In this scenario, it becomes clear that the knowledge of what a user is trying to achieve is vitally important: are they answering the question, or are they correcting the value, or are they asking for help? By using the role of the user input in the dialogue the system can know when to implicitly accept a value.
In summary, a dialog is created due to the role of the control in the dialog and the relationship with other controls, wherein the algorithm executes the controls and thus manages the dialog. Each control contains information based on its type which is used by the execution algorithm to select (i.e. make active for execution) a given control according to whether or not it serves a useful purpose at that point in the dialog on the client. For example, confirmation controls are only active when there is a value to confirm and the system does not have sufficient confidence in that value to proceed. In a further implementation, most of these built-in pieces of information can be overridden or otherwise adapted by application developers .
The following table summarizes the controls, their corresponding role in the dialog and the relationship with other controls.
The use of these controls may be explained with an illustration of a simple human/computer dialog. In the dialog below, each dialog turn on the part of the
System or the User is characterized according to the control (indicated in parentheses) which reflects its purpose in the dialog.
1. System (Statement): "Welcome to the travel booking service" .
2. System (Question): "Where would you like to go?"
3. User (Answer) : "San Francisco."
4. System (Confirmation): "Did you say Seattle?"
5. User (Deny) : "No."
6. System (Question) : "Where would you like to go?"
7. User (Answer): "San Francisco."
8. System (Confirmation): "Did you say Seattle?"
9. User (Correct) : "I said San Francisco."
10. System (Confirmation): "Did you say San Francisco?"
11. User (Correct): "Yes."
12. System (Question): "When would you like to leave?"
13. User (Command): "Help."
Turn 1 is a statement on the part of the System. Since a statement control activates no answer controls in response, the system does not expect input. The system goes on to activate a question control at turn 2. This in turn activates a set of possible answer controls, including one which holds a grammar containing the cities available through the service, including "San Francisco", "Seattle", etc., which permits the user to provide such a city in turn 3.
The user's turn 3 is misrecognized by the system. Although the system believes it has a value from an answer control for the city, its confidence in that value is low (rightly so, since it has recognized incorrectly) . This low confidence value in a just-received answer control is sufficient information for RunSpeech to trigger a confirmation control on the part of the system, as generated at turn 4. The confirmation control in turn activates a deny control, a correct control and an accept control and makes their respective grammars available to recognize the user's next turn. User turns 5, 9 and 11 illustrate example responses for these controls. Turn 5 of the user simply denies the value "no". This has the effect of removing the value from the system, so the next action of RunSpeech is to ask the question again to re-obtain the value (turn 6) .
Turns 7 and 8 return us to a confirmation control as with 3 and 4.
User turn 9 is a correct control, which has again been activated as a possible response to the confirmation control . A correct control not only denies the value undergoing confirmation, it also provides a new value. So user turn 9 is recognized by the system as a correct control with a new value which, correctly this time, is recognized as "San Francisco".
The system's confidence in the new value is low, however, and yet another confirmation control is generated at turn 10. This in turn activates accept, deny and correct controls in response, and user turn 11 ("Yes") matches an accept control grammar. The recognition of the accept control has the effect of 'grounding' the system's belief in the value which it is trying to obtain, and so RunSpeech is now able to select other empty values to obtain. In turn 12, a new question control is output which asks for a date value. The user's response this time (turn 13) is a command: "help". Command controls are typically activated in global fashion, that is, independently of the different question controls and confirmation controls on the part of the system. In this way the user is able to ask for help at any time, as he does in turn 13. Command controls may also be more sensitively enabled by a mechanism that scopes their activation according to which part of the primary control structure is being talked about.
Referring back to the algorithm, in one exemplary embodiment, the client-side script RunSpeech examines the values inside each of the primary controls and an attribute of the QA control, and any selection test of the QA controls on the current page, and selects a single QA control for execution. For example, within the selected QA control, a single question and its corresponding prompt are selected for output, and then a grammar is activated related to typical answers to the corresponding question. Additional grammars may also be activated, in parallel, allowing other commands (or other answers), which are indicated as being
allowable. Assuming recognition has been made and any further processing on the input data is complete, the client-side script RunSpeech will begin again to ascertain which QA control should be executed next. An exemplary implementation and algorithm of RunSpeech is provided in Appendix B.
It should be noted that the use of the controls and the RunSpeech algorithm or module is not limited to the client/server application described above, but rather can be adapted for use with other application abstractions. For instance, an application such as VoiceXML, which runs only on the client device 30, could conceivably include further elements or controls such as question and answer provided above as part of the VoiceXML browser and operating in the same manner. In this case the mechanisms of the RunSpeech algorithm described above could be executed by default by the browser without the necessity for extra script. Similarly, other platforms such as finite state machines can be adapted to include the controls and RunSpeech algorithm or module herein described.
Synchronization
As noted above, the companion controls 306 are associated with the primary controls 302 (the existing controls on the page) . As such the companion controls 306 can re-use the business logic and presentation capabilities of the primary controls 302. This is done in two ways: storing values in the
primary controls 302 and notifying the primary controls 302 of the changes.
The companion controls 306 synchronize or associate their values with the primary controls 302 via the mechanism called binding. Binding puts values retrieved from the recognizer into the primary controls 302, for example putting text into a textbox, herein exemplified with the answer control. Since primary controls 302 are responsible for visual presentation, this provides visual feedback to the users in multimodal scenarios.
The companion controls 306 also offer a mechanism to notify the primary controls 302 that they have received an input via the recognizer. This allows the primary controls 302 to take actions, such as invoking the business logic. (Since the notification amounts to a commitment of the companion controls 306 to the values which they write into the primary controls 302, the implementation provides a mechanism to control this notification with a fine degree of control. This control is provided by the RejectThreshold and ConfirmThreshold properties on the answer control, which specify numerical acoustic confidence values below which the system should respectively reject or attempt to confirm a value.)
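For instance, under the assumption that both thresholds are expressed as percentage confidence values like the confirmThreshold attribute used in the earlier confirmation example (the values and ids below are illustrative only), an answer control might be authored as follows:

<Speech:QA controlsToSpeechEnable="txtDepCity" runat="server" >
  <Answer id="AnsDepCity"
      rejectThreshold="20"
      confirmThreshold="60" >
    <!-- results below 20 are rejected, between 20 and 60 are confirmed,
         and above 60 are committed directly to the primary control -->
    <grammar src="grammars/depCity.gram" />
    <bind value="//sml/DepCity" targetElement="txtDepCity" />
  </Answer>
</Speech:QA>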
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
APPENDIX A
Introduction
The following tags are a set of markup elements that allows a document to use speech as an input or output medium. The tags are designed to be self-contained XML that can be embedded into any SGML-derived markup languages such as HTML, XHTML, cHTML, SMIL, WML and the like. The tags used herein are similar to SAPI 5.0, which are known methods available from Microsoft Corporation of Redmond, Washington. The tags, elements, events, attributes, properties, return values, etc. are merely exemplary and should not be considered limiting. Although exemplified herein for speech and DTMF recognition, similar tags can be provided for other forms of recognition.
The main elements herein discussed are:
<prompt ...> for speech synthesis configuration and prompt playing
<reco ...> for recognizer configuration and recognition execution and post-processing
<grammar ...> for specifying input grammar resources
<bind ...> for processing of recognition results
<dtmf ...> for configuration and control of DTMF
Reco
The Reco element is used to specify possible user inputs and a means for dealing with the input results.
As such, its main elements are <grammar> and <bind>, and it contains resources for configuring recognizer properties .
Reco elements are activated programmatically in uplevel browsers via Start and Stop methods, or in SMIL-enabled browsers by using SMIL commands. They are considered active declaratively in downlevel browsers (i.e. non script-supporting browsers) by their presence on the page. In order to permit the activation of multiple grammars in parallel, multiple Reco elements may be considered active simultaneously.
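A minimal sketch of such programmatic activation in an uplevel browser is shown below; the element ids, grammar path and GUI event wiring are assumptions for illustration only:

<input name="txtBoxCity" type="text"
    onMouseDown="recoCity.Start()"
    onMouseUp="recoCity.Stop()" />

<reco id="recoCity">
  <grammar src="/grammars/cities.xml" />
  <bind targetElement="txtBoxCity" value="//city" />
</reco>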
Recos may also take a particular mode - 'automatic', 'single' or 'multiple' - to distinguish the kind of recognition scenarios which they enable and the behaviour of the recognition platform.
2.1 Reco content
The Reco element contains one or more grammars and optionally a set of bind elements which inspect the results of recognition and copy the relevant portions to values in the containing page.
In uplevel browsers, Reco supports the programmatic activation and deactivation of individual grammar rules. Note also that all top-level rules in a grammar are active by default for a recognition context.
2.1.1 <grammar> element
The grammar element is used to specify grammars, either inline or referenced using the src attribute. At least one grammar (either inline or referenced) is typically specified. Inline grammars can be text-based grammar formats, while referenced grammars can be text-based or binary type. Multiple grammar elements may be specified. If more than one grammar element is specified, the rules within grammars are added as extra rules within the same grammar. Any rules with the same name will be overwritten.
Attributes :
• src: Optional if inline grammar is specified. URI of the grammar to be included. Note that all top- level rules in a grammar are active by default for a recognition context.
• langID: Optional. String indicating which language speech engine should use. The string format follows the xml:lang definition. For example, langID="en-us" denotes US English. This attribute is only effective when the langID is not specified in the grammar URI. If unspecified, defaults to US English.
If the langID is specified in multiple places then langID follows a precedence order from the lowest scope - remote grammar file (i.e. the language id is specified within the grammar file) followed by grammar element followed by reco element.
<grammar src="FromCity. l" /> 5 or <grammar>
<rule toplevel="active"> <p>from </p>
<ruleref name="cities" /> 10 </rule>
<rule name="cities"> <1>
<p> Cambridge </p> <p> Seattle </p> 15 <p> London </p>
</l> </rule> </grammar> '
If both a src-referenced grammar and an inline grammar are specified, the inline rules are added to the referenced rules, and any rules with the same name will be overwritten.
2.1.2 <bind> element
The bind element is used to bind values from the recognition results into the page.
The recognition results consumed by the bind element can be an XML document containing a semantic markup language (SML) for specifying recognition results. Its contents include semantic values, actual words spoken, and confidence scores. SML could also include alternate recognition choices (as in an N-best recognition result). A sample SML document for the
utterance "I'd like to travel from Seattle to Boston" is illustrated below:
<sml confidence=" 0"> <travel text="I'd like to travel from
Seattle to Boston">
<origin_city confidence="45"> Seattle </origin__city>
<dest_cify confidence="35"> Boston </dest_city>
</travel> </sml>
Since an in-grammar recognition is assumed to produce an XML document - in semantic markup language, or SML - the values to be bound from the SML document are referenced using an XPath query. And since the elements in the page into which the values will be bound should be uniquely identified (they are likely to be form controls), these target elements are referenced directly.
Attributes :
• targetElement : Required. The element to which the value content from the SML will be assigned (as in W3C SMIL 2.0) .
• targetAttribute: Optional. The attribute of the target element to which the value content from the SML will be assigned (as with the attributeName attribute in SMIL 2.0). If unspecified, defaults to "value".
• test: Optional. An XML Pattern (as in the W3C XML DOM specification) string indicating the condition under which the recognition result will be assigned. Default condition is true.
• value: Required. An XPATH (as in the W3C XML DOM specification) string that specifies the value from the recognition result document to be assigned to the target element.
Example :
So given the above SML return, the following reco element uses bind to transfer the values in origin_city and dest_city into the target page elements txtBoxOrigin and txtBoxDest:
<input name="txtBoxOrigin" type="text"/> <input name="txtBoxDest" type="text" />
<reco id="travel"> <grammar src=" . /city.xml" />
<bind targetElement="txtBoxOrigin" value="//origin_city" /> <bind targetElement="txtBoxDest" value="//dest_city" />
</reco>
This binding may be conditional, as in the following example, where a test is made on the confidence attribute of the dest_city result as a pre-condition to the bind operation:

<bind targetElement="txtBoxDest"
    value="//dest_city"
    test="/sml/dest_city[@confidence &gt; 40]" />

The bind element is a simple declarative means of processing recognition results on downlevel or uplevel browsers. For more complex processing, the reco DOM object supported by uplevel browsers implements the onReco event handler to permit programmatic script analysis and post-processing of the recognition return.
2.2 Attributes and properties
The following attributes are supported by all browsers, and the properties by uplevel browsers.
2.2.1 Attributes
The following attributes of Reco are used to configure the speech recognizer for a dialog turn.
• initialTimeout: Optional. The time in milliseconds between the start of recognition and the detection of speech. This value is passed to the recognition platform, and if exceeded, an onSilence event will be provided from the recognition platform (see 2.4.2). If not specified, the speech platform will use a default value.
• babbleTimeout: Optional. The period of time in milliseconds in which the recognizer must return a result after detection of speech. For recos in automatic and single mode, this applies to the period between speech detection and the stop call. For recos in 'multiple' mode, this timeout applies to the period between speech detection and each recognition return - i.e. the period is restarted after each return of results or other event. If exceeded, different events are thrown according to whether an error has occurred or not. If the recognizer is still processing audio - eg in the case of an exceptionally long utterance - the onNoReco event is thrown, with status code -13 (see 2.4.4). If the timeout is exceeded for any other reason, however, a recognizer error is more likely, and the onTimeout event is thrown. If not specified, the speech platform will default to an internal value.
• maxTimeout: Optional. The period of time in milliseconds between recognition start and results returned to the browser. If exceeded, the onTimeout event is thrown by the browser - this caters for network or recognizer failure in distributed environments. For recos in 'multiple' mode, as with babbleTimeout, the period is restarted after the return of each recognition or other event. Note that the maxTimeout attribute should be greater than or equal to the sum of initialTimeout and babbleTimeout. If not specified, the value will be a browser default.
• endSilence: Optional. For Recos in automatic mode, the period of silence in milliseconds after the end of an utterance which must be free of speech after which the recognition results are returned. Ignored for recos of modes other than automatic. If unspecified, defaults to a platform internal value.
• reject: Optional. The recognition rejection threshold, below which the platform will throw the 'no reco' event. If not specified, the speech platform will use a default value. Confidence scores range between 0 and 100 (integer). Reject values lie in between.
• server: Optional. URI of the speech platform (for use when the tag interpreter and recognition platform are not co-located). An example value might be server=protocol://yourspeechplatform. An application writer is also able to provide speech platform specific settings by adding a querystring to the URI string, eg protocol://yourspeechplatform?bargeinEnergyThreshold=0.5.
• langID: Optional. String indicating which language speech engine should use. The string format follows the xml:lang definition. For example, langID="en-us" denotes US English. This attribute is only effective when the langID is not specified in the grammar element (see 2.1.1) .
• mode: Optional. String specifying the recognition mode to be followed. If unspecified, defaults to "automatic" mode.
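By way of a hedged illustration of several of the attributes above (the timeout values, grammar path, reco id and reject setting are arbitrary choices for the sketch, not recommended defaults), a configured Reco might look like:

<reco id="recoAmount"
    mode="automatic"
    initialTimeout="3000"
    babbleTimeout="10000"
    maxTimeout="15000"
    endSilence="800"
    reject="40"
    langID="en-us" >
  <grammar src="/grammars/amounts.xml" />
</reco>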
2.2.2 Properties
The following properties contain the results returned by the recognition process (these are supported by uplevel browsers) .
• recoResult: Read-only. The results of recognition, held in an XML DOM node object containing semantic markup language (SML), as described in 2.1.2. In case of no recognition, the property returns null.
• text: Read-only. A string holding the text of the words recognized (i.e., a shorthand for the contents of the text attribute of the highest level element in the SML recognition return in recoResult).
• status: Read-only. Status code returned by the recognition platform. Possible values are 0 for successful recognition, or the failure values -1 to -4 (as defined in the exceptions possible on the Start method (section 2.3.1) and Activate method (section 2.3.4)), and statuses -11 to -15 set on the reception of recognizer events (see 2.4).
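For example, an onReco handler in an uplevel browser might inspect these properties along the following lines; the reco id, handler name and the SML element queried are assumptions for the sketch:

<reco id="recoCity" onReco="processCity()">
  <grammar src="/grammars/cities.xml" />
</reco>

<script>
  function processCity() {
    if (recoCity.status == 0) {              // successful recognition
      var words = recoCity.text;             // plain text of the utterance
      var sml = recoCity.recoResult;         // SML DOM node for XPath queries
      var city = sml.selectSingleNode("//dest_city");
      if (city != null) txtBoxDest.value = city.text;
    }
  }
</script>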
2.3 Object methods
Reco activation and grammar activation may be controlled using the following methods in the Reco's DOM object. With these methods, uplevel browsers can start and stop Reco objects, cancel recognitions in progress, and activate and deactivate individual grammar top-level rules (uplevel browsers only).
2.3.1 Start
The Start method starts the recognition process, using as active grammars all the top-level rules for the recognition context which have not been explicitly deactivated.
Syntax:
  Object.Start();
Return value:
  None.
Exception:
  The method sets a non-zero status code and fires an onNoReco event when it fails. Possible failures include no grammar (reco status = -1), failure to load a grammar - which could be for a variety of reasons, such as failure to compile the grammar or a non-existent URI (reco status = -2) - or speech platform errors (reco status = -3).
2.3.2 Stop
The Stop method is a call to end the recognition process. The Reco object stops recording audio, and the recognizer returns recognition results on the audio received up to the point where recording was stopped. All the recognition resources used by Reco are released, and its grammars deactivated. (Note that this method need not be used explicitly for typical recognitions in automatic mode, since the recognizer itself will stop the reco object on endpoint detection after recognizing a complete sentence.) If the Reco has not been started, the call has no effect.
Syntax:
  Object.Stop();
Return value:
  None.
Exception:
  None.
2.3.3 Cancel
The Cancel method stops the audio feed to the recognizer, deactivates the grammar, releases the recognizer and discards any recognition results. The browser will disregard a recognition result for a canceled recognition. If the recognizer has not been started, the call has no effect.
Syntax:
  Object.Cancel();
Return value:
  None.
Exception:
  None.
2.3.4 Activate
The Activate method activates a top-level rule in the context free grammar (CFG). Activation must be called before recognition begins, since it will have no effect during a 'Started' recognition process. Note that all the grammar top-level rules for the recognition context which have not been explicitly deactivated are already treated as active.
Syntax:
  Object.Activate(strName);
Parameters:
  o strName: Required. Rule name to be activated.
Return value:
  None.
Exception:
  None.
2.3.5 Deactivate
The method deactivates a top-level rule in the grammar. If the rule does not exist, the method has no effect.
Syntax:
  Object.Deactivate(strName);
Parameters:
  o strName: Required. Rule name to be deactivated. An empty string deactivates all rules.
Return value:
  None.
Exception:
  None.
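A short sketch of rule activation using these methods might therefore read as follows; the reco id and rule name are hypothetical:

<script>
  // Restrict the next dialog turn to a single top-level rule.
  recoCity.Deactivate("");               // empty string deactivates all rules
  recoCity.Activate("departureCities");  // must be called before Start
  recoCity.Start();                      // recognize using the active rule only
</script>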
2.4 Reco events
The Reco DOM object supports the following events, whose handlers may be specified as attributes of the reco element .
2.4.1 onReco:
This event gets fired when the recognizer has a recognition result available for the browser. For recos in automatic mode, this event stops the recognition process automatically and clears resources (see 2.3.2). OnReco is typically used for programmatic analysis of the recognition result and processing of the result into the page.
Syntax:
Event Object Info:
Event Properties:
Although the event handler does not receive properties directly, the handler can query the event object for data (see the use of the event object in the example below) .
Example
The following XHTML fragment uses onReco to call a script to parse the recognition outcome and assign the values to the proper fields.
<input name="txtBoxOrigin" type="text" /> <input name="txtBoxDest" type="text" />
<reco onReco="processCityRecognition ( ) "/>
<grammar src="/grammars/cities .xml" /> </reco>
<script><! [CDATA[ function processCityRecognition () { smlResult = event . srcElement . recoResult;
origNode = smlResult . selectSingleNode ("//origin_city") ; if (origNode != null) txtBoxOrigin. value = origNode. text;
destNode = smlResult. selectSingleNode ("//dest_city") ; if (destNode != null) txtBoxDest . value = destNode. text;
} ]]></script>
2.4.2 onSilence:
onSilence handles the event of no speech detected by the recognition platform before the duration of time specified in the initialTimeout attribute on the Reco (see 2.2.1). This event cancels the recognition process automatically for the automatic recognition mode.
Syntax:
  Inline HTML: <reco onSilence="handler" ...>
  Event property (in ECMAScript): Object.onSilence = handler;
                                  Object.onSilence = GetRef("handler");
Event Object Info:
Event Properties:
Although the event handler does not receive properties directly, the handler can query the event object for data.
2.4.3 onTimeout onTimeout handles two types of event which typically reflect errors from the speech platform.
It handles (i) the event thrown by the tags interpreter which signals that the period specified in the maxTimeout attribute (see 2.2.1) expired before recognition was completed. This event will typically reflect problems that could occur in a distributed architecture.
It also handles (ii) the event thrown by the speech recognition platform when recognition has begun but processing has stopped without a recognition within the period specified by babbleTimeout (see 2.2.1).
This event cancels the recognition process automatically.
Syntax:
Event Object Info:
Event Properties :
Although the event handler does not receive properties directly, the handler can query the event object for data.
2.4.4 onNoReco:
onNoReco is a handler for the event thrown by the speech recognition platform when it is unable to return valid recognition results. The different cases in which this may happen are distinguished by status code. The event stops the recognition process automatically.
Syntax:
Event Object Info:
Event Properties :
Although the event handler does not receive properties directly, the handler can query the event object for data.
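A hedged sketch of wiring the onNoReco and onTimeout handlers to a Reco follows; the handler bodies, ids and grammar path are illustrative assumptions:

<reco id="recoDate"
    onNoReco="handleNoReco()"
    onTimeout="handleTimeout()" >
  <grammar src="/grammars/dates.xml" />
</reco>

<script>
  function handleNoReco() {
    // The status property distinguishes the cases, e.g. the babbleTimeout
    // case described in 2.2.1; the recognition process has already stopped.
    var code = recoDate.status;
  }
  function handleTimeout() {
    // Thrown on maxTimeout expiry (or recognizer/network failure in a
    // distributed architecture); the recognition process is cancelled.
  }
</script>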
Prompt
The prompt element is used to specify system output. Its content may be one or more of the following:
• inline or referenced text, which may be marked up with prosodic or other speech output information;
• variable values retrieved at render time from the containing document;
• links to audio files.
Prompt elements may be interpreted declaratively by downlevel browsers (or activated by SMIL commands) , or by object methods on uplevel browsers.
3.1 Prompt content
The prompt element contains the resources for system output, either as text or references to audio files, or both.
Simple prompts need specify only the text required for output, eg:
<prompt id="Welcome">
Thank you for calling ACME weather report. </prompt>
This simple text may also contain further markup of any of the kinds described below.
3.1.1 Speech Synthesis markup
Any format of speech synthesis markup language can be used inside the prompt element. (This format may be specified in the 'tts' attribute described in 3.2.1.)
The following example shows text with an instruction to emphasize certain words within it:
<prompt id="giveBalance"> You have <emph> five dollars </emph> left in your account. </prompt>
3.1.2 Dynamic content
The actual content of the prompt may need to be computed on the client just before the prompt is output. In order to confirm a particular value, for example, the value needs to be dereferenced in a variable. The value element may be used for this purpose.
Value Element
value: Optional. Retrieves the values of an element in the document.
Attributes :
• targetElement: Optional. Either href or targetElement must be specified. The id of the element containing the value to be retrieved.
• targetAttribute: Optional. The attribute of the element from which the value will be retrieved.
• href: Optional. The URI of an audio segment. href will override targetElement if both are present.
The targetElement attribute is used to reference an element within the containing document. The content of the element whose id is specified by targetElement is inserted into the text to be synthesized. If the desired content is held in an attribute of the element, the targetAttribute attribute may be used to specify the necessary attribute on the targetElement. This is useful for dereferencing the values in HTML form controls, for example. In the following illustration, the "value" attributes of the "txtBoxOrigin" and "txtBoxDest" elements are inserted into the text before the prompt is output:
<prompt id="Confirm"> Do you want to travel from
<value targetElement="txtBoxOrigin" targetAttribute="value" /> to
<value targetElement="txtBoxDest" targetAttribute="value" />
</prompt>
3.1.3 Audio files
The value element may also be used to refer to a pre-recorded audio file for playing instead of, or within, a synthesized prompt. The following example plays a beep at the end of the prompt:

<prompt>
  After the beep, please record your message.
  <value href="/wav/beep.wav" />
</prompt>
3.1.4 Referenced prompts
Instead of specifying content inline, the src attribute may be used with an empty element to reference external content via URI, as in:
<prompt id="Welcome" src="/ACMEWeatherPrompts#Welcome" />
The target of the src attribute can hold any or all of the above content specified for inline prompts.
3.2 Attributes and properties
The prompt element holds the following attributes (downlevel browsers) and properties (downlevel and uplevel browsers) .
3.2.1 Attributes
• tts: Optional. The markup language type for text-to-speech synthesis. Default is "SAPI 5".
• src: Optional if an inline prompt is specified. The URI of a referenced prompt (see 3.1.4).
• bargein: Optional. Integer. The period of time in milliseconds from start of prompt to when playback can be interrupted by the human listener. Default is infinite, i.e., no bargein is allowed. Bargein=0 allows immediate
bargein. This applies to whichever kind of barge-in is supported by platform. Either keyword or energy-based bargein times can be configured in this way, depending on which is enabled at the time the reco is started.
• prefetch: Optional. A Boolean flag indicating whether the prompt should be immediately synthesized and cached at browser when the page is loaded. Default is false.
3.2.2 Properties
Uplevel browsers support the following properties in the prompt's DOM object.
• bookmark: Read-only. A string object recording the text of the last synthesis bookmark encountered.
• status: Read-only. Status code returned by the speech platform.
3.3 Prompt methods
Prompt playing may be controlled using the following methods in the prompt's DOM object. In this way, uplevel browsers can start and stop prompt objects, pause and resume prompts in progress, and change the speed and volume of the synthesized speech.
3.3.1 Start
Start playback of the prompt. Unless an argument is given, the method plays the contents of the object. Only a single prompt object is considered 'started' at a given time, so if Start is called in succession, all playbacks are played in sequence.
Syntax:
  Object.Start([strText]);
Parameters:
  o strText: the text to be sent to the synthesizer. If present, this argument overrides the contents of the object.
Return value:
  None.
Exception:
  Set status = -1 and fires an onComplete event if the audio buffer is already released by the server.
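For instance, a prompt object might be started with and without the optional argument as sketched below; the id and text are illustrative assumptions:

<prompt id="greeting"> Welcome back. </prompt>

<script>
  greeting.Start();                               // plays the inline content
  greeting.Start("You have three new messages."); // argument overrides the content
</script>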
3.3.2 Pause
Pause playback without flushing the audio buffer. This method has no effect if playback is paused or stopped.
Syntax:
  Object.Pause();
Return value:
  None.
Exception:
  None.
3.3.3 Resume
Resume playback without flushing the audio buffer. This method has no effect if playback has not been paused.
Syntax:
  Object.Resume();
Return value:
  None.
Exception:
Throws an exception when resume fails.
3.3.4 Stop
Stop playback, if not already stopped, and flush the audio buffer. If the playback has already been stopped, the method simply flushes the audio buffer.
Syntax:
  Object.Stop();
Return value:
  None.
Exception :
None.
3.3.5 Change
Change speed and/or volume of playback. Change may be called during playback.
Syntax:
  Object.Change(speed, volume);
Parameters:
  o speed: Required. The factor to change. Speed=2.0 means double the current rate, speed=0.5 means halve the current rate, speed=0 means to restore the default value.
  o volume: Required. The factor to change. Volume=2.0 means double the current volume, volume=0.5 means halve the current volume, volume=0 means to restore the default value.
Return value:
  None.
Exception:
  None.
3.3.6 Prompt control example
The following example shows how control of the prompt using the methods above might be authored for a platform which does not support a keyword barge-in mechanism.
<html>
<head>
  <title>Prompt control</title>
  <script>
  <!--
    function checkKWBargein() {
      news.change(1.0, 0.5);      // turn down the volume while verifying
      if (keyword.text == "") {   // result is below threshold
        news.change(1.0, 2.0);    // restore the volume
        keyword.Start();          // restart the recognition
      } else {
        news.Stop();              // keyword detected! Stop the prompt
        // Do whatever is necessary
      }
    }
  //-->
  </script>
  <script for="window" event="onload">
  <!--
    news.Start();
    keyword.Start();
  //-->
  </script>
</head>
<body>
  <prompt id="news" bargein="0">
    Stocks turned in another lackluster performance Wednesday as investors
    received little incentive to make any big moves ahead of next week's
    Federal Reserve meeting. The tech-heavy Nasdaq Composite Index dropped
    42.51 points to close at 2156.26. The Dow Jones Industrial Average fell
    17.05 points to 10866.46 after an early-afternoon rally failed.
  </prompt>
  <reco id="keyword" reject="70" onReco="checkKWBargein()" >
    <grammar src="http://denali/news_bargein_grammar.xml" />
  </reco>
</body>
</html>
3.4 Prompt events
The prompt DOM object supports the following events, whose handlers may be specified as attributes of the prompt element.
3.4.1 onBookmark
Fires when a synthesis bookmark is encountered. The event does not pause the playback.
Syntax:
Event Object Info:
Event Properties :
Although the event handler does not receive properties directly, the handler can query the event object for data.
3.4.2 onBargein:
Fires when a user's barge-in event is detected. (Note that determining what constitutes a bargein event, eg energy detection or keyword recognition, is up to the platform. ) A specification of this event handler does not automatically turn the barge-in on.
Syntax :
Event Object Info:
Event Properties :
Although the event handler does not receive properties directly, the handler can query the event object for data.
3.4.3 onComplete :
Fires when the prompt playback reaches the end or exceptions (as defined above) are encountered.
Syntax :
Event Object Info:
Event Properties :
Although the event handler does not receive properties directly, the handler can query the event object for data.
3.4.4 Using bookmarks and events
The following example shows how bookmark events can be used to determine the semantics of a user response - either a correction to a departure city or the provision of a destination city - in terms of when bargein happened during the prompt output. The onBargein handler calls a script which sets a global 'mark' variable to the last bookmark encountered in the prompt, and the value of this 'mark' is used in the reco's postprocessing function ('heard') to set the correct value.
<script><![CDATA[
  var mark;
  function interrupt() {
    mark = event.srcElement.bookmark;
  }
  function ProcessCityConfirm() {
    confirm.stop();   // flush the audio buffer
    if (mark == "mark_origin_city")
      txtBoxOrigin.value = event.srcElement.text;
    else
      txtBoxDest.value = event.srcElement.text;
  }
]]></script>
<body>
  <input name="txtBoxOrigin" value="Seattle" type="text"/>
  <input name="txtBoxDest" type="text" />
  <prompt id="confirm" onBargein="interrupt()" bargein="0">
    From <bookmark mark="mark_origin_city" />
    <value targetElement="txtBoxOrigin" targetAttribute="value" />,
    please say <bookmark mark="mark_dest_city" />
    the destination city you want to travel to.
  </prompt>
  <reco onReco="ProcessCityConfirm()" >
    <grammar src="/grm/1033/cities.xml" />
  </reco>
</body>
DTMF
Creates a DTMF recognition object. The object can be instantiated using inline markup language syntax or in scripting. When activated, DTMF can cause the prompt object to fire a barge-in event. It should be noted that the tags and eventing discussed below with respect to DTMF recognition and call control discussed in Section 5 generally pertain to interaction between the voice browser 216 and media server 214.
4.1 Content
• dtmfgrammar: for inline grammar.
• bind: assign the DTMF conversion result to the proper field.
Attributes:
• targetElement: Required. The element to which a partial recognition result will be assigned (cf. W3C SMIL 2.0).
• targetAttribute: the attribute of the target element to which the recognition result will be assigned (cf. SMIL 2.0). Default is "value".
• test: condition for the assignment. Default is true.
Example 1: map keys to text.
<input type="text" name="city"/> <DTMF id="city_choice" timeout="2000" numDigits="l">
<dtmfgrammar> <key value="l">Seattle</key>
<key value="2">Boston</key> </dtmfgrammar> <bind targetElement="city" targetAttribute="value" /> </DTMF>
When "city_choice" is activated, "Seattle" will be assigned to the input field if the user presses 1, "Boston" if 2, nothing otherwise.
Example 2: How DTMF can be used with multiple fields,
<input type="text" name="area_code" /> <input type="text" name="phone_number" /> <DTMF id="areacode" numDigits="3" onReco="extension.Activate ( ) ">
<bind targetElement="area_code" /> </DTMF>
<DTMF id="extension" numDigits="7"> <bind targetElement=//phone_number" />
</DTMF>
This example demonstrates how to allow users to enter values into multiple fields.
Example 3: How to allow both speech and DTMF inputs and disable speech when user starts DTMF.
<input type="text" name="credit_card_number" /> <pro.mpt onBookmark="dtmf. Start () ; speech. Start () bargein="0"> Please say <bookmark name="starting" /> or enter your credit card number now
</prompt>
<DTMF id="dtmf" escape="#" length="16" interdigitTimeout="2000" onkeypress="speech. Stop () "> <bind targetElement="credit_card_number" />
</DTMF> <reco id="speech" >
<grammar src="/grm/1033/digits . xml" /> <bind targetElement="credit_card_number" /> </reco> !
4.2 Attributes and properties
4.2.1 Attributes
• dtmfgrammar: Required. The URI of a DTMF grammar.
4.2.2 Properties
• DTMFgrammar: Read-Write. An XML DOM Node object representing the DTMF to string conversion matrix (also called the DTMF grammar). The default grammar is:

<dtmfgrammar>
  <key value="0">0</key>
  <key value="1">1</key>
  ...
  <key value="9">9</key>
  <key value="*">*</key>
  <key value="#">#</key>
</dtmfgrammar>
• flush
Read-Write. A Boolean flag indicating whether to automatically flush the DTMF buffer on the underlying telephony interface card before activation. Default is false, to enable type-ahead.
• escape
Read-Write. The escape key used to end the DTMF reading session. The escape key is a single key.
• numDigits
Read-Write. Number of key strokes to end the DTMF reading session. If both escape and length are specified, the DTMF session is ended when either condition is met.
• dtmfResult
Read-only string storing the DTMF keys the user has entered. The escape key is included in the result if typed.
• text
Read-only string storing a white-space-separated token string, where each token is converted according to the DTMF grammar.
• initialTimeout
Read-Write. Timeout period for receiving the first DTMF keystroke, in milliseconds. If unspecified, defaults to the telephony platform's internal setting.
• interdigitTimeout
Read-Write. Timeout period between adjacent DTMF keystrokes, in milliseconds. If unspecified, defaults to the telephony platform's internal setting.
4.3 Object methods
4.3.1 Start
Enable DTMF interruption and start a DTMF reading session.
Syntax:
    Object.Start();
Return value:
    None.
Exception:
    None.
4.3.2 Stop
Disable DTMF. The key strokes entered by the user, however, remain in the buffer.
Syntax:
    Object.Stop();
Return value:
    None.
Exception:
    None.
4.3.3 Flush
Flush the DTMF buffer. Flush cannot be called during a DTMF session.
Syntax:
    Object.Flush();
Return value:
    None.
Exception:
    None.
4.4 Events
4.4.1 onkeypress
Fires when a DTMF key is pressed. This overrides the default event inherited from the HTML control. When the user hits the escape key, the onReco event fires, not onkeypress.
Syntax:
Event Object Info:
Event Properties:
Although the event handler does not receive properties directly, the handler can query the event object for data.
4.4.2 onReco
Fires when a DTMF session is ended. The event disables the current DTMF object automatically.
Syntax:
Event Object Info:
Although the event handler does not receive properties directly, the handler can query the event object for data.
4.4.3 onTimeout
Fires when no phrase finish event is received before the timeout. The event halts the recognition process automatically.
Syntax:
Event Object Info:
Event Properties:
Although the event handler does not receive properties directly, the handler can query the event object for data.
5 CallControl Object
Represents the telephone interface (call, terminal, and connection) of the telephone voice browser. This object is as native as the window object in a GUI browser. As such, the lifetime of the telephone object is the same as the browser instance itself. A voice browser for telephony instantiates the telephone object, one for each call. Users do not instantiate or dispose of the object.
At this point, only features related to first-party call controls are exposed through this object.
5.1 Properties
• address
Read-only. XML DOM node object. Implementation specific. This is the address of the caller. For PSTN, it may be a combination of ANI and ALI. For VoIP, this is the caller's IP address.
• ringsBeforeAnswer
Number of rings before answering an incoming call. Default is infinite, meaning the developer must specifically use the Answer() method below to answer the phone call. When the call center uses ACD to queue up the incoming phone calls, this number can be set to 0.
5.2 Methods
Note: all the methods here are synchronous.
5.2.1 Transfer
Transfers the call. For a blind transfer, the system may terminate the original call and free system resources once the transfer completes.
Syntax:
    telephone.Transfer(strText);
Parameters:
    o strText: Required. The address of the intended receiver.
Return value:
    None.
Exception:
    Throws an exception when the call transfer fails, e.g., when the end party is busy, there is no such number, or a fax or answering machine answers.
5.2.2 Bridge
Third-party transfer. After the call is transferred, the browser may release resources allocated for the call. It is up to the application to recover the session state when the transferred call returns using strUID; a hypothetical sketch of this round trip is given after the method description. The underlying telephony platform may route the returning call to a different browser. The call can return only when the recipient terminates the call.
Syntax:
    telephone.Bridge(strText, strUID, [imaxTime]);
Parameters:
    o strText: Required. The address of the intended receiver.
    o strUID: Required. The session ID uniquely identifying the current call. When the transferred call is routed back, the strUID will appear in the address attribute.
    o imaxTime: Optional. Maximum duration in seconds of the transferred call. If unspecified, defaults to a platform-internal value.
Return value:
    None.
Exception:
    None.
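A minimal sketch of this round trip, assuming the application keeps its own session table; the helper names readUidFromAddress and restoreState, and the way the UID is extracted from the implementation-specific address node, are illustrative assumptions:

<SCRIPT>
    // Hypothetical session bookkeeping for a third-party transfer.
    var sessions = {};
    function bridgeToAgent(agentAddress) {
        var uid = "session-" + new Date().getTime();   // illustrative unique session ID
        sessions[uid] = { state: "awaiting-return" };  // save whatever state matters
        telephone.Bridge(agentAddress, uid);
    }
</SCRIPT>
<SCRIPT for="callControl" event="onIncoming">
    // When the transferred call is routed back, the strUID appears in the
    // address attribute; how it is read out is implementation specific.
    var uid = readUidFromAddress(callControl.address);  // assumed helper
    if (uid != null && sessions[uid] != null)
        restoreState(sessions[uid]);                    // assumed helper
    callControl.Answer();
</SCRIPT>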
5.2.3 Answer
Answers the phone call.
Syntax:
    telephone.Answer();
Return value:
    None.
Exception:
    Throws an exception when there is no connection. No onAnswer event will be fired in this case.
5.2.4 Hangup
Terminates the phone call. Has no effect if no call is currently in progress.
Syntax:
    telephone.Hangup();
Return value:
    None.
Exception:
    None.
5.2.5 Connect
Starts a first-party outbound phone call.
Syntax:
    telephone.Connect(strText, [iTimeout]);
Parameters:
    o strText: Required. The address of the intended receiver.
    o iTimeout: Optional. The time in milliseconds before abandoning the attempt. If unspecified, defaults to a platform-internal value.
Return value:
    None.
Exception:
    Throws an exception when the call cannot be completed, including encountering busy signals or reaching a FAX or answering machine (Note: hardware may not support this feature).
5.2.6 Record
Records user audio to a file.
Syntax:
    telephone.Record(url, endSilence, [maxTimeout], [initialTimeout]);
Parameters:
    o url: Required. The url of the recorded results.
    o endSilence: Required. Time in milliseconds to stop recording after silence is detected.
    o maxTimeout: Optional. The maximum time in seconds for the recording. Default is platform-specific.
    o initialTimeout: Optional. Maximum time (in milliseconds) of silence allowed at the beginning of a recording.
Return value:
    None.
Exception:
    Throws an exception when the recording cannot be written to the url.
5.3 Event Handlers
Application developers using the telephone voice browser may implement the following event handlers.
5.3.1 onIncoming()
Called when the voice browser receives an incoming phone call. Developers can use this handler to read the caller's address and invoke customized features before answering the phone call.
5.3.2 onAnswer()
Called when the voice browser answers an incoming phone call.
5.3.3 onHangup()
Called when user hangs up the phone. This event is NOT automatically fired when the program calls the Hangup or Transfer methods.
5.4 Example
This example shows scripting wired to the call control events to manipulate the telephony session.
<HTML>
<HEAD>
<TITLE>Logon Page</TITLE>
</HEAD>
<SCRIPT>
    var focus;
    function RunSpeech() {
        if (logon.user.value == "") {
            focus="user";
            p_uid.Start(); g_login.Start(); dtmf.Start();
            return;
        }
        if (logon.pass.value == "") {
            focus="pin";
            p_pin.Start(); g_login.Start(); dtmf.Start();
            return;
        }
        p_thank.Start();
        logon.submit();
    }
    function login_reco() {
        res = event.srcElement.recoResult;
        pNode = res.selectSingleNode("//uid");
        if (pNode != null)
            logon.user.value = pNode.xml;
        pNode = res.selectSingleNode("//password");
        if (pNode != null)
            logon.pass.value = pNode.xml;
    }
    function dtmf_reco() {
        res = event.srcElement.dtmfResult;
        if (focus == "user")
            logon.user.value = res;
        else
            logon.pass.value = res;
    }
</SCRIPT>
<SCRIPT for="callControl" event="onIncoming">
    <!--
    // read address, prepare customized stuff if any
    callControl.Answer();
    //-->
</SCRIPT>
<SCRIPT for="callControl" event="onOffhook">
    <!--
    p_main.Start(); g_login.Start(); dtmf.Start(); focus="user";
    //-->
</SCRIPT>
<SCRIPT for="window" event="onload">
    <!--
    if (logon.user.value != "") {
        p_retry.Start();
        logon.user.value = "";
        logon.pass.value = "";
        checkFields();
    }
    //-->
</SCRIPT>
<BODY>
<reco id="g_login"
    onReco="login_reco(); RunSpeech()"
    timeout="5000"
    onTimeout="p_miss.Start(); RunSpeech()" >
    <grammar src="http://kokaneel/etradedemo/speechonly/login.xml" />
</reco>
<dtmf id="dtmf"
    escape="#"
    onkeypress="g_login.Stop();"
    onReco="dtmf_reco(); RunSpeech()"
    interdigitTimeout="5000"
    onTimeout="dtmf.Flush(); p_miss.Start(); RunSpeech()" />
<prompt id="p_main">Please say your user I D and pin number</prompt>
<prompt id="p_uid">Please just say your user I D</prompt>
<prompt id="p_pin">Please just say your pin number</prompt>
<prompt id="p_miss">Sorry, I missed that</prompt>
<prompt id="p_thank">Thank you. Please wait while I verify your identity</prompt>
<prompt id="p_retry">Sorry, your user I D and pin number do not match</prompt>
<H2>Login</H2>
<form id="logon">
    UID: <input name="user" type="text" onChange="RunSpeech()" />
    PIN: <input name="pass" type="password" onChange="RunSpeech()" />
</form>
</BODY>
</HTML>
6 Controlling dialog flow
6.1 Using HTML and script to implement dialog flow
This example shows how to implement a simple dialog flow which seeks values for input boxes and offers context-sensitive help for the input. It uses the title attribute on the HTML input mechanisms (used in a visual browser as a "tooltip" mechanism) to help form the content of the help prompt.
<html>
<title>Context Sensitive Help</title>
<head>
<script>
    var focus;
    function RunSpeech() {
        if (trade.stock.value == "") {
            focus="trade.stock";
            p_stock.Start();
            return;
        }
        if (trade.op.value == "") {
            focus="trade.op";
            p_op.Start();
            return;
        }
        // .. repeat above for all fields
        trade.submit();
    }
    function handle() {
        res = event.srcElement.recoResult;
        if (res.text == "help") {
            text = "Please just say ";
            text += document.all[focus].title;
            p_help.Start(text);
        } else {
            // proceed with value assignments
        }
    }
</script>
</head>
<body>
<prompt id="p_help" onComplete="checkFields()" />
<prompt id="p_stock" onComplete="g_stock.Start()">Please say the stock name</prompt>
<prompt id="p_op" onComplete="g_op.Start()">Do you want to buy or sell</prompt>
<prompt id="p_quantity" onComplete="g_quantity.Start()">How many shares?</prompt>
<prompt id="p_price" onComplete="g_price.Start()">What's the price</prompt>
<reco id="g_stock" onReco="handle(); checkFields()" >
    <grammar src="./g_stock.xml" />
</reco>
<reco id="g_op" onReco="handle(); checkFields()" >
    <grammar src="./g_op.xml" />
</reco>
<reco id="g_quantity" onReco="handle(); checkFields()" >
    <grammar src="./g_quant.xml" />
</reco>
<reco id="g_price" onReco="handle(); checkFields()" >
    <grammar src="./g_quant.xml" />
</reco>
<form id="trade">
    <input name="stock" title="stock name" />
    <select name="op" title="buy or sell">
        <option value="buy" />
        <option value="sell" />
    </select>
    <input name="quantity" title="number of shares" />
    <input name="price" title="price" />
</form>
</body>
</html>
6.2 Using SMIL
The following example shows activation of prompt and reco elements using SMIL mechanisms.
<html xmlns : t="urn: schemas-microsoft-com: time" xmlns : sp="urn : schemas-microsoft- com: speech"> <head> <style>
.time { behavior: url (#default#time2) ; } </style> </head> <body>
<input name="txtBoxOrigin" type="text"/> <input name="txtBoxDest" type="text" /> <sp:prompt class="time" t:begin="0">
Please say the origin and destination cities
</sp :prompt>
<t:par t :begin="time. end" t : repeatCount="indefinitely" <sp:reco class="time" >
<grammar src=" . /cit . xml" /> <bind targetElement="txtBoxOrigin" value="//origin_city" /> <bind targetElement="txtBoxDest" test="/sml/dest_city [Θconfidence $gt$ 40]" value="//dest_city" /> </sp:reco> </t :par> </body> </html>
APPENDIX B
1 QA speech control
The QA control adds speech functionality to the primary control to which it is attached. Its object model is an abstraction of the content model of the exemplary tags in Appendix A.
1.1 QA control
<Speech:QA
    id="..."
    controlsToSpeechEnable="..."
    speechIndex="..."
    ClientTest="..."
    runat="server" >
    <Question ...>
    <Statement ...>
    <Answer ...>
    <Confirm ...>
    <Command ...>
</Speech:QA>
1.1.1 Core properties
string ControlsToSpeechEnable
ControlsToSpeechEnable specifies the list of IDs of the primary controls to speech enable. IDs are comma delimited.
1.1.2 Activation mechanisms
int SpeechIndex
SpeechIndex specifies the ordering information of the QA control - this is used by RunSpeech. Note: If more than one QA control has the same SpeechIndex, RunSpeech will execute them in source order. In situations where some QA controls have SpeechIndex specified and some QA controls do not, RunSpeech will order the QA controls first by SpeechIndex, then by source order.
string ClientTest
ClientTest specifies a client-side script function which returns a boolean value to determine when the QA control is considered available for selection by the RunSpeech algorithm. The system strategy can therefore be changed by using this as a condition to activate or de-activate QA controls more sensitively than SpeechIndex. If not specified, the QA control is considered available for activation.
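As a minimal illustrative sketch (the control and function names here are assumptions), a ClientTest might make a QA control available only while its primary control is still empty:

<script>
    // Hypothetical activation test: the QA control stays available for
    // selection by RunSpeech only while TextBox1 has no value yet.
    function TextBox1IsEmpty() {
        return TextBox1.value == "";
    }
</script>

The QA control would then reference the function by name, e.g. ClientTest="TextBox1IsEmpty".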
1.1.3 Questions, Statements, Answers, Confirms and Commands
Question [] Questions
The QA control contains an array of question objects or controls, defined by the dialog author. Each question control will typically relate to a function of the system, e.g. asking for a value. Each question control may specify an activation function using the ClientTest attribute, so an active QA control may ask different kinds of questions about its primary control under different circumstances. For example, the activation condition for the main question Q_Main may be that the corresponding primary control has no value, and the activation condition for a Q_GiveHelp may be that the user has just requested help. Each question may specify the answer controls from within the QA control which are activated when the question control is output.
Statement [] Statement
The QA control contains an array of statement objects or controls. Statements are used to provide information to the listener, such as welcome prompts.
Answer [] Answers
The QA control contains an array of answer objects or controls. An answer control is activated directly by a question control within the QA control, or by a StartEvent from the primary control. Where multiple answers are used, they will typically reflect answers to the system functions, e.g. A_Main might provide a value in response to Q_Main, and A_Confirm might provide a yes/no plus correction in response to a confirm.
Confirm[] Confirm
The QA control may contain a confirm object or control. This object is a mechanism provided to dialog authors which simplifies the authoring of common confirmation subdialogs.
Command [] Command
A Command array holds a set of command controls. Command controls can be thought of as answer controls without question controls, whose behavior on recognition can be scoped down the control tree.
1.2 Question control
The question control is used for the speech output relating to a given primary control. It contains a set of prompts for presenting information or asking a question, and a list of IDs of the answer controls which may provide an answer to that question. If multiple answer controls are specified, their grammars are loaded in parallel when the question is activated. An exception will be thrown if no answer control is specified in the question control.
<Question id="„."
ClientTest="..."
Answers="..."
Count="..." initialTimeout=".„" babbleTimeout="..." maxTimeout="..."
Modal="..."
PromptFunction="..."
OnClientNoReco="..." >
<prompt ... />
</Question>
string ClientTest
ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances a question control is considered active within its QA control (the QA control itself must be active for the question to be evaluated) . For a given QA control, the first question control with a true condition is selected for output. For example, the function may be used to determine whether to output a question which asks for a value ("Which city do you want?") or which attempts to confirm it ("Did you say London?") . If not specified, the question condition is considered true.
Prompt [] Prompts
The prompt array specifies a list of prompt objects, discussed below. Prompts are also able to specify conditions of selection (via client functions), and during RunSpeech execution only the first prompt with a true condition is selected for playback.
String Answers
Answers is an array of references by ID to controls that are possible answers to the question. The behavior is to activate the grammar from each valid answer control in response to the prompt asked by the question control.
Integer initialTimeout
The time in milliseconds between the start of recognition and the detection of speech. This value is passed to the recognition platform, and if exceeded, an onSilence event will be thrown from the recognition platform. If not specified, the speech platform will use a default value.
Integer babbleTimeout
The period of time in milliseconds in which the recognition server or other recognizer must return a result after detection of speech. For recos in "tap-and-talk" scenarios this applies to the period between speech detection and the recognition result becoming available. For recos in dictation scenarios, this timeout applies to the period between speech detection and each recognition return - i.e. the period is restarted after each return of results or other event. If exceeded, the onClientNoReco event is thrown but different status codes are possible. If there has been any kind of recognition platform error that is detectable and the babbleTimeout period has elapsed, then an onClientNoReco is thrown with a status code of -3. Otherwise, if the recognizer is still processing audio - e.g. in the case of an exceptionally long utterance or if the user has kept the pen down for an excessive amount of time - the onClientNoReco event is thrown with status code -15. If babbleTimeout is not specified, the speech platform will default to an internal value.
Integer maxTimeout
The period of time in milliseconds between recognition start and results returned to the client device browser. If exceeded, the onMaxTimeout event is thrown by the browser - this caters for network or recognizer failure in distributed environments. For recos in dictation scenarios, as with babbleTimeout, the period is restarted after the return of each recognition or other event. Note that the maxTimeout attribute should be greater than or equal to the sum of initialTimeout and babbleTimeout. If not specified, the value will be a browser default.
bool modal
When modal is set to true, no answers except the immediate set of answers to the question are activated (i.e. no scoped Answers are considered). The default is false. For example, this attribute allows the application developer to force the user of the client device to answer a particular question.
string PromptFunction(prompt)
PromptFunction specifies a client-side function that will be called once the question has been selected but before
the prompt is played. This gives a chance to the application developer to perform last minute modifications to the prompt that may be required. PromptFunction takes the ID of the target prompt as a required parameter.
string OnClientNoReco
OnClientNoReco specifies the name of the client-side function to call when the NoReco (mumble) event is received.
1.2.1 Prompt Object
The prompt object contains information on how to play prompts. All the properties defined are read/write properties .
<prompt id="„." count="..."
ClientTest="..." source="..." bargeln="..." onClientBargein="..." onClientComplete="..." onClientBookmark="..." >
...text/markup of the prompt... </prompt>
int count
Count specifies an integer which is used for prompt selection. When the value of the count specified on a prompt matches the value of the count of its question control, the prompt is selected for playback. Legal values are 0 - 100.
<Question id=Q_Ask">
<prompt count="l"> Hello </prompt> <prompt count="2"> Hello again </prompt>
</Question>
In the example, when Q_Ask. count is equal to 1, the first prompt is played, and if it is equal to 2 (i.e. the question has already been output before) , the second prompt is then played.
string ClientTest
ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances a prompt within an active question control will be selected for output. For a given question control, the first prompt with a true condition is selected. For example, the function may be used to implement prompt tapering, e.g. ("Which city would you like to depart from?" for a function returning true if the user is a first-timer, or "Which city?" for an old hand). If not specified, the prompt's condition is considered true.
string InlinePrompt
The prompt property contains the text of the prompt to play. This is defined as the content of the prompt element. It may contain further markup, as in TTS rendering information, or <value> elements. As with all parts of the page, it may also be specified as script code within <script> tags, for dynamic rendering of prompt output.
string Source
Source specifies the URL from which to retrieve the text of the prompt to play. If an inline prompt is specified, this property is ignored.
bool Bargein
Bargein is used to specify whether or not barge-in (wherein the user of the client device begins speaking while a prompt is being played) is allowed on the prompt. The default is true.
string onClientBargein
onClientBargein specifies the client-side script function which is invoked by the bargein event.
string onClientComplete
onClientComplete specifies the client-side script function which is invoked when the playing of the prompt has completed.
string OnClientBookmark
OnClientBookmark accesses the name of the client-side function to call when a bookmark is encountered.
1.2.2 Prompt selection
On execution by RunSpeech, a QA control selects its prompt in the following way:
ClientTest and the count attribute of each prompt are evaluated in order. The first prompt with both ClientTest and count true is played. A missing count is considered true. A missing ClientTest is considered true.
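A minimal sketch of this selection step in client-side script, assuming a prompts array whose entries expose ClientTest and count and a parent question exposing Count (the helper itself is illustrative, not part of the specification):

    // Returns the first prompt whose ClientTest and count are both true.
    // A missing ClientTest or count is treated as true.
    function selectPrompt(question) {
        for (var i = 0; i < question.Prompts.length; i++) {
            var p = question.Prompts[i];
            var testOk  = (p.ClientTest == null) || p.ClientTest();
            var countOk = (p.count == null) || (p.count == question.Count);
            if (testOk && countOk)
                return p;   // this prompt is played
        }
        return null;        // no prompt qualifies
    }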
1.3 Statement Control
Statement controls are used for information-giving system output when the activation of grammars is not required. This is common in voice-only dialogs. Statements are played only once per page if the playOnce attribute is true.
<Statement id="..." playOnce="..." ClientTest= _'7 --* PromptFunction="..." > <prompt ... />
</Statement >
bool playOnce
The playOnce attribute specifies whether or not a statement control may be activated more than once per page. playOnce is a Boolean attribute with a default (if not specified) of TRUE, i.e., the statement control is executed only once. For example, the playOnce attribute may be used on statement controls whose purpose is to output email messages to the end user. Setting playOnce="False" will provide dialog authors with the capability to enable a "repeat" functionality on a page that reads email messages.
string ClientTest
ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances a statement control will be selected for output. RunSpeech will activate the first Statement with ClientTest equal to true. If not specified, the ClientTest condition is considered true.
String PromptFunction
PromptFunction specifies a client-side function that will be called once the statement control has been selected but before the prompt is played. This gives a chance to the authors to do last minute modifications to the prompt that may be required.
Prompt [] Prompt
The prompt array specifies a list of prompt objects. Prompts are also able to specify conditions of selection (via client functions), and during RunSpeech execution only the first prompt with a true condition is selected for playback.
<Speech:QA id="QA_Welcome"
ControlsToSpeechEnable="Labell" runat="server" >
<Statement id="WelcomePrompt" >
<prompt bargeIn="False"> Welcome </prompt> </Statement> </Speech:QA>
1.4 Confirm Control
Confirm controls are special types of question controls. They may hold all the properties and objects of other question controls, but they are activated differently. The RunSpeech algorithm will check the confidence score of the answer control of the ControlsToSpeechEnable against its ConfirmThreshold. If it is too low, the confirm control is activated. If the confidence score of the answer control is below the ConfirmThreshold, the binding is done but the onClientReco method is not called. The dialog author may specify more than one confirm control per QA control. RunSpeech will determine which confirm control to activate based on the function specified by ClientTest.
<Answer ConfirmThreshold=... />
<Confirm>
    ...all attributes and objects of Question...
</Confirm>
1.5 Answer control
The answer control is used to specify speech input resources and features. It contains a set of grammars related to the primary control. Note that an answer may be used independently of a question, in multimodal applications without prompts, for example, or in telephony applications where user initiative may be enabled by extra answers. Answer controls are activated directly by question controls, by a triggering event, or by virtue of explicit scope. An exception will be thrown if no grammar object is specified in the answer control.
<Answer id="..." scope="..." StartEvent="..." StopEvent="..." ClientTest="..." onClientReco= onClientDTMF= ri rr autobind="..." server="..." ConfirmThreshold="..." RejectThreshold="..." >
<grammar ... /> <grammar ... /> <dtmf ... />
<dtmf ... />
<bind ... /> <bind ... />
</Answer>
string Scope
Scope holds the id of any named element on the page. Scope is used in answer control for scoping the availability of user initiative (mixed task initiative: i.e. service jump digressions) grammars. If scope is specified in an answer control, then it will be activated whenever a QA control corresponding to a primary control within the subtree of the contextual control is activated.
string StartEvent
StartEvent specifies the name of the event from the primary control that will activate the answer control (start the Reco object) . This will be typically used in multi-modal applications, eg onMouseDown, for tap-and-talk.
string StopEvent
StopEvent specifies the name of the event from the primary control that will de-activate the answer control (stop the Reco object) . This will be typically used in multi-modal applications, eg onMouseUp, for tap-and-talk.
string ClientTest
ClientTest specifies the client-side script function returning a boolean value which determines under which circumstances an answer control otherwise selected by scope or by a question control will be considered active. For example, the test could be used during confirmation for a 'correction' answer control to disable itself when activated by a question control but mixed initiative is not desired (leaving only accept/deny answer controls active). Or a scoped answer control which permits a service jump can determine more flexible means of activation by
specifying a test which is true or false depending on another part of the dialog. If not specified, the answer control's condition is considered true.
Grammar [] Grammars
Grammars accesses a list of grammar objects.
DTMF[] DTMFs
DTMFs holds an array of DTMF objects.
Bind[] Binds
Binds holds a list of the bind objects necessary to map the answer control grammar results (dtmf or spoken) into control values. All binds specified for an answer will be executed when the relevant output is recognized. If no bind is specified, the SML output returned by recognition will be bound to the control specified in the ControlsToSpeechEnable of the QA control
string OnClientReco
OnClientReco specifies the name of the client-side function to call when spoken recognition results become available.
string OnClientDTMF OnClientDTMF holds the name of the client-side function to call when DTMF recognition results become available.
boolean autobind
The value of autobind determines whether or not the system default bindings are implemented for a recognition return from the answer control. If unspecified, the default is
true. Setting autobind to false is an instruction to the system not to perform the automatic binding.
string server
The server attribute is an optional attribute specifying the URI of the speech server to perform the recognition. This attribute overrides the URI of the global speech server attribute.
integer ConfirmThreshold
Holds a value representing the confidence level below which a confirm control question will be automatically triggered immediately after an answer is recognized within the QA control. Legal values are 0-100.
Note that where bind statements and OnClientReco scripts are both specified, the semantics of the resulting Tags are that binds are implemented before the script specified in OnClientReco.
integer RejectThreshold
RejectThreshold specifies the minimum confidence score to consider returning a recognized utterance. If overall confidence is below this level, a NoReco event will be thrown. Legal values are 0-100.
1.5.1 Grammar
The grammar object contains information on the selection and content of grammars, and the means for processing recognition results. All the properties defined are read/write properties.
<Grammar
    ClientTest="..."
    Source="..." >
    ...grammar rules...
</Grammar>
string ClientTest
The ClientTest property references a client-side boolean function which determines under which conditions a grammar is active. If multiple grammars are specified within an answer control (e.g. to implement a system/mixed initiative strategy, or to reduce the perplexity of possible answers when the dialog is going badly) , only the first grammar with a true ClientTest function will be selected for activation during RunSpeech execution. If this property is unspecified, true is assumed.
string Source
Source accesses the URI of the grammar to load, if specified.
string InlineGrammar
InlineGrammar accesses the text of the grammar if specified inline. If that property is not empty, the Source attribute is ignored.
1.5.2 Bind
The object model for bind follows closely its counterpart client side tags. Binds may be specified both for spoken grammar and for DTMF recognition returns in a single answer control.
<bind
    Value="..."
    TargetElement="..."
    TargetAttribute="..."
    Test="..." />
string Value
Value specifies the text that will be bound into the target element. It is specified as an XPath on the SML output from recognition.
string TargetElement
TargetElement specifies the id of the primary control to which the bind statement applies. If not specified, this is assumed to be the ControlsToSpeechEnable of the relevant QA control.
string TargetAttribute
TargetAttribute specifies the attribute on the TargetElement control in which to bind the value. If not specified, this is assumed to be the Text property of the target element.
string Test
The Test attribute specifies a condition which must evaluate to true on the binding mechanism. This is specified as an XML Pattern on the SML output from recognition.
1.5.2.1 Automatic binding
The default behavior on the recognition return to a speech- enabled primary control is to bind certain properties into that primary control. This is useful for the dialog controls to examine the recognition results from the primary controls across turns (and even pages) . Answer controls will perform the following actions upon receiving recognition results:
1. bind the SML output tree into the SML attribute of the primary control
2. bind the text of the utterance into the SpokenText attribute of the primary control
3. bind the confidence score returned by the recognizer into the Confidence attribute of the primary control.
Unless autobind="False" attribute is specified on an answer control, the answer control will perform the following actions on the primary control:
1. bind the SML output tree into the SML attribute;
2. bind the text of the utterance into the SpokenText attribute;
3. bind the confidence score returned by the recognizer into the Confidence attribute;
Any values already held in the attribute will be overwritten. Automatic binding occurs before any author-specified bind commands, and hence before any OnClientReco script (which may also bind to these properties).
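A minimal sketch of what these default bindings amount to on the client; the helper and the shape of the recognition result object are illustrative assumptions, not part of the specification:

    // Illustrative autobind step applied to a speech-enabled primary control.
    function autoBind(primaryControl, result) {
        primaryControl.SML = result.sml;                // 1. SML output tree
        primaryControl.SpokenText = result.text;        // 2. recognized utterance
        primaryControl.Confidence = result.confidence;  // 3. recognizer confidence
        // Existing values are overwritten; author-specified binds and any
        // OnClientReco script run after this step.
    }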
1.5.3 DTMF
DTMF may be used by answer controls in telephony applications. The DTMF object essentially applies a different modality of grammar (a keypad input grammar rather than a speech input grammar) to the same answer. The DTMF content model closely matches that of the client side output Tags DTMF element. Binding mechanisms for DTMF returns are specified using the targetAttribute attribute of DTMF object.
<DTMF firstTimeOut="..." interDigitTimeOut=' numDigits="..." flush="..." escape="..." targetAttribute="...'
ClientTest="...">
<dtmfGrammar ...> </DTMF>
integer firstTimeOut
The number of milliseconds to wait between activation and the first key press before raising a timeout event.
integer interDigitTimeOut
The number of milliseconds to wait between key presses before raising a timeout event.
int numDigits
The maximum number of key inputs permitted during DTMF recognition.
bool flush
A flag which states whether or not to flush the telephony server's DTMF buffer before recognition begins. Setting flush to false permits DTMF key input to be stored between recognition/page calls, which permits the user to 'type-ahead'.
string escape
Holds the string value of the key which will be used to end DTMF recognition (e.g. '#').
string targetAttribute
TargetAttribute specifies the property on the primary control in which to bind the value. If not specified, this is assumed to be the Text property of the primary control.
string ClientTest
The ClientTest property references a client-side boolean function which determines under which conditions a DTMF grammar is active. If multiple grammars are specified within a DTMF object, only the first grammar with a true ClientTest function will be selected for activation during RunSpeech execution. If this property is unspecified, true is assumed.
1.5.4 DTMFGrammar
DTMFGrammar maps a key to an output value associated with the key. The following sample shows how to map the "1" and "2" keys to text output values.
<dtmfgrammar>
<key value="l">Seattle</key> <key value="2">Boston</key>
</dtmfgrammar>
1.6 Command control
The command control is a special variation of the answer control which can be defined in any QA control. Command controls are forms of user input which are not answers to the question at hand (e.g., Help, Repeat, Cancel), and which do not need to bind recognition results into primary controls. If the QA control specifies an activation scope, the command grammar is active for every QA control within that scope. Hence a command does not need to be activated directly by a question control or an event, and its grammars are activated in parallel, independently of the answer control building process. Command controls of the same type at QA controls lower in scope can override superior commands with context-sensitive behavior (and even different / extended grammars if necessary).
<Command id="..." scope="..." type="..."
Rej ectThreshold="..." onClientReco="..." > <Grammar ...>
<dtmf ... >
</Command>
string Scope
Scope holds the id of a primary control. Scope is used in command controls for scoping the availability of the command grammars. If scope is specified for a command control, the command's grammars will be activated whenever
a QA control corresponding to a primary control within the subtree of the contextual control is activated.
string type
Type specifies the type of command (e.g. 'Help', 'Cancel', etc.) in order to allow the overriding of identically typed commands at lower levels of the scope tree. Any string value is possible in this attribute, so it is up to the author to ensure that types are used correctly.
integer RejectThreshold
RejectThreshold specifies the minimum confidence level of recognition that is necessary to trigger the command in recognition (this is likely to be used when higher than usual confidence is required, e.g. before executing the result of a 'Cancel' command). Legal values are 0-100.
string OnClientReco
OnClientReco specifies the client-side script function to execute on recognition of the command control's grammar.
Grammar Grammar
The grammar object which will listen for the command.
DTMF DTMF
The dtmf object which will activate the command.
2 Types of Initiatives and Dialog Flows
Using the controls described above, various forms of initiative can be developed; some examples are provided below:
2.1 Mixed initiative Dialogs
Mixed initiative dialogs provide the capability of accepting input for multiple controls with the asking of a single question. For example, the answer to the question "what are your travel plans" may provide values for an origin city textbox control, a destination city textbox control and a calendar control ("Fly from Puyallup to Yakima on September 30th") .
A robust way to encode mixed initiative dialogs is to handwrite the mixed initiative grammar and relevant binding statements, and apply these to a single control.
The following example shows a single page used for a simple mixed initiative voice interaction about travel. The first QA control specifies the mixed initiative grammar and binding, and a relevant prompt asking for two items. The second and third QA controls are not mixed initiative, and so bind directly to their respective primary control by default (so no bind statements are required). The RunSpeech algorithm will select the QA controls based on an attribute "SpeechIndex" and whether or not their primary controls hold valid values.
<%@ Page language="c#" AutoEventWireup="false" inherits="SDN.Page" %>
<%@ Register tagPrefix="SDN" Namespace="SDN" Assembly="SDN"
%>
<html>
<body>
<Form id="WebForm1" method=post runat="server">
<ASP:Label id="Label1" runat="server">Departure city</ASP:Label>
<ASP:TextBox id="TextBox1" runat="server" />
<br>
<ASP:Label id="Label2" runat="server">Arrival city</ASP:Label>
<ASP:TextBox id="TextBox2" textchanged="TextChanged" runat="server" />
<!-- speech information -->
<Speech:QA id="QAmixed" controlsToSpeechEnable="TextBox1"
    speechIndex="1" runat="server">
    <Question id="Q1" Answers="A1">
        <prompt>"Please say the cities you want to fly from and to"</prompt>
    </Question>
    <Answer id="A1" >
        <grammar src="..."/>
        <bind targetElement="TextBox1" value="/sml/path1"/>
        <bind targetElement="TextBox2" value="/sml/path2"/>
    </Answer>
</Speech:QA>
<Speech:QA id="QA1" controlsToSpeechEnable="TextBox1"
    speechIndex="2" runat="server">
    <Question id="Q1" Answers="A1">
        <prompt>"What's the departure city?"</prompt>
    </Question>
    <Answer id="A1">
        <grammar src="..."/>
    </Answer>
</Speech:QA>
<Speech:QA id="QA2" controlsToSpeechEnable="TextBox2"
    speechIndex="3" runat="server">
    <Question id="Q1" Answers="A1">
        <prompt>"What's the arrival city"</prompt>
    </Question>
    <Answer id="A1" >
        <grammar src="..."/>
    </Answer>
</Speech:QA>
</Form>
</body>
</html>
2.2 Complex Mixed Initiative
Application developers can specify several answers to the same question control with different levels of initiative. Conditions are specified that will select one of the answers when the question is asked, depending on the initiative settings that they require. An example is provided below:
<Speech:QA id="QA_Panel2"
ControlsToSpeechEnable="Panel2' runat="server" >
<Question answers="systemlnitiative, mixedlnitiative" .../>
<Answer id="systemlnitiative"
ClientTest="systemInitiativeCond" onClientReco="Simpleϋρdate" >
<grammar src="systemlnitiative .'gram" /> </Answer> <Answer id="mixedlnitiative"
ClientTest="mixedInitiativeCond" onClientReco="Mixedϋpdate" >
<grammar src="mixedlnitiative. gram" /> </Answer> </Speech:QA>
Application developers can also specify several question controls in a QA control. Some question controls can allow a mixed initiative style of answer, whilst others are more directed. By authoring conditions on these question controls, the application developer can select between the questions depending on the dialogue situation.
In the following example the mixed initiative question asks for the values of the two textboxes at the same time (e.g., 'what are your travel plans?') and calls the mixed initiative answer (e.g., 'from London to Seattle'). If this fails, then the value of each textbox is asked for separately (e.g., 'where do you leave from' and 'where are you going to') but, depending on the conditions, the mixed-initiative grammar may still be activated, thus allowing users to provide both values.
<Speech:QA id="QA_Panel2"
ControlsToSpeechEnable="TextBoxl, TextBox2' runat="server" >
<Question
ClientTest="AUEmpty ( ) " answer s="AnsAll"
.../> <Question
ClientTest="TextBoxlIsEmpty ( ) " answers="AnsAll, AnsTextBoxl" .../> <Question
ClientTest="TextBox2IsEmpty ( ) " answers="AnsAll, AnsTextBox2" .../>
<Answer id="AnsTextBoxl" onClientReco="SimpleUpdate"> <grammar src="AnsTextBoxl . gram" />
</Answer> <Answer id="AnsTextBox2" onClientReco="SimpleUpdate" > <grammar src=" AnsTextBox2. gram" />
</Answer> <Answer id="AnsAll"
ClientTest="IsMixedInitAllowed ( ) " onClientReco="Mixedϋpdate"
>
<grammar src="AnsAll . gram" /> </Answer> </Speech:QA> 2.3 User initiative
Similar to the command control, a standard QA control can specify a scope for the activation of its grammars. Like a command control, this QA control will activate the grammar from a relevant answer control whenever another QA control is activated within the scope of this context. Note that its question control will only be asked if the QA control itself is activated.
<Speech:QA id="QA__Panel2"
ControlsToSpeechEnable="Panel2" runat="server" >
<Question ... /> <Answer id="AnswerPanel2" scope="Panel2" onClientReco="UpdatePanel2 () " > <grammar src="Panel2. gram" /> </Answer> </Speech:QA>
This is useful for dialogs which allow 'service jumping' - user responses about some part of the dialog which is not directly related to the question control at hand.
2.4 Short time-out confirms
Application developers can write a confirmation as usual but set a short time-out. In the timeout handler, code is provided that accepts the current value as correct.
<Speech:QA id="QA_Panel2"
ControlsToSpeechEnable="Panel2' >
<Confirm timeOut="20" onClientTimeOut="AcceptConfirmation"... /> <Answer id="CorrectPanel2" onClientReco="UpdatePanel2 ( ) " > <grammar src="Panel2.gram" />
</Answer> . </Speech:QA>
2.5 Dynamic prompt building and editing
The promptFunction script is called after a question control is selected but before a prompt is chosen and played. This lets application developers build or modify the prompt at the last minute. In the example below, this is used to change the prompt depending on the level of experience of the users.
<script language="javascript">
    function GetPrompt() {
        if (experiencedUser == true)
            Prompt1.Text = "What service do you want?";
        else
            Prompt1.Text = "Please choose between e-mail, calendar and news";
        return;
    }
</script>
<Speech:QA id="QA_Panel2"
    ControlsToSpeechEnable="Panel2"
    runat="server" >
    <Question PromptFunction="GetPrompt" ... >
        <Prompt id="Prompt1" />
    </Question>
    <Answer ... />
</Speech:QA>
2.6 Using semantic relationships
Recognition and use of semantic relationships can be done by studying the result of the recognizer inside the onReco event handler.
<script language="javascript"> function Reco ( ) { /*
Application developers can access the SML returned by the recogniser or recognition server. If a semantic relationship (like sport-news) is identified, the confidence of the individual elements can be increased or take any other appropriate action.
*/
} </script>
<Speech : QA id="QA_Panel2"
ControlsToSpeechEnable="Panel2 " runat="server" >
<Question ... />
<Answer onClientReco="Reco" >
<grammar src="Panel2 . gram" /> </Answer> </Speech : QA>
3 Implementation and Application of RunSpeech
A mechanism is needed to provide voice-only clients with the information necessary to properly render speech-enabled pages. Such a mechanism must provide the execution of dialog logic and maintain state of user prompting and grammar activation as specified by the application developer.
Such a mechanism is not needed for multimodal clients. In the multimodal case, the page containing speech-enabled controls is visible to the user of the client device. The user of the client device may provide speech input into any visible speech-enabled control in any desired order using a multimodal paradigm.
The mechanism used by voice-only clients to render speech-enabled pages is the RunSpeech script or algorithm. The RunSpeech script relies upon the SpeechIndex attribute of the QA control and the SpeechGroup control discussed below.
3.1 SpeechControl
During run time, the system parses a control script or webpage having the server controls and creates a tree structure of server controls. Normally the root of the tree is the Page control. If the control script uses a custom or user control, the children tree of this custom or user control is expanded. Every node in the tree has an ID and it is easy to have name conflicts in the tree when it expands. To deal with possible name conflicts, the system includes the concept of a NamingContainer. Any node in the tree can implement NamingContainer, and its children live within that name space.
The QA controls can appear anywhere in the server control tree. In order to easily deal with SpeechIndex and manage client side rendering, a SpeechGroup control is provided. The SpeechGroup control is hidden from the application developer.
One SpeechGroup control is created and logically attached to every NamingContainer node that contains QA controls in its children tree. QA and SpeechGroup controls are considered members of their direct NamingContainer's SpeechGroup. The top level SpeechGroup control is attached to the Page object. This membership logically constructs a tree - a logical speech tree - of QA controls and SpeechGroup controls.
For simple speech-enabled pages or script (i.e., pages that do not contain other NamingContainers), only the root SpeechGroup control is generated and placed in the page's server control tree before the page is sent to the voice-only client. The SpeechGroup control maintains information regarding the number and rendering order of QA controls on the page.
For pages containing a combination of QA control (s) and NamingContainer (s) , multiple SpeechGroup controls are generated: one SpeechGroup control for the page (as described above) and a SpeechGroup control for each NamingContainer. For a page containing NamingContainers, the page-level SpeechGroup control maintains QA control
information as described above as well as number and rendering order of composite controls. The SpeechGroup control associated with each NamingContainer maintains the number and rendering order of QAs within each composite.
The main job of the SpeechGroup control is to maintain the list of QA controls and SpeechGroups on each page and/or the list of QA controls comprising a composite control. When the client side markup script (e.g. HTML) is generated, each SpeechGroup writes out a QACollection object on the client side. A QACollection has a list of QA controls and QACollections . This corresponds to the logical server side speech tree. The RunSpeech script will query the page-level QACollection object for the next QA control to invoke during voice-only dialog processing.
The page level SpeechGroup control located on each page is also responsible for:
■ Determining that the requesting client is a voice-only client; and
■ Generating common script and supporting structures for all QA controls on each page.
When the first SpeechGroup control renders, it queries the System.Web.UI.Page.Request.Browser property for the browser string. This property is then passed to the RenderSpeechHTML and RenderSpeechScript methods for each QA control on the page. The QA control will then render for the appropriate client (multimodal or voice-only).
3.2 Creation of SpeechGroup controls
During server-side page loading, the onLoad event is sent to each control on the page. The page-level SpeechGroup control is created by the first QA control receiving the onLoad event. The creation of SpeechGroup controls is done in the following manner: (assume a page containing composite controls)
Every QA control will receive the onLoad event from the run time code. onLoad for a QA control proceeds as follows (a short script sketch of this recursion is given after the list):
• Get the QA control's NamingContainer N1.
• Search for a SpeechGroup in N1's children:
    o If one already exists, register the QA control with this SpeechGroup. onLoad returns.
    o If not found:
        ■ Create a new SpeechGroup G1 and insert it into N1's children.
        ■ If N1 is not the Page, find N1's NamingContainer N2.
        ■ Search for a SpeechGroup in N2's children; if one exists, say G2, add G1 to G2. If not, create a new one, G2, and insert it into N2's children.
        ■ Recurse until the NamingContainer is the Page (top level).
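A minimal, illustrative sketch of this recursion in client-style script; the node shapes (children, namingContainer, isPage) stand in for the server control tree and are not part of the specification:

    // Find or create the SpeechGroup attached to a NamingContainer node,
    // attaching new groups to the parent container's group recursively.
    function ensureSpeechGroup(container) {
        for (var i = 0; i < container.children.length; i++) {
            if (container.children[i].type == "SpeechGroup")
                return container.children[i];       // already exists
        }
        var group = { type: "SpeechGroup", members: [] };
        container.children.push(group);              // insert into children
        if (!container.isPage)                       // recurse toward the Page
            ensureSpeechGroup(container.namingContainer).members.push(group);
        return group;
    }
    // onLoad for a QA control: register it with its container's SpeechGroup.
    function onQALoad(qaControl) {
        ensureSpeechGroup(qaControl.namingContainer).members.push(qaControl);
    }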
During server-side page rendering, the Render event is sent to the speech-enabled page. When the page-level SpeechGroup control receives the Render event, it generates client side script to include RunSpeech.js and inserts it into the page that is eventually sent to the client device. It also calls
all its direct children to render speech related HTML and scripts. If a child is SpeechGroup, the child in turn calls its children again. In this manner, the server rendering happens along the server side logical speech tree.
When a SpeechGroup renders, it lets its children (which can be either QA or SpeechGroup controls) render speech HTML and scripts in the order of their SpeechIndex. But a SpeechGroup is hidden and does not naturally have a SpeechIndex. In fact, a SpeechGroup will have the same SpeechIndex as its NamingContainer, the one it attaches to. The NamingContainer is usually a UserControl or other visible control, and an author can set SpeechIndex on it.
3.3 RunSpeech
The purpose of RunSpeech is to permit dialog flow via logic which is specified in script or logic on the client. In one embodiment, RunSpeech is specified in an external script file, and loaded by a single line generated by the server-side rendering of the SpeechGroup control, e.g.:
<script language="javascript" src="/scripts/RunSpeech.js" />
The RunSpeech. js script file should expose a means for validating on the client that the script has loaded correctly and has the right version id, etc. The actual validation script will be automatically generated by the page class as inline functions that are executed after the attempt to load the file.
Linking to an external script is functionally equivalent to specifying it inline, yet it is both more efficient, since
browsers are able to cache the file, and cleaner, since the page is not cluttered with generic functions.
3.4 Events
3.4.1 Event wiring
Tap-and-talk multimodality can be enabled by coordinating the activation of grammars with the onMouseDown event. The wiring script to do this will be generated by the Page based on the relationship between controls (as specified in the ControlsToSpeechEnable property of the QA control).
For example, given an asp:TextBox and its companion QA control adding a grammar, the <input> and <reco> elements are output by each control's Render method. The wiring mechanism to add the grammar activation command is performed by client-side script generated by the Page, which changes the attribute of the primary control to add the activation command before any existing handler for the activation event:
<!-- Control output -->
<input id="TextBox1" type="text" .../>
<reco id="Reco1" ... >
    <grammar src="..." />
</reco>
<!-- Page output -->
<script>
    TextBox1.onMouseDown = "Reco1.Start();" + TextBox1.onMouseDown;
</script>
By default, hook up is via onmousedown and onmouseup events, but both StartEvent and StopEvent can be set by web page author.
The textbox output remains independent of this modification and the event is processed as normal if other handlers were present.
3.4.2 Page Class properties
The Page also contains the following properties which are available to the script at runtime (a brief usage sketch follows the list):
SML - a name/value pair for the ID of the control and its associated SML returned by recognition.
SpokenText - a name/value pair for the ID of the control and its associated recognized utterance.
Confidence - a name/value pair for the ID of the control and its associated confidence returned by the recognizer.
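A hypothetical sketch of reading these name/value pairs from client-side script; the exact access syntax (the Page object name and indexer form) and the control ID "TextBox1" are assumptions for illustration:

    // Illustrative only: inspect the runtime recognition results for a control.
    function logLastRecognition() {
        var id = "TextBox1";                     // assumed control ID
        var sml = Page.SML[id];                  // SML returned by recognition
        var utterance = Page.SpokenText[id];     // recognized utterance
        var confidence = Page.Confidence[id];    // confidence from the recognizer
        // A dialog author could branch on confidence here, e.g. to re-prompt.
    }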
4 RunSpeech Algorithm
The RunSpeech algorithm is used to drive dialog flow on the client device. This may involve system prompting and dialog management (typically for voice-only dialogs) , and/or processing of speech input (voice-only and multimodal dialogs) . It is specified as a script file referenced by URI from every relevant speech-enabled page (equivalent to inline embedded script) .
Rendering of the page for voice only browsers is done in the following manner:
The RunSpeech module or function works as follows (RunSpeech is called in response to document.onreadystate becoming "complete"):
(1) Find the first active QA control in speech index order (determining whether a QA control is active is explained below) .
(2) If there is no active QA control, submit the page.
(3) Otherwise, run the QA control.
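A minimal sketch of this top-level step, assuming the page's QA controls are already available in speech index order; the helper names (isQAActive, runQA) are illustrative, with isQAActive sketched after the activation rules below:

    // Top-level RunSpeech step: run the first active QA control, or submit.
    function runSpeechSketch(qaControls) {
        for (var i = 0; i < qaControls.length; i++) {
            if (isQAActive(qaControls[i])) {     // activation rules below
                runQA(qaControls[i]);            // assumed helper: runs the QA control
                return;
            }
        }
        document.forms[0].submit();              // no active QA control: submit the page
    }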
A QA control is considered active if and only if:
(1) The QA control's ClientTest either is not present or returns true, AND (2) The QA control contains an active question control or statement control (tested in source order) ,
AND (3) Either: a. The QA control contains only statement controls, OR b. At least one of the controls referenced by the QA control's ControlsToSpeechEnable has an empty or default value.
A question control is considered active if and only if:
(1) The question control's ClientTest either is not present or returns true, AND
(2) The question control contains an active prompt object .
A prompt object is considered active if and only if:
(1) The prompt object's ClientTest either is not present or returns true, AND
(2) The prompt object's Count is either not present, or is less than or equal to the Count of the parent question control.
A QA control is run as follows:
(1) Determine which question control or statement control is active and increment its Count.
(2) If a statement control is active, play the prompt and exit.
(3) If a question control is active, play the prompt and start the Recos for each active answer control and command control.
An answer control is considered active if and only if:
(1) The answer control's ClientTest either is not present or returns true, AND
(2) Either: a. The answer control was referenced in the active question control's Answers string, OR b. The answer control is in Scope.
A command control is considered active if and only if:
(1) It is in Scope, AND
(2) There is not another command control of the same Type lower in the scope tree.
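A condensed, illustrative sketch of the QA, question and prompt activation rules above; the object shapes (qa.questions, qa.statements, qa.controls, q.Prompts) are assumptions and the helpers are not part of the specification:

    function isPromptActive(p, question) {
        return (p.ClientTest == null || p.ClientTest()) &&
               (p.Count == null || p.Count <= question.Count);
    }
    function isQuestionActive(q) {
        if (q.ClientTest != null && !q.ClientTest()) return false;
        for (var i = 0; i < q.Prompts.length; i++)
            if (isPromptActive(q.Prompts[i], q)) return true;
        return false;
    }
    function isStatementActive(s) {
        // Simplified: playOnce bookkeeping is omitted from this sketch.
        return s.ClientTest == null || s.ClientTest();
    }
    function isQAActive(qa) {
        if (qa.ClientTest != null && !qa.ClientTest()) return false;
        var active = false;
        for (var i = 0; i < qa.questions.length && !active; i++)
            active = isQuestionActive(qa.questions[i]);
        for (var j = 0; j < qa.statements.length && !active; j++)
            active = isStatementActive(qa.statements[j]);
        if (!active) return false;
        // Statement-only QA controls qualify directly; otherwise at least one
        // referenced primary control must still hold an empty or default value.
        if (qa.questions.length == 0) return true;
        for (var k = 0; k < qa.controls.length; k++)
            if (qa.controls[k].value == "") return true;
        return false;
    }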
RunSpeech relies on events to continue driving the dialog - as described so far it would stop after running a single QA control. Event handlers are included for
Prompt.OnComplete, Reco.OnReco, Reco.OnSilence, Reco.OnMaxTimeout, and Reco.OnNoReco. Each of these will be described in turn.
RunSpeechOnComplete works as follows:
(1) If the active Prompt object has an OnClientComplete function specified, it is called.
(2) If the active Prompt object was contained within a statement control, or a question control which had no active answer controls, RunSpeech is called.
RunSpeechOnReco works as follows:
(1) Some default binding happens - the SML tree is bound to the SML attribute and the text is bound to the SpokenText attribute of each control in ControlsToSpeechEnable .
(2) If the confidence value of the recognition result is below the ConfidenceThreshold of the active answer control, the Confirmation logic is run.
(3) Otherwise, if the active answer control has an OnClientReco function specified, it is called, and then RunSpeech is called.
RunSpeechOnReco is responsible for creating and setting the SML, SpokenText and Confidence properties of the ControlsToSpeechEnable. The SML, SpokenText and Confidence properties are then available to scripts at runtime.
RunSpeechOnSilence, RunSpeechOnMaxTimeout, and RunSpeechOnNoReco all work the same way:
(1) The appropriate OnClientXXX function is called, if specified.
(2) RunSpeech is called.
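Because all three handlers reduce to the same two steps, a single shared sketch suffices; the callback argument stands in for the appropriate OnClientXXX function supplied by the page author.

```javascript
// Illustrative shared handler for Reco.OnSilence, Reco.OnMaxTimeout and
// Reco.OnNoReco.  "callback" is the appropriate OnClientXXX function.
function RunSpeechOnRecoFailure(callback) {
  if (callback) callback();   // (1) author-supplied handler, if specified
  RunSpeech();                // (2) continue the dialog
}
```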
Finally, the Confirmation logic works as follows (sketched below):
(1) If the parent QA control of the active answer control contains any confirm controls, the first active confirm control is found (the activation of a confirm control is determined in exactly the same way as the activation of a question control) .
(2) If no active confirm control is found, RunSpeech is called. (3) Else, the QA control is run, with the selected confirm control as the active question control.
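The confirmation step can be sketched as follows. findActive reuses the question-control activation test from the earlier sketch, and runQAWithQuestion is an assumed variant of runQA that forces a particular confirm control to act as the active question control.

```javascript
// Illustrative sketch of the confirmation logic.
function runConfirmation(qa, answer) {
  // (1) Find the first active confirm control, using the same activation
  //     rules as for question controls.
  var confirm = findActive(qa.Confirms || []);

  // (2) No active confirm control: simply resume the dialog.
  if (confirm == null) {
    RunSpeech();
    return;
  }

  // (3) Otherwise re-run the QA control with the confirm control acting
  //     as the active question control (assumed helper).
  runQAWithQuestion(qa, confirm);
}
```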
For multi-modal browsers, only the grammar loading and event dispatching steps are carried out.
Claims
1. A computer readable medium having instructions, which when executed on a computer generate client side markup for a client in a client/server system, the instructions comprising:
a set of controls for rendering, each control having a first set of attributes related to visual rendering and a second set of attributes related to at least one of recognition and audibly prompting.
2. The computer readable medium of claim 1 wherein one of the second set of attributes for one of the controls relates to a grammar to use for recognition.
3. The computer readable medium of claim 2 wherein said one of the second set of attributes provides a reference to a location of the grammar.
4. The computer readable medium of claim 2 wherein the grammar is for one of speech recognition, handwriting recognition, gesture recognition and visual recognition.
5. The computer readable medium of claim 4 wherein the controls relate to one of HTML, XHTML, cHTML, XML and WML.
6. The computer readable medium of claim 1 wherein the controls relate to one of HTML, XHTML, cHTML, XML and WML.
7. The computer readable medium of claim 1 wherein one of the second set of attributes for one of the controls provides instructions related to generating audible output.
8. The computer readable medium of claim 7 wherein the instructions comprise text and the attribute relates to converting the text to audible output.
9. The computer readable medium of claim 1 wherein one of the second set of attributes for one of the controls relates to a location of data for audible output.
10. The computer readable medium of claim 9 wherein the data comprises a prerecorded audio data file and the attribute relates to playing the prerecorded audio data file.
11. The computer readable medium of claim 9 wherein the data comprises text and the attribute relates to converting the text to audible output.
12. A computer readable medium having instructions, which when executed on a computer generate client side markup for a client in a client/server system, the instructions comprising: a first set of visual controls having attributes for visual rendering on the client device; and a second set of controls having attributes related to at least one of recognition and audibly prompting, the second set of controls using at least one of the first set of controls.
13. The computer readable medium of claim 12 wherein one of the attributes for the second set of controls relates to a grammar to use for recognition.
14. The computer readable medium of claim 13 wherein said one of the attributes provides a reference to a location of the grammar.
15. The computer readable medium of claim 13 wherein the grammar is for one of speech recognition, handwriting recognition, gesture recognition and visual recognition.
16. The computer readable medium of claim 15 wherein the first set of controls and the second set of controls relate to one of HTML, XHTML, cHTML, XML and WML.
17. The computer readable medium of claim 12 wherein the controls relate to one of HTML, XHTML, cHTML, XML and WML.
18. The computer readable medium of claim 12 wherein one of the second set of attributes for one of the controls provides instructions related to generating audible output.
19. The computer readable medium of claim 18 wherein the instructions comprise text and the attribute relates to converting the text to audible output.
20. The computer readable medium of claim 12 wherein one of the second set of attributes for one of the controls relates to a location of data for audible output.
21. The computer readable medium of claim 20 wherein the data comprises a prerecorded audio data file and the attribute relates to playing the prerecorded audio data file.
22. The computer readable medium of claim 20 wherein the data comprises text and the attribute relates to converting the text to audible output.
23. A computer readable medium having instructions, which when executed on a computer generate client side markup for a client in a client/server system, the instructions comprising: a first set of visual controls having attributes for visual rendering on the client device; and a second set of controls having attributes related to at least one of recognition and audibly prompting, the second set of controls are selectively associated with the first set of controls.
24. The computer readable medium of claim 23 wherein one of the attributes for the second set of controls relates to a grammar to use for recognition.
25. The computer readable medium of claim 24 wherein said one of the attributes provides a reference to a location of the grammar.
26. The computer readable medium of claim 24 wherein the grammar is for one of speech recognition, handwriting recognition, gesture recognition and visual recognition.
27. The computer readable medium of claim 26 wherein the first set of controls and the second set of controls relate to one of HTML, XHTML, cHTML, XML and WML.
28. The computer readable medium of claim 23 wherein the controls relate to one of HTML, XHTML, cHTML, XML and WML.
29. The computer readable medium of claim 23 wherein one of the second set of attributes for one of the controls provides instructions related to generating audible output.
30. The computer readable medium of claim 29 wherein the instructions comprise text and the attribute relates to converting the text to audible output.
31. The computer readable medium of claim 23 wherein one of the second set of attributes for one of the controls relates to a location of data for audible output.
32. The computer readable medium of claim 31 wherein the data comprises a prerecorded audio data file and the attribute relates to playing the prerecorded audio data file.
33. The computer readable medium of claim 31 wherein the data comprises text and the attribute relates to converting the text to audible output.
34. The computer readable medium of claim 23 wherein one of the attributes of the second set of controls relates to an identifier of the associated control of the first set of controls to form the association.
35. The computer readable medium of claim 23 wherein one of the attributes relates to whether the associated control of the second set is available for activation.
36. The computer readable medium of claim 35 wherein activation relates to generating markup.
37. The computer readable medium of claim 35 wherein activation relates to execution on the client.
38. The computer readable medium of claim 23 wherein the second set of controls activates another control of the second set.
39. The computer readable medium of claim 23 wherein the second set of controls comprise: a question control for generating markup related to audible prompting of a question; and an answer control for generating markup related to a grammar for recognition.
40. The computer readable medium of claim 39 wherein the question control activates the answer control.
41. The computer readable medium of claim 40 wherein the answer control includes a mechanism to associate a received result with one of the first set of controls.
42. The computer readable medium of claim 41 wherein the mechanism includes binding the recognition value.
43. The computer readable medium of claim 42 wherein the mechanism includes issuing an event related to operation of binding.
44. The computer readable medium of claim 40 wherein the second set of controls comprise: a command control for generating markup related to a grammar for one of navigation in the markup, help with a task, and repeating an audible prompt.
45. The computer readable medium of claim 40 wherein the second set of controls comprise: a confirmation control for generating markup related to confirming that a recognized result is correct.
46. The computer readable medium of claim 45 wherein the confirmation control includes an attribute related to the recognized result to be confirmed.
47. The computer readable medium of claim 46 wherein the answer control includes an attribute related to a confidence level.
48. The computer readable medium of claim 46 wherein the confirmation control is activated as a function of a confidence level of a received result.
49. The computer readable medium of claim 48 wherein the confirmation control activates an accept control to accept the recognized result.
50. The computer readable medium of claim 48 wherein the confirmation control activates a deny control to deny the recognized result.
51. The computer readable medium of claim 48 wherein the confirmation control activates a correct control to correct the recognized result.
52. A computer implemented method for defining a website application on a server in a server/client architecture, the website application providing markup to a client for performing recognition and/or audible prompting on the client, the method comprising: defining the website application with a first set of visual controls having attributes for visual rendering on the client device with a second set of controls related to at least one of recognition and audibly prompting; and selectively associating controls of the second set of controls with at least one control of the first set of visual controls.
53. The computer implemented method of claim 52 wherein each of the controls of the second set include an identifier attribute for identifying a control of the first set of visual controls, and wherein associating includes providing an identifier of at least one control of the first set of controls in the corresponding identifier attribute of each of the second set of controls.
54. The computer implemented method of claim 52 wherein the second set of controls includes a question control related to audible prompting of a question, and an answer control related to a grammar for recognition; and wherein defining the website application with a second set of controls related to at least one of recognition and audibly prompting includes associating the answer control with the question control.
55. The computer implemented method of claim 54 wherein the second set of controls includes a confirmation control related to confirming that a recognized result is correct; and wherein defining the website application with a second set of controls related to at least one of recognition and audibly prompting includes associating the confirmation control with a recognized result to be received.
56. The computer implemented method of claim 55 wherein the second set of controls includes a command control for generating markup related to a grammar for one of navigation on the computer, help with a task, and repeating an audible prompt; and wherein defining the website application with a second set of controls related to at least one of recognition and audibly prompting includes associating the command control with a question control.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/046,131 US8229753B2 (en) | 2001-10-21 | 2001-10-21 | Web server controls for web enabled recognition and/or audible prompting |
US10/046,131 | 2001-10-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003036930A1 (en) | 2003-05-01 |
Family
ID=21941785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/033245 WO2003036930A1 (en) | 2001-10-21 | 2002-10-17 | Web server controls for web enabled recognition and/or audible prompting |
Country Status (2)
Country | Link |
---|---|
US (2) | US8229753B2 (en) |
WO (1) | WO2003036930A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007004069A2 (en) * | 2005-06-02 | 2007-01-11 | Texthelp Systems Limited | Client-based speech enabled web content |
EP2277171A1 (en) * | 2008-04-07 | 2011-01-26 | Nuance Communications, Inc. | Automated voice enablement of a web page |
US8311835B2 (en) * | 2003-08-29 | 2012-11-13 | Microsoft Corporation | Assisted multi-modal dialogue |
US11594218B2 (en) * | 2020-09-18 | 2023-02-28 | Servicenow, Inc. | Enabling speech interactions on web-based user interfaces |
Families Citing this family (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7610547B2 (en) * | 2001-05-04 | 2009-10-27 | Microsoft Corporation | Markup language extensions for web enabled recognition |
US7506022B2 (en) * | 2001-05-04 | 2009-03-17 | Microsoft Corporation | Web enabled recognition architecture |
US7409349B2 (en) | 2001-05-04 | 2008-08-05 | Microsoft Corporation | Servers for web enabled speech recognition |
US6985865B1 (en) * | 2001-09-26 | 2006-01-10 | Sprint Spectrum L.P. | Method and system for enhanced response to voice commands in a voice command platform |
US7711570B2 (en) * | 2001-10-21 | 2010-05-04 | Microsoft Corporation | Application abstraction with dialog purpose |
US8229753B2 (en) | 2001-10-21 | 2012-07-24 | Microsoft Corporation | Web server controls for web enabled recognition and/or audible prompting |
JP3542578B2 (en) * | 2001-11-22 | 2004-07-14 | キヤノン株式会社 | Speech recognition apparatus and method, and program |
US8566102B1 (en) * | 2002-03-28 | 2013-10-22 | At&T Intellectual Property Ii, L.P. | System and method of automating a spoken dialogue service |
US7869998B1 (en) | 2002-04-23 | 2011-01-11 | At&T Intellectual Property Ii, L.P. | Voice-enabled dialog system |
US7103551B2 (en) * | 2002-05-02 | 2006-09-05 | International Business Machines Corporation | Computer network including a computer system transmitting screen image information and corresponding speech information to another computer system |
US7698642B1 (en) * | 2002-09-06 | 2010-04-13 | Oracle International Corporation | Method and apparatus for generating prompts |
US20080313282A1 (en) | 2002-09-10 | 2008-12-18 | Warila Bruce W | User interface, operating system and architecture |
US20040090458A1 (en) * | 2002-11-12 | 2004-05-13 | Yu John Chung Wah | Method and apparatus for previewing GUI design and providing screen-to-source association |
US8645122B1 (en) | 2002-12-19 | 2014-02-04 | At&T Intellectual Property Ii, L.P. | Method of handling frequently asked questions in a natural language dialog service |
US7003464B2 (en) * | 2003-01-09 | 2006-02-21 | Motorola, Inc. | Dialog recognition and control in a voice browser |
US7260535B2 (en) * | 2003-04-28 | 2007-08-21 | Microsoft Corporation | Web server controls for web enabled recognition and/or audible prompting for call controls |
US20040230637A1 (en) * | 2003-04-29 | 2004-11-18 | Microsoft Corporation | Application controls for speech enabled recognition |
US20050010892A1 (en) * | 2003-07-11 | 2005-01-13 | Vocollect, Inc. | Method and system for integrating multi-modal data capture device inputs with multi-modal output capabilities |
US20050050093A1 (en) * | 2003-08-29 | 2005-03-03 | International Business Machines Corporation | Customized selection of a voice file for a web page |
US7389235B2 (en) * | 2003-09-30 | 2008-06-17 | Motorola, Inc. | Method and system for unified speech and graphic user interfaces |
US7529815B2 (en) * | 2003-11-24 | 2009-05-05 | Cisco Technology, Inc. | Methods and apparatus supporting configuration in a network |
US7356472B2 (en) * | 2003-12-11 | 2008-04-08 | International Business Machines Corporation | Enabling speech within a multimodal program using markup |
GB2409087A (en) * | 2003-12-12 | 2005-06-15 | Ibm | Computer generated prompting |
US7552055B2 (en) | 2004-01-10 | 2009-06-23 | Microsoft Corporation | Dialog component re-use in recognition systems |
US8160883B2 (en) | 2004-01-10 | 2012-04-17 | Microsoft Corporation | Focus tracking in dialogs |
US8768711B2 (en) * | 2004-06-17 | 2014-07-01 | Nuance Communications, Inc. | Method and apparatus for voice-enabling an application |
US7739117B2 (en) * | 2004-09-20 | 2010-06-15 | International Business Machines Corporation | Method and system for voice-enabled autofill |
US7660431B2 (en) * | 2004-12-16 | 2010-02-09 | Motorola, Inc. | Image recognition facilitation using remotely sourced content |
US7751431B2 (en) * | 2004-12-30 | 2010-07-06 | Motorola, Inc. | Method and apparatus for distributed speech applications |
US7865362B2 (en) * | 2005-02-04 | 2011-01-04 | Vocollect, Inc. | Method and system for considering information about an expected response when performing speech recognition |
US7805300B2 (en) * | 2005-03-21 | 2010-09-28 | At&T Intellectual Property Ii, L.P. | Apparatus and method for analysis of language model changes |
US20060235694A1 (en) * | 2005-04-14 | 2006-10-19 | International Business Machines Corporation | Integrating conversational speech into Web browsers |
US8467506B2 (en) * | 2005-04-21 | 2013-06-18 | The Invention Science Fund I, Llc | Systems and methods for structured voice interaction facilitated by data channel |
US9087507B2 (en) * | 2006-09-15 | 2015-07-21 | Yahoo! Inc. | Aural skimming and scrolling |
US7821941B2 (en) * | 2006-11-03 | 2010-10-26 | Cisco Technology, Inc. | Automatically controlling operation of a BRAS device based on encapsulation information |
EP2095250B1 (en) * | 2006-12-05 | 2014-11-12 | Nuance Communications, Inc. | Wireless server based text to speech email |
US8499276B2 (en) * | 2006-12-28 | 2013-07-30 | Ca, Inc. | Multi-platform graphical user interface |
US8069047B2 (en) * | 2007-02-12 | 2011-11-29 | Nuance Communications, Inc. | Dynamically defining a VoiceXML grammar in an X+V page of a multimodal application |
US8788620B2 (en) * | 2007-04-04 | 2014-07-22 | International Business Machines Corporation | Web service support for a multimodal client processing a multimodal application |
US9274847B2 (en) * | 2007-05-04 | 2016-03-01 | Microsoft Technology Licensing, Llc | Resource management platform |
US20090037170A1 (en) * | 2007-07-31 | 2009-02-05 | Willis Joe Williams | Method and apparatus for voice communication using abbreviated text messages |
US8635069B2 (en) | 2007-08-16 | 2014-01-21 | Crimson Corporation | Scripting support for data identifiers, voice recognition and speech in a telnet session |
US20090100340A1 (en) * | 2007-10-10 | 2009-04-16 | Microsoft Corporation | Associative interface for personalizing voice data access |
US9177551B2 (en) | 2008-01-22 | 2015-11-03 | At&T Intellectual Property I, L.P. | System and method of providing speech processing in user interface |
US8868424B1 (en) * | 2008-02-08 | 2014-10-21 | West Corporation | Interactive voice response data collection object framework, vertical benchmarking, and bootstrapping engine |
US9047869B2 (en) * | 2008-04-07 | 2015-06-02 | Nuance Communications, Inc. | Free form input field support for automated voice enablement of a web page |
US20090275366A1 (en) * | 2008-05-05 | 2009-11-05 | Schilling Donald L | Personal portable communication devices with deployable display systems for three dimensional visual representations and/or privacy and methods of use |
US20110115702A1 (en) * | 2008-07-08 | 2011-05-19 | David Seaberg | Process for Providing and Editing Instructions, Data, Data Structures, and Algorithms in a Computer System |
US8463053B1 (en) | 2008-08-08 | 2013-06-11 | The Research Foundation Of State University Of New York | Enhanced max margin learning on multimodal data mining in a multimedia database |
CN101923856B (en) * | 2009-06-12 | 2012-06-06 | 华为技术有限公司 | Audio identification training processing and controlling method and device |
JP2011123677A (en) * | 2009-12-10 | 2011-06-23 | Canon Inc | Information processing apparatus and control method for the same |
US8407319B1 (en) | 2010-03-24 | 2013-03-26 | Google Inc. | Event-driven module loading |
US8453049B1 (en) * | 2010-05-19 | 2013-05-28 | Google Inc. | Delayed code parsing for reduced startup latency |
US20110313762A1 (en) * | 2010-06-20 | 2011-12-22 | International Business Machines Corporation | Speech output with confidence indication |
US20120089392A1 (en) * | 2010-10-07 | 2012-04-12 | Microsoft Corporation | Speech recognition user interface |
US9081550B2 (en) * | 2011-02-18 | 2015-07-14 | Nuance Communications, Inc. | Adding speech capabilities to existing computer applications with complex graphical user interfaces |
US8954317B1 (en) | 2011-07-01 | 2015-02-10 | West Corporation | Method and apparatus of processing user text input information |
US8798995B1 (en) | 2011-09-23 | 2014-08-05 | Amazon Technologies, Inc. | Key word determinations from voice data |
US9711137B2 (en) * | 2011-11-10 | 2017-07-18 | At&T Intellectual Property I, Lp | Network-based background expert |
US8577671B1 (en) * | 2012-07-20 | 2013-11-05 | Veveo, Inc. | Method of and system for using conversation state information in a conversational interaction system |
US9465833B2 (en) | 2012-07-31 | 2016-10-11 | Veveo, Inc. | Disambiguating user intent in conversational interaction system for large corpus information retrieval |
US9799328B2 (en) | 2012-08-03 | 2017-10-24 | Veveo, Inc. | Method for using pauses detected in speech input to assist in interpreting the input during conversational interaction for information retrieval |
KR102150289B1 (en) * | 2012-08-30 | 2020-09-01 | 삼성전자주식회사 | User interface appratus in a user terminal and method therefor |
US10031968B2 (en) | 2012-10-11 | 2018-07-24 | Veveo, Inc. | Method for adaptive conversation state management with filtering operators applied dynamically as part of a conversational interface |
US10381001B2 (en) | 2012-10-30 | 2019-08-13 | Google Technology Holdings LLC | Voice control user interface during low-power mode |
US9584642B2 (en) | 2013-03-12 | 2017-02-28 | Google Technology Holdings LLC | Apparatus with adaptive acoustic echo control for speakerphone mode |
US10373615B2 (en) | 2012-10-30 | 2019-08-06 | Google Technology Holdings LLC | Voice control user interface during low power mode |
US10304465B2 (en) | 2012-10-30 | 2019-05-28 | Google Technology Holdings LLC | Voice control user interface for low power mode |
KR101995428B1 (en) * | 2012-11-20 | 2019-07-02 | 엘지전자 주식회사 | Mobile terminal and method for controlling thereof |
US9390166B2 (en) * | 2012-12-31 | 2016-07-12 | Fujitsu Limited | Specific online resource identification and extraction |
PT2994908T (en) | 2013-05-07 | 2019-10-18 | Veveo Inc | Incremental speech input interface with real time feedback |
US9495965B2 (en) * | 2013-09-20 | 2016-11-15 | American Institutes For Research | Synthesis and display of speech commands method and system |
US8768712B1 (en) | 2013-12-04 | 2014-07-01 | Google Inc. | Initiating actions based on partial hotwords |
US9251139B2 (en) * | 2014-04-08 | 2016-02-02 | TitleFlow LLC | Natural language processing for extracting conveyance graphs |
US9645703B2 (en) * | 2014-05-14 | 2017-05-09 | International Business Machines Corporation | Detection of communication topic change |
US10033797B1 (en) | 2014-08-20 | 2018-07-24 | Ivanti, Inc. | Terminal emulation over HTML |
US9953646B2 (en) | 2014-09-02 | 2018-04-24 | Belleau Technologies | Method and system for dynamic speech recognition and tracking of prewritten script |
US20160092159A1 (en) * | 2014-09-30 | 2016-03-31 | Google Inc. | Conversational music agent |
US9401949B1 (en) * | 2014-11-21 | 2016-07-26 | Instart Logic, Inc. | Client web content cache purge |
US9852136B2 (en) | 2014-12-23 | 2017-12-26 | Rovi Guides, Inc. | Systems and methods for determining whether a negation statement applies to a current or past query |
US9854049B2 (en) | 2015-01-30 | 2017-12-26 | Rovi Guides, Inc. | Systems and methods for resolving ambiguous terms in social chatter based on a user profile |
CN104683456B (en) * | 2015-02-13 | 2017-06-23 | 腾讯科技(深圳)有限公司 | Method for processing business, server and terminal |
US20160321285A1 (en) * | 2015-05-02 | 2016-11-03 | Mohammad Faraz RASHID | Method for organizing and distributing data |
US9666192B2 (en) * | 2015-05-26 | 2017-05-30 | Nuance Communications, Inc. | Methods and apparatus for reducing latency in speech recognition applications |
US10559303B2 (en) * | 2015-05-26 | 2020-02-11 | Nuance Communications, Inc. | Methods and apparatus for reducing latency in speech recognition applications |
US10614162B2 (en) * | 2016-05-27 | 2020-04-07 | Ricoh Company, Ltd. | Apparatus, system, and method of assisting information sharing, and recording medium |
US11100278B2 (en) | 2016-07-28 | 2021-08-24 | Ivanti, Inc. | Systems and methods for presentation of a terminal application screen |
JP7202853B2 (en) * | 2018-11-08 | 2023-01-12 | シャープ株式会社 | refrigerator |
DE102021119682A1 (en) | 2021-07-29 | 2023-02-02 | Audi Aktiengesellschaft | System and method for voice communication with a motor vehicle |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6269336B1 (en) * | 1998-07-24 | 2001-07-31 | Motorola, Inc. | Voice browser for interactive services and methods thereof |
Family Cites Families (146)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4831550A (en) * | 1986-03-27 | 1989-05-16 | International Business Machines Corporation | Apparatus and method for estimating, from sparse data, the probability that a particular one of a set of events is the next event in a string of events |
DE3723078A1 (en) * | 1987-07-11 | 1989-01-19 | Philips Patentverwaltung | METHOD FOR DETECTING CONTINUOUSLY SPOKEN WORDS |
DE3739681A1 (en) * | 1987-11-24 | 1989-06-08 | Philips Patentverwaltung | METHOD FOR DETERMINING START AND END POINT ISOLATED SPOKEN WORDS IN A VOICE SIGNAL AND ARRANGEMENT FOR IMPLEMENTING THE METHOD |
US5263117A (en) | 1989-10-26 | 1993-11-16 | International Business Machines Corporation | Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer |
US5303327A (en) * | 1991-07-02 | 1994-04-12 | Duke University | Communication test system |
US5477451A (en) | 1991-07-25 | 1995-12-19 | International Business Machines Corp. | Method and system for natural language translation |
EP0543329B1 (en) | 1991-11-18 | 2002-02-06 | Kabushiki Kaisha Toshiba | Speech dialogue system for facilitating human-computer interaction |
US5502774A (en) * | 1992-06-09 | 1996-03-26 | International Business Machines Corporation | Automatic recognition of a consistent message using multiple complimentary sources of information |
US5384892A (en) * | 1992-12-31 | 1995-01-24 | Apple Computer, Inc. | Dynamic language model for speech recognition |
CA2115210C (en) | 1993-04-21 | 1997-09-23 | Joseph C. Andreshak | Interactive computer system recognizing spoken commands |
DE69423838T2 (en) | 1993-09-23 | 2000-08-03 | Xerox Corp., Rochester | Semantic match event filtering for speech recognition and signal translation applications |
US5566272A (en) | 1993-10-27 | 1996-10-15 | Lucent Technologies Inc. | Automatic speech recognition (ASR) processing using confidence measures |
US5615296A (en) * | 1993-11-12 | 1997-03-25 | International Business Machines Corporation | Continuous speech recognition and voice response system and method to enable conversational dialogues with microprocessors |
US5699456A (en) | 1994-01-21 | 1997-12-16 | Lucent Technologies Inc. | Large vocabulary connected speech recognition system and method of language representation using evolutional grammar to represent context free grammars |
US5675819A (en) | 1994-06-16 | 1997-10-07 | Xerox Corporation | Document information retrieval using global word co-occurrence patterns |
US5752052A (en) * | 1994-06-24 | 1998-05-12 | Microsoft Corporation | Method and system for bootstrapping statistical processing into a rule-based natural language parser |
US6442523B1 (en) * | 1994-07-22 | 2002-08-27 | Steven H. Siegel | Method for the auditory navigation of text |
US5689617A (en) | 1995-03-14 | 1997-11-18 | Apple Computer, Inc. | Speech recognition system which returns recognition results as a reconstructed language model with attached data values |
IT1279171B1 (en) * | 1995-03-17 | 1997-12-04 | Ist Trentino Di Cultura | CONTINUOUS SPEECH RECOGNITION SYSTEM |
US6965864B1 (en) * | 1995-04-10 | 2005-11-15 | Texas Instruments Incorporated | Voice activated hypermedia systems using grammatical metadata |
US5774628A (en) * | 1995-04-10 | 1998-06-30 | Texas Instruments Incorporated | Speaker-independent dynamic vocabulary and grammar in speech recognition |
US5710866A (en) * | 1995-05-26 | 1998-01-20 | Microsoft Corporation | System and method for speech recognition using dynamically adjusted confidence measure |
US5890123A (en) * | 1995-06-05 | 1999-03-30 | Lucent Technologies, Inc. | System and method for voice controlled video screen display |
US5680511A (en) | 1995-06-07 | 1997-10-21 | Dragon Systems, Inc. | Systems and methods for word recognition |
US5737489A (en) * | 1995-09-15 | 1998-04-07 | Lucent Technologies Inc. | Discriminative utterance verification for connected digits recognition |
JP3126985B2 (en) * | 1995-11-04 | 2001-01-22 | インターナシヨナル・ビジネス・マシーンズ・コーポレーション | Method and apparatus for adapting the size of a language model of a speech recognition system |
GB9603582D0 (en) * | 1996-02-20 | 1996-04-17 | Hewlett Packard Co | Method of accessing service resource items that are for use in a telecommunications system |
KR100208772B1 (en) * | 1996-01-17 | 1999-07-15 | 서정욱 | Interactive system for guiding the blind and its control method |
US5913193A (en) * | 1996-04-30 | 1999-06-15 | Microsoft Corporation | Method and system of runtime acoustic unit selection for speech synthesis |
JP2753577B2 (en) * | 1996-04-30 | 1998-05-20 | 工業技術院長 | Silicon nitride porous body composed of oriented columnar particles and method for producing the same |
US5937384A (en) * | 1996-05-01 | 1999-08-10 | Microsoft Corporation | Method and system for speech recognition using continuous density hidden Markov models |
US5835888A (en) | 1996-06-10 | 1998-11-10 | International Business Machines Corporation | Statistical language model for inflected languages |
US5905972A (en) * | 1996-09-30 | 1999-05-18 | Microsoft Corporation | Prosodic databases holding fundamental frequency templates for use in speech synthesis |
US5819220A (en) | 1996-09-30 | 1998-10-06 | Hewlett-Packard Company | Web triggered word set boosting for speech interfaces to the world wide web |
US5797123A (en) | 1996-10-01 | 1998-08-18 | Lucent Technologies Inc. | Method of key-phase detection and verification for flexible speech understanding |
US5829000A (en) | 1996-10-31 | 1998-10-27 | Microsoft Corporation | Method and system for correcting misrecognized spoken words or phrases |
US5915001A (en) | 1996-11-14 | 1999-06-22 | Vois Corporation | System and method for providing and using universally accessible voice and speech data files |
US5960399A (en) | 1996-12-24 | 1999-09-28 | Gte Internetworking Incorporated | Client/server speech processor/recognizer |
US6456974B1 (en) | 1997-01-06 | 2002-09-24 | Texas Instruments Incorporated | System and method for adding speech recognition capabilities to java |
US6188985B1 (en) * | 1997-01-06 | 2001-02-13 | Texas Instruments Incorporated | Wireless voice-activated device for control of a processor-based host system |
ATE269575T1 (en) * | 1997-01-27 | 2004-07-15 | Entropic Res Lab Inc | A SYSTEM AND METHOD FOR PROSODY ADJUSTMENT |
GB9701866D0 (en) | 1997-01-30 | 1997-03-19 | British Telecomm | Information retrieval |
DE19708183A1 (en) | 1997-02-28 | 1998-09-03 | Philips Patentverwaltung | Method for speech recognition with language model adaptation |
US6078886A (en) * | 1997-04-14 | 2000-06-20 | At&T Corporation | System and method for providing remote automatic speech recognition services via a packet network |
US6101472A (en) * | 1997-04-16 | 2000-08-08 | International Business Machines Corporation | Data processing system and method for navigating a network using a voice command |
US6363301B1 (en) | 1997-06-04 | 2002-03-26 | Nativeminds, Inc. | System and method for automatically focusing the attention of a virtual robot interacting with users |
US6073091A (en) * | 1997-08-06 | 2000-06-06 | International Business Machines Corporation | Apparatus and method for forming a filtered inflected language model for automatic speech recognition |
US6192338B1 (en) * | 1997-08-12 | 2001-02-20 | At&T Corp. | Natural language knowledge servers as network resources |
US6154722A (en) | 1997-12-18 | 2000-11-28 | Apple Computer, Inc. | Method and apparatus for a speech recognition system language model that integrates a finite state grammar probability and an N-gram probability |
US6138139A (en) * | 1998-10-29 | 2000-10-24 | Genesys Telecommunications Laboraties, Inc. | Method and apparatus for supporting diverse interaction paths within a multimedia communication center |
US6182039B1 (en) * | 1998-03-24 | 2001-01-30 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus using probabilistic language model based on confusable sets for speech recognition |
US6141641A (en) | 1998-04-15 | 2000-10-31 | Microsoft Corporation | Dynamically configurable acoustic model for speech recognition system |
US6610917B2 (en) * | 1998-05-15 | 2003-08-26 | Lester F. Ludwig | Activity indication, external source, and processing loop provisions for driven vibrating-element environments |
US6689947B2 (en) * | 1998-05-15 | 2004-02-10 | Lester Frank Ludwig | Real-time floor controller for control of music, signal processing, mixing, video, lighting, and other systems |
US6499013B1 (en) | 1998-09-09 | 2002-12-24 | One Voice Technologies, Inc. | Interactive user interface using speech recognition and natural language processing |
US6434524B1 (en) | 1998-09-09 | 2002-08-13 | One Voice Technologies, Inc. | Object interactive user interface using speech recognition and natural language processing |
US6405170B1 (en) * | 1998-09-22 | 2002-06-11 | Speechworks International, Inc. | Method and system of reviewing the behavior of an interactive speech recognition application |
US6539359B1 (en) * | 1998-10-02 | 2003-03-25 | Motorola, Inc. | Markup language for interactive services and methods thereof |
US7003463B1 (en) * | 1998-10-02 | 2006-02-21 | International Business Machines Corporation | System and method for providing network coordinated conversational services |
US6587822B2 (en) * | 1998-10-06 | 2003-07-01 | Lucent Technologies Inc. | Web-based platform for interactive voice response (IVR) |
US6188976B1 (en) * | 1998-10-23 | 2001-02-13 | International Business Machines Corporation | Apparatus and method for building domain-specific language models |
CA2287768C (en) * | 1998-11-02 | 2004-01-13 | Ahmed Abdoh | Method for automated data collection, analysis and reporting |
US6564263B1 (en) * | 1998-12-04 | 2003-05-13 | International Business Machines Corporation | Multimedia content description framework |
US6718015B1 (en) * | 1998-12-16 | 2004-04-06 | International Business Machines Corporation | Remote web page reader |
US6909874B2 (en) * | 2000-04-12 | 2005-06-21 | Thomson Licensing Sa. | Interactive tutorial method, system, and computer program product for real time media production |
US6445776B1 (en) * | 1998-12-31 | 2002-09-03 | Nortel Networks Limited | Abstract interface for media and telephony services |
DE19910236A1 (en) * | 1999-03-09 | 2000-09-21 | Philips Corp Intellectual Pty | Speech recognition method |
US6526380B1 (en) * | 1999-03-26 | 2003-02-25 | Koninklijke Philips Electronics N.V. | Speech recognition system having parallel large vocabulary recognition engines |
US6463413B1 (en) | 1999-04-20 | 2002-10-08 | Matsushita Electrical Industrial Co., Ltd. | Speech recognition training for small hardware devices |
US6314402B1 (en) | 1999-04-23 | 2001-11-06 | Nuance Communications | Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system |
US6081799A (en) * | 1999-05-05 | 2000-06-27 | International Business Machines Corporation | Executing complex SQL queries using index screening for conjunct or disjunct index operations |
US6604075B1 (en) * | 1999-05-20 | 2003-08-05 | Lucent Technologies Inc. | Web-based voice dialog interface |
US6240391B1 (en) * | 1999-05-25 | 2001-05-29 | Lucent Technologies Inc. | Method and apparatus for assembling and presenting structured voicemail messages |
US6312378B1 (en) * | 1999-06-03 | 2001-11-06 | Cardiac Intelligence Corporation | System and method for automated collection and analysis of patient information retrieved from an implantable medical device for remote patient care |
US6978238B2 (en) * | 1999-07-12 | 2005-12-20 | Charles Schwab & Co., Inc. | Method and system for identifying a user by voice |
US6493719B1 (en) * | 1999-07-26 | 2002-12-10 | Microsoft Corporation | Method and system for scripting for system management information |
US6311151B1 (en) * | 1999-07-28 | 2001-10-30 | International Business Machines Corporation | System, program, and method for performing contextual software translations |
US6365203B2 (en) * | 1999-08-16 | 2002-04-02 | Warner-Lambert Company | Continuous coating of chewing gum materials |
US6453290B1 (en) | 1999-10-04 | 2002-09-17 | Globalenglish Corporation | Method and system for network-based speech recognition |
JP4140878B2 (en) | 1999-10-12 | 2008-08-27 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Method and system for implementing multimodal browsing and conversational markup languages |
GB9926134D0 (en) | 1999-11-05 | 2000-01-12 | Ibm | Interactive voice response system |
US6384829B1 (en) * | 1999-11-24 | 2002-05-07 | Fuji Xerox Co., Ltd. | Streamlined architecture for embodied conversational characters with reduced message traffic |
US6349132B1 (en) * | 1999-12-16 | 2002-02-19 | Talk2 Technology, Inc. | Voice interface for electronic documents |
GB9930731D0 (en) | 1999-12-22 | 2000-02-16 | Ibm | Voice processing apparatus |
US6785649B1 (en) * | 1999-12-29 | 2004-08-31 | International Business Machines Corporation | Text formatting from speech |
US6690772B1 (en) * | 2000-02-07 | 2004-02-10 | Verizon Services Corp. | Voice dialing using speech models generated from text and/or speech |
EP1275042A2 (en) * | 2000-03-06 | 2003-01-15 | Kanisa Inc. | A system and method for providing an intelligent multi-step dialog with a user |
US20020035474A1 (en) * | 2000-07-18 | 2002-03-21 | Ahmet Alpdemir | Voice-interactive marketplace providing time and money saving benefits and real-time promotion publishing and feedback |
US6662163B1 (en) | 2000-03-30 | 2003-12-09 | Voxware, Inc. | System and method for programming portable devices from a remote computer system |
US6785653B1 (en) * | 2000-05-01 | 2004-08-31 | Nuance Communications | Distributed voice web architecture and associated components and methods |
US20020003547A1 (en) * | 2000-05-19 | 2002-01-10 | Zhi Wang | System and method for transcoding information for an audio or limited display user interface |
US20020010584A1 (en) * | 2000-05-24 | 2002-01-24 | Schultz Mitchell Jay | Interactive voice communication method and system for information and entertainment |
US6865528B1 (en) | 2000-06-01 | 2005-03-08 | Microsoft Corporation | Use of a unified language model |
TW472232B (en) * | 2000-08-11 | 2002-01-11 | Ind Tech Res Inst | Probability-base fault-tolerance natural language understanding method |
US6717593B1 (en) * | 2000-09-12 | 2004-04-06 | Avaya Technology Corp. | Mark-up language implementation of graphical or non-graphical user interfaces |
US6785651B1 (en) | 2000-09-14 | 2004-08-31 | Microsoft Corporation | Method and apparatus for performing plan-based dialog |
US6745163B1 (en) * | 2000-09-27 | 2004-06-01 | International Business Machines Corporation | Method and system for synchronizing audio and visual presentation in a multi-modal content renderer |
US20020077823A1 (en) * | 2000-10-13 | 2002-06-20 | Andrew Fox | Software development systems and methods |
US6728679B1 (en) * | 2000-10-30 | 2004-04-27 | Koninklijke Philips Electronics N.V. | Self-updating user interface/entertainment device that simulates personal interaction |
US6950850B1 (en) * | 2000-10-31 | 2005-09-27 | International Business Machines Corporation | System and method for dynamic runtime partitioning of model-view-controller applications |
JP3581648B2 (en) | 2000-11-27 | 2004-10-27 | キヤノン株式会社 | Speech recognition system, information processing device, control method thereof, and program |
US7487440B2 (en) | 2000-12-04 | 2009-02-03 | International Business Machines Corporation | Reusable voiceXML dialog components, subdialogs and beans |
US7028306B2 (en) * | 2000-12-04 | 2006-04-11 | International Business Machines Corporation | Systems and methods for implementing modular DOM (Document Object Model)-based multi-modal browsers |
US7203651B2 (en) * | 2000-12-07 | 2007-04-10 | Art-Advanced Recognition Technologies, Ltd. | Voice control system with multiple voice recognition engines |
GB0030330D0 (en) | 2000-12-13 | 2001-01-24 | Hewlett Packard Co | Idiom handling in voice service systems |
US20020107891A1 (en) * | 2001-02-06 | 2002-08-08 | Leamon Andrew P. | Device-independent content acquisition and presentation |
US7062437B2 (en) | 2001-02-13 | 2006-06-13 | International Business Machines Corporation | Audio renderings for expressing non-audio nuances |
GB0104120D0 (en) * | 2001-02-20 | 2001-04-11 | Pace Micro Tech Plc | Remote control |
US20020154124A1 (en) * | 2001-02-22 | 2002-10-24 | Han Sang-Yong | System and method of enhanced computer user interaction |
US20020173961A1 (en) | 2001-03-09 | 2002-11-21 | Guerra Lisa M. | System, method and computer program product for dynamic, robust and fault tolerant audio output in a speech recognition framework |
AU2002251205A1 (en) | 2001-03-30 | 2002-10-15 | British Telecommunications Public Limited Company | Multi-modal interface |
US7778816B2 (en) | 2001-04-24 | 2010-08-17 | Microsoft Corporation | Method and system for applying input mode bias |
CN1266625C (en) | 2001-05-04 | 2006-07-26 | 微软公司 | Server for identifying WEB invocation |
US7409349B2 (en) * | 2001-05-04 | 2008-08-05 | Microsoft Corporation | Servers for web enabled speech recognition |
US7610547B2 (en) | 2001-05-04 | 2009-10-27 | Microsoft Corporation | Markup language extensions for web enabled recognition |
US7506022B2 (en) * | 2001-05-04 | 2009-03-17 | Microsoft Corporation | Web enabled recognition architecture |
CN1279465C (en) | 2001-05-04 | 2006-10-11 | 微软公司 | Identifying system structure of WEB invocation |
US7020841B2 (en) * | 2001-06-07 | 2006-03-28 | International Business Machines Corporation | System and method for generating and presenting multi-modal applications from intent-based markup scripts |
US6941268B2 (en) * | 2001-06-21 | 2005-09-06 | Tellme Networks, Inc. | Handling of speech recognition in a declarative markup language |
US6801604B2 (en) * | 2001-06-25 | 2004-10-05 | International Business Machines Corporation | Universal IP-based and scalable architectures across conversational applications using web services for speech and audio processing resources |
US6839896B2 (en) * | 2001-06-29 | 2005-01-04 | International Business Machines Corporation | System and method for providing dialog management and arbitration in a multi-modal environment |
US6868383B1 (en) * | 2001-07-12 | 2005-03-15 | At&T Corp. | Systems and methods for extracting meaning from multimodal inputs using finite-state devices |
US20020010715A1 (en) * | 2001-07-26 | 2002-01-24 | Garry Chinn | System and method for browsing using a limited display device |
CA2397451A1 (en) * | 2001-08-15 | 2003-02-15 | At&T Corp. | Systems and methods for classifying and representing gestural inputs |
US8229753B2 (en) * | 2001-10-21 | 2012-07-24 | Microsoft Corporation | Web server controls for web enabled recognition and/or audible prompting |
US7711570B2 (en) * | 2001-10-21 | 2010-05-04 | Microsoft Corporation | Application abstraction with dialog purpose |
US6941265B2 (en) | 2001-12-14 | 2005-09-06 | Qualcomm Inc | Voice recognition system method and apparatus |
US7610556B2 (en) * | 2001-12-28 | 2009-10-27 | Microsoft Corporation | Dialog manager for interactive dialog with computer user |
US7546382B2 (en) | 2002-05-28 | 2009-06-09 | International Business Machines Corporation | Methods and systems for authoring of mixed-initiative multi-modal interactions and related browsing mechanisms |
US7640164B2 (en) | 2002-07-04 | 2009-12-29 | Denso Corporation | System for performing interactive dialog |
US7302383B2 (en) * | 2002-09-12 | 2007-11-27 | Luis Calixto Valles | Apparatus and methods for developing conversational applications |
US7257575B1 (en) * | 2002-10-24 | 2007-08-14 | At&T Corp. | Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs |
US20040192273A1 (en) | 2003-01-02 | 2004-09-30 | Auyeung Al T. | Speed answers to voice prompts |
US7003464B2 (en) * | 2003-01-09 | 2006-02-21 | Motorola, Inc. | Dialog recognition and control in a voice browser |
US7260535B2 (en) * | 2003-04-28 | 2007-08-21 | Microsoft Corporation | Web server controls for web enabled recognition and/or audible prompting for call controls |
US20040230637A1 (en) | 2003-04-29 | 2004-11-18 | Microsoft Corporation | Application controls for speech enabled recognition |
US8311835B2 (en) * | 2003-08-29 | 2012-11-13 | Microsoft Corporation | Assisted multi-modal dialogue |
US7363027B2 (en) * | 2003-11-11 | 2008-04-22 | Microsoft Corporation | Sequential multimodal input |
US7158779B2 (en) * | 2003-11-11 | 2007-01-02 | Microsoft Corporation | Sequential multimodal input |
US7660400B2 (en) * | 2003-12-19 | 2010-02-09 | At&T Intellectual Property Ii, L.P. | Method and apparatus for automatically building conversational systems |
US7552055B2 (en) * | 2004-01-10 | 2009-06-23 | Microsoft Corporation | Dialog component re-use in recognition systems |
US8160883B2 (en) * | 2004-01-10 | 2012-04-17 | Microsoft Corporation | Focus tracking in dialogs |
US7805704B2 (en) * | 2005-03-08 | 2010-09-28 | Microsoft Corporation | Development framework for mixing semantics-driven and state-driven dialog |
US7853453B2 (en) | 2005-06-30 | 2010-12-14 | Microsoft Corporation | Analyzing dialog between a user and an interactive application |
US7873523B2 (en) * | 2005-06-30 | 2011-01-18 | Microsoft Corporation | Computer implemented method of analyzing recognition results between a user and an interactive application utilizing inferred values instead of transcribed speech |
US7814501B2 (en) * | 2006-03-17 | 2010-10-12 | Microsoft Corporation | Application execution in a network based environment |
- 2001-10-21 US US10/046,131 patent/US8229753B2/en not_active Expired - Fee Related
- 2002-10-17 WO PCT/US2002/033245 patent/WO2003036930A1/en not_active Application Discontinuation
- 2003-04-28 US US10/426,057 patent/US8224650B2/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6269336B1 (en) * | 1998-07-24 | 2001-07-31 | Motorola, Inc. | Voice browser for interactive services and methods thereof |
Non-Patent Citations (1)
Title |
---|
W3C (MIT ET AL: "Grammar Representation Requirements for Voice Markup Languages", W3C WORKING DRAFT, 23 December 1999 (1999-12-23), XP002204634 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8311835B2 (en) * | 2003-08-29 | 2012-11-13 | Microsoft Corporation | Assisted multi-modal dialogue |
WO2007004069A2 (en) * | 2005-06-02 | 2007-01-11 | Texthelp Systems Limited | Client-based speech enabled web content |
WO2007004069A3 (en) * | 2005-06-02 | 2007-07-12 | Texthelp Systems Ltd | Client-based speech enabled web content |
EP2277171A1 (en) * | 2008-04-07 | 2011-01-26 | Nuance Communications, Inc. | Automated voice enablement of a web page |
US11594218B2 (en) * | 2020-09-18 | 2023-02-28 | Servicenow, Inc. | Enabling speech interactions on web-based user interfaces |
Also Published As
Publication number | Publication date |
---|---|
US8224650B2 (en) | 2012-07-17 |
US20040113908A1 (en) | 2004-06-17 |
US8229753B2 (en) | 2012-07-24 |
US20030200080A1 (en) | 2003-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8229753B2 (en) | Web server controls for web enabled recognition and/or audible prompting | |
US7711570B2 (en) | Application abstraction with dialog purpose | |
US7260535B2 (en) | Web server controls for web enabled recognition and/or audible prompting for call controls | |
US8311835B2 (en) | Assisted multi-modal dialogue | |
US8160883B2 (en) | Focus tracking in dialogs | |
US7409349B2 (en) | Servers for web enabled speech recognition | |
US7506022B2 (en) | Web enabled recognition architecture | |
US7610547B2 (en) | Markup language extensions for web enabled recognition | |
US7552055B2 (en) | Dialog component re-use in recognition systems | |
US20040230637A1 (en) | Application controls for speech enabled recognition | |
US7853453B2 (en) | Analyzing dialog between a user and an interactive application | |
US7873523B2 (en) | Computer implemented method of analyzing recognition results between a user and an interactive application utilizing inferred values instead of transcribed speech | |
CA2467220C (en) | Semantic object synchronous understanding implemented with speech application language tags | |
US20070006082A1 (en) | Speech application instrumentation and logging | |
US20020178182A1 (en) | Markup language extensions for web enabled recognition | |
EP1255193B1 (en) | Web enabled speech recognition | |
EP1255192B1 (en) | Web enabled recognition architecture | |
EP2128757A2 (en) | Markup language extensions for web enabled recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| 122 | Ep: pct application non-entry in european phase | |
| NENP | Non-entry into the national phase | Ref country code: JP |
| WWW | Wipo information: withdrawn in national office | Country of ref document: JP |