EP1733383B1 - A method for driving multiple applications and a dialog management system
- Publication number
- EP1733383B1 (application EP05709048A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sub
- application
- auditory
- dialog
- management system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Not-in-force
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Definitions
- This invention relates in general to a method for driving multiple applications by a common, at least partially speech-based, dialog management system and to a dialog management system for driving multiple applications.
- dialog management systems are based on the display of visual information and manual interaction on the part of the user. For instance, a user can enter into a dialog or dialog flow with a personal digital assistant in order to plan appointments or read incoming mails.
- the dialog can be carried out by the dialog management system issuing prompts to which the user responds by means of a pen or keyboard input.
- Such an application can be requested by the user to report events which are occurring or which will occur in the near future. For example, the personal digital assistant can remind the user of an upcoming appointment or important date.
- the reminder might be graphically presented on a display, and accompanied by an audible reminder such as a beep, ping or similar artificial sound, to attract the user's attention and remind him to look at the display to see the message or reminder conveyed by the application.
- the same type of beep or ping might be used as a general attention-getting device, or several different types of sound might be used to indicate different types of events.
- Such a beep is commonly referred to, in a play on words, as an "earcon", being the audible equivalent of an icon.
- An at least partially speech-based dialog management system however allows a user to enter into a one-way or two-way spoken dialog with an application.
- the user can issue spoken commands and receive visual and/or audible feedback from the dialog system.
- One such example might be a home electronics management system, where the user issues spoken commands to activate a device e.g. the video recorder.
- Another example might be the operation of a navigation device or another device in a vehicle in which the user asks questions of or directs commands at the device, which gives a response or asks a question in return.
- More advanced dialog management systems can issue spoken prompts and interpret spoken user input.
- US 6,513,009 B1 describes a system in which distinct voices are used to supply synthesised spoken feedback to a user. The user can identify the application from which the feedback originates on the basis of the voice characteristics. However, such spoken feedback can be irritating, even when limited to terse phrases, especially if the dialog management system is driving a number of applications simultaneously.
- the dialog management system is controlling the dialog between a personal digital assistant, a personal computer, a telephone, a home entertainment system and a news and weather service
- the user might be continually bombarded with speech feedback like "Incoming call from Mr. So-and-so", "Weather is set to stay fine", "The match between Bayern München and Real Madrid is due to start in 5 minutes on channel XYZ - shall I record it?", "Check-up due at dentist in the next two weeks - do you want an appointment?" and "Internet connection timeout after 5 minutes", etc.
- the user might eventually be driven to distraction by the volume of messages being output, even though the messages are relevant and the information has been specifically requested.
- an object of the present invention is to provide an easy and inexpensive method for ensuring comfortable and uncomplicated distinction by the user between different applications with which he is interacting using a common dialog management system and in particular to ensure that the user will not issue a command intended for one application to another by mistake
- the present invention provides a method for driving numerous applications by a common dialog management system where a unique set of auditory icons is assigned to each application, and where the common dialog management system informs a user of the status of an application by audible playback, at a specific point in a dialog flow, of a relevant auditory icon selected from among the unique set of auditory icons of the application, and wherein an application submits a set of auditory icons and associated instructions concerning the use thereof to the dialog management system, and/or the dialog management system supplies an application with a unique set of auditory icons by modifying non-unique auditory icons in a set of auditory icons of the application and/or choosing unique auditory icons for the application from a collection of auditory icons.
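- The assignment and playback scheme described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the class name `IconRegistry` and the renaming of clashing icons (standing in for modifying the actual sound) are assumptions:

```python
class IconRegistry:
    """Keeps each application's set of auditory icons unique across the system."""

    def __init__(self):
        self._icons = {}      # application name -> {feedback type -> icon id}
        self._in_use = set()  # icon ids already assigned, to enforce uniqueness

    def register(self, app, icon_map):
        """Accept an application's submitted icon set, modifying non-unique icons."""
        unique = {}
        for feedback, icon in icon_map.items():
            while icon in self._in_use:
                icon = icon + "'"   # stand-in for modifying a clashing icon
            self._in_use.add(icon)
            unique[feedback] = icon
        self._icons[app] = unique
        return unique

    def icon_for(self, app, feedback):
        """Select the relevant auditory icon for an application's feedback type."""
        return self._icons[app].get(feedback)
```

- In this sketch, the second application to submit an icon already in use receives a modified variant, so the user never hears the same icon from two applications.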
- An “auditory icon” can be any type of sound or dedicated sound chunk used to describe a particular type of feedback from the application, such as an artificial short sound chunk (earcon) or a sound chunk resembling a real-world sound, such as a recording of a relevant sound.
- a dialog management system comprises an input detection arrangement for detecting user input to the system, a sound output arrangement for outputting audible prompts, a core dialog engine for coordinating a dialog flow by interpreting user input and generating output prompts, an application interface for communication between the dialog management system and the applications, a source of unique sets of auditory icons assigned to the applications, and an auditory icon management unit for selecting relevant auditory icons from the unique sets of auditory icons corresponding to the applications for playback at appropriate points in the dialog flow.
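- The components listed above can be gathered into a skeletal structure as follows; all class and attribute names are illustrative stand-ins, not taken from the patent:

```python
class DialogManagementSystem:
    """Skeleton holding the components named in the description above."""

    def __init__(self, input_detection, sound_output, core_dialog_engine,
                 application_interface, icon_sets, icon_manager):
        self.input_detection = input_detection            # e.g. microphone
        self.sound_output = sound_output                  # e.g. loudspeaker(s)
        self.core_dialog_engine = core_dialog_engine      # interprets input, generates prompts
        self.application_interface = application_interface  # links to the applications
        self.icon_sets = icon_sets                        # unique icon set per application
        self.icon_manager = icon_manager                  # selects icons for playback
```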
- the dialog management system is realised to obtain a set of auditory icons and associated instructions concerning the use thereof from an application and/or to supply an application with a unique set of auditory icons by modifying non-unique auditory icons in a set of auditory icons of the application and/or by choosing unique auditory icons for the application from a collection of auditory icons.
- the dialog management system is able to provide an application with any auditory icons which it might require.
- the dialog management system might choose a set of suitable auditory icons from a selection available, and assign these to the application.
- should two or more applications have similar or identical auditory icons in their repertoire, these auditory icons might be modified by the dialog management system in some way, or might be replaced by different, equally suitable auditory icons.
- by using a common dialog management system to drive numerous applications, the user can easily distinguish between the different types of feedback from the different applications. Since each type of feedback reported back from an application is accompanied by a unique meaningful audible sound, easily associated by the user with the corresponding application, the user does not run the risk of becoming confused, and will not mistake one type of feedback for another.
- the unique auditory icons keep the user constantly informed about the application with which he is currently interacting. This ensures that the user cannot issue a command intended for one application to another by mistake.
- the invention is therefore particularly advantageous for an exclusively speech-controlled dialog management system, or in an application where it is impracticable or dangerous for the user to have to look at a screen to follow the dialog, such as an automobile navigation system, where the user should not be distracted from concentrating on the traffic, or a computer-aided surgical procedure, where the surgeon must remain focussed on the operation taking place while being constantly informed of the status of the procedure.
- the invention therefore allows numerous separate applications, even of differing natures, to be driven by a common dialog system and to be monitored and controlled by a user.
- a dialog management system might be incorporated in an already existing device such as a PC, television, video recorder etc., and might inform the user of the status of various applications running in a home and/or office environment.
- the dialog management system is implemented as a stand-alone device, with a physical aspect such as that of a robot or, preferably, a human.
- the dialog system might be realised as a dedicated device as described, for example, in DE 10249060 A1 , constructed in such a way that a moveable part with schematic facial features can turn to face the user, giving the impression that the device is listening to the user.
- Such a dialog management system might even be constructed in such a fashion that it can accompany the user as he moves from room to room.
- the interfaces between the dialog management system and the individual applications may be realised by means of cables.
- the interfaces are realised in a wireless manner, such as infra-red, Bluetooth, etc., so that the dialog management system remains essentially mobile and is not restricted to being positioned in the vicinity of the applications which it is used to drive. If the wireless interfaces have sufficient reach, the dialog management system can easily be used for controlling numerous applications for devices located in different rooms of a building, such as an office block or private house.
- the interfaces between the dialog management system and the individual applications are preferably managed in a dedicated application interface unit.
- the communication between the applications and the dialog management system is managed by forwarding to each application any commands or instructions interpreted from the spoken user input, and by receiving from an application any feedback intended for the user.
- the application interface unit can deal with several applications in a parallel manner.
- An application driven by the dialog management system might be a program running as software on a personal computer, a network, or any electronic device controlled by a processor or simple circuitry, such as a heating system for a household, a microwave oven, etc.
- an application can be understood to control a mechanical or physical device or object not ordinarily controlled by a processor.
- a device or object might be a purely mechanical device or object such as, for example, a letterbox.
- Such an object might be provided with appropriate sensors and an interface to the dialog management system, so that the dialog management system is informed when, for example, letters are dropped into the letterbox. This event might then be communicated to the user by an appropriate auditory icon, such as a post horn sound.
- the user of the dialog management system can thus tell whether he has received a postal delivery without having to actually go and see.
- Such an application of a dialog management system according to the invention might be particularly advantageous for a user living in a high-rise apartment block, or for a physically disabled user.
- a heating system, such as the household type of heating system that can be re-programmed by the user according to season, might be controlled by a dialog management system according to the invention.
- the user might avail of the dialog management system to easily reprogram the heating system by means of spoken commands before going on vacation, thus being spared the necessity of a time-consuming manual reprogramming.
- the dialog management system can report the status of the heating system to the user, whereby the relevant prompts may be accompanied by appropriate auditory icons.
- An application can also be understood to be an essentially electronic device such as an intercom or telephone.
- the dialog management system could be connected to the intercom or telephone by means of a suitable interface, and can assist the user in dealing with a visitor or an incoming call by informing the user of the event by emitting an appropriate auditory icon - for example the sound of knocking on wood for a visitor at the door - without the user actually having to first open the door or pick up the telephone receiver.
- User input to the dialog management system can be vocal, whereby spoken commands or comments of the user are recorded by means of the input detection arrangement, for example, a microphone.
- the input detection arrangement might - if the dialog management system is not exclusively speech-controlled - additionally comprise a keyboard, mouse, or a number of buttons by means of which the user can input commands to the system.
- An advanced input detection arrangement might even feature cameras for sensing movement of the user, so that the user might communicate with the dialog management system by means of gestures, for example by waving his hand or shaking his head.
- the dialog management system interprets the user input, determines the application for which the user input is intended, and converts the user input to a form suitable for understanding by that application.
- Spoken user input is analysed for content, and feedback from the application is converted to an output prompt by a core dialog engine.
- the dialog management system communicates with the user by means of a sound output arrangement, preferably one or more loudspeakers, for outputting audible prompts which are generated by the core dialog engine in response to feedback from an application.
- the core dialog engine comprises several units or modules for performing the usual steps of speech recognition and speech synthesis, such as a language understanding unit, a speech synthesis unit, etc.
- a dialog control unit interprets the text identified by the language understanding unit, identifies the application for which it is intended, and converts it into a form suitable for processing by that application. Furthermore, the dialog control unit might analyse incoming feedback from an application and forward a suitable auditory icon, chosen from the unique set of auditory icons associated with that application, to the output sound arrangement.
- the audible prompts comprise auditory icons, which are understood to be dedicated sound chunks describing a particular type of feedback from an application.
- the auditory icons are used by the application to indicate any event during the dialog flow, or that a particular event has occurred - probably of interest to the user - such as the arrival of an electronic mail. Furthermore, the auditory icons might be used to indicate that an application is awaiting a user response, for example if the user has missed a prompt. Auditory icons are preferably used to indicate any change in operational status of an application about which the user should be informed.
- An application might feature a complete set of auditory icons for use in any situation where the application can give the user feedback concerning its status or activities.
- an application might supply the dialog management system with a copy of its set of auditory icons, along with any associated instructions or accompanying information regarding the suitable use or playback of each auditory icon.
- These icons are managed by the dialog management system in an auditory icon management unit, which keeps track of which auditory icon is assigned to which application, and the type of feedback for which each auditory icon is to be used.
- the dialog management system might acquire the complete set of auditory icons at the outset of a dialog flow between the user and the application, or upon a first activation or installation of the application, and the auditory icon management unit might store all information regarding the auditory icons and their associated instructions in a local memory for use at a later point in time. In this way, the dialog management system ensures that it has any auditory icon that it might require for providing appropriate feedback to the user, regardless of what might arise during the dialog flow.
- the dialog management system might first request an application to supply only the relevant identifying information for each auditory icon in its set, such as a unique descriptive name or number, and any usage instructions associated with the different auditory icons.
- the dialog management system might then request each auditory icon only as the necessity arises, in order to reduce memory costs.
- the dialog management system might equally decide, on the basis of the preceding dialog flow, which type of auditory icon it might require for a particular application in the near future, and it might request this auditory icon in advance from the application.
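- The descriptor-first strategy above can be sketched as a lazy cache: the system initially stores only identifying information and usage instructions per icon, fetches the sound data only when the necessity arises, and may prefetch an icon it anticipates needing. This is a hedged sketch; `LazyIconCache` and `fetch_fn` are hypothetical names, with `fetch_fn` standing in for a request over the application interface:

```python
class LazyIconCache:
    """Store icon descriptors up front; fetch sound chunks only on demand."""

    def __init__(self, descriptors, fetch_fn):
        self.descriptors = descriptors   # icon name -> usage instructions
        self._fetch = fetch_fn           # callable: icon name -> sound data
        self._sounds = {}                # locally stored sound chunks

    def get(self, name):
        if name not in self._sounds:             # request only when needed,
            self._sounds[name] = self._fetch(name)  # reducing memory costs
        return self._sounds[name]

    def prefetch(self, name):
        self.get(name)   # fetch in advance of anticipated use in the dialog flow
```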
- if an application does not have its own auditory icons, the dialog management system can provide an appropriate set.
- the dialog management system might be able to determine the nature of the application and decide on a suitable set of auditory icons, or the user might choose to define the auditory icons himself. He might do this by locating a sound chunk in digital form, for example by downloading from the internet or extracting a suitable sound chunk from a soundtrack or song, or he might record a sound chunk using a recording apparatus and communicate the recording to the dialog management system.
- For example, he might record or obtain a recording of a Formula One racing car being driven at speed, transfer the recording to the dialog management system, where it is stored in a local memory by the auditory icon management unit, and specify that this sound chunk be played whenever an application for providing sports news reports an update about a Formula One race.
- the user might also advantageously use the microphone of the dialog management system to record a suitable sound chunk.
- the dialog management system is equipped with a suitable interface for connection to a portable memory such as a USB stick, memory card etc., or to any external network such as the internet, for the purpose of locating and downloading sound chunks for use as auditory icons.
- the dialog management system is able to provide an application with any auditory icons which it might require. For example, it might be that an application only disposes of one or two auditory icons, for example to indicate the start of a process, or to indicate that an error has occurred, requiring the attention of the user. However, such a small selection might not be sufficient for an intuitive and easily understood dialog flow between the user and the application. In this case, the dialog management system might choose a set of suitable auditory icons from a selection available, and assign these to the application. In another example, on loading a new application, the dialog management system examines the auditory icons associated with the new application, and compares them to the auditory icons already assigned to the other applications.
- the dialog management system preferably informs the user, and suggests suitable alternatives if it has any available. If no suitable alternative auditory icons are available, the dialog management system might prompt the user to enter suitable replacements.
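- The comparison on loading a new application, described above, can be sketched as follows. All names are illustrative; the spare icons stand in for the collection of suitable alternatives the system may have available:

```python
def resolve_clashes(new_icons, assigned, spare_icons):
    """Check a new application's icons against those already assigned.

    Returns (accepted icons, feedback types still clashing, for which the
    user must be prompted to enter suitable replacements).
    """
    accepted, unresolved = {}, []
    spares = list(spare_icons)
    for feedback, icon in new_icons.items():
        if icon in assigned:
            if spares:
                accepted[feedback] = spares.pop(0)   # suggest an alternative
            else:
                unresolved.append(feedback)          # no alternative available
        else:
            accepted[feedback] = icon
    return accepted, unresolved
```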
- auditory icons which an application might use to provide audible feedback to the user are start auditory icons, to be played when a dialog flow between the user and the application is activated or reactivated from stand-by, and end auditory icons, to be played when the dialog flow between the user and the application is concluded, deactivated, or placed in a stand-by mode.
- the start auditory icon itself should reflect the nature of the application, while the end auditory icon might simply be the sounds of the start icon, played in reverse order.
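- The reversal idea above, sketched on a sound chunk represented here simply as a list of samples (a real implementation would operate on PCM audio data):

```python
def end_icon_from_start(start_samples):
    """Derive an end auditory icon by playing the start icon in reverse order."""
    return start_samples[::-1]
```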
- An application might also use informative auditory icons, whose sound contains some clue as to the nature of the application or the actual feedback type associated with this auditory icon.
- an application for supplying weather forecast updates might play an auditory icon with weather-associated sounds such as wind for stormy weather, raindrops for rainy weather and birdsong for fair weather.
- Other examples of auditory icons might be those used to provide status or information updates during the time that an application is active.
- an application running a personal digital assistant might have several auditory icons for supplying the user with different types of status feedback concerning appointments, incoming emails, due-dates for reports, etc.
- the personal digital assistant might repeatedly remind the user of an upcoming appointment using an appropriate auditory icon, with the reminders becoming more and more persistent as the appointment draws near.
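- The escalating persistence described above might be realised along these lines; the thresholds and repeat counts are purely illustrative assumptions:

```python
def reminder_repeats(minutes_until):
    """How often to repeat the reminder icon, growing as the appointment nears."""
    if minutes_until > 60:
        return 1
    if minutes_until > 15:
        return 2
    return 3   # most persistent just before the appointment
```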
- the user might specify which auditory icons of which applications he would like to hear during a dialog flow, by entering suitable information into a user profile. He might also specify the loudness of the auditory icons, and the number of times an auditory icon is to be played during the dialog flow. In addition, he can assign priorities to the various applications, so that feedback from an intercom takes priority over an application such as a personal digital assistant. In this way, the user ensures that he will always be informed of the higher-priority application in the event that higher- and lower-priority applications simultaneously report feedback in the dialog flow.
- the user profile can be consulted regularly or after every modification by the auditory icon management unit to determine whether an auditory icon should be played back, the desired loudness, and the number of times this auditory icon can be played back during this dialog flow.
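- Consulting the user profile before playback, as described above, might look like the following sketch. The profile keys (`enabled`, `max_plays`, `loudness`, `priority`) and default values are assumptions for illustration:

```python
def should_play(profile, app, plays_so_far):
    """Return the loudness at which to play the icon, or None to suppress it."""
    prefs = profile.get(app, {})
    if not prefs.get("enabled", True):
        return None                                  # user switched this app off
    if plays_so_far >= prefs.get("max_plays", 3):    # per-dialog-flow play limit
        return None
    return prefs.get("loudness", 1.0)

def pick_priority(profile, apps):
    """Among simultaneously reporting applications, choose the highest priority."""
    return max(apps, key=lambda a: profile.get(a, {}).get("priority", 0))
```

- With such a profile, an intercom assigned a higher priority would always be reported ahead of a personal digital assistant when both produce feedback at once.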
- the dialog management system can deduce user preferences by interpreting dialog flow. For example, if an application has reported a reminder for an upcoming appointment by means of an appropriate auditory icon, and the user replies "I know, I know", the dialog management system can interpret this to mean that the user does not need reminding again, and might suppress the auditory icon for this feedback the next time it is initiated by the application. This level of "intelligent" interpretation on the part of the dialog management system might also be specified by the user in the user profile. For a dialog management system used by more than one user, a number of user profiles can preferably be configured, so that each user has his own private user profile in which he can specify his own personal preferences.
- a dialog management system might perform some of the processing steps described above by implementing software modules or a computer program product.
- a computer program product might be directly loadable into the memory of a programmable dialog management system.
- Some of the units or modules such as the core dialog engine, application interface unit and auditory icon management unit can thereby be realised in the form of computer program modules. Since any required software or algorithms might be encoded on a processor of a hardware device, an existing electronic device might easily be adapted to benefit from the features of the invention.
- the units or blocks for processing user input and the output prompts in the manner described can equally be realised using hardware modules.
- Fig.1 is a schematic block diagram of a dialog management system in accordance with an embodiment of the present invention.
- the system is shown as part of a user device, for example a home dialog system.
- the interface between the user and the present invention has not been included in the diagram.
- the dialog management system 1 features an application interface 10 for handling incoming and outgoing information passed between the dialog management system 1 and the applications A1, A2, A3, ..., An. Furthermore, the dialog management system 1 can obtain information from each application A1, A2, A3, ..., An regarding any auditory icons it might feature, and when these auditory icons should be played. This information is stored in an auditory icon management unit 11. In this example, one of the applications A1 might automatically provide the dialog management system 1 with all relevant information concerning its set of auditory icons, for example when the application A1 is started or booted.
- Another application A3 might only submit descriptive information regarding its auditory icons in advance, and submit a single auditory icon upon request in the event that the auditory icon is actually required in the dialog flow.
- the dialog management system 1 can request an application A1, A2, A3, ..., An to provide information regarding one or more auditory icons as required, or when the application is started.
- the auditory icon management unit 11 can assign auditory icons to an application A2 by choosing suitable ones from a collection of pre-defined auditory icons 13. For such an application, the user might prefer to have the auditory icon management unit 11 assign a particular sound recording to the application A2. For example, the user might like to hear the sound of birdsong when the weather service A2 reports fair weather. If stormy weather is forecast, the user might like to hear the sound of thunder.
- the user can input these recordings as audio data in a suitable format via a user interface 15, and have the auditory icon management unit 11 assign them to the weather service application A2.
- Another way of supplying the auditory icon management unit 11 with such recordings is to download them from an external computer or a network 12 such as the internet, via a suitable interface 14.
- the dialog flow in this example consists of communication between the user, not shown in the diagram, and the various applications A1, A2, A3, ..., An driven by the dialog management system 1.
- the user issues spoken commands or requests to the dialog management system 1 through a microphone 5.
- the spoken commands or requests are recorded and digitised in an input detection arrangement 4, which passes the recorded speech input to a core dialog engine 8.
- This engine 8 comprises several blocks for performing the usual steps involved in speech recognition - an audio interface block 20 performs some necessary digital signal processing on the input speech signal before forwarding it to an automatic speech recogniser 21. This extracts any recognisable speech components from the input audio signal and forwards these to a language understanding block 22.
- the spoken commands or requests of the user are analysed for relevance and passed on as appropriate to the dialog controller 23, which converts the user input into commands or requests that can be executed by the appropriate application A 1 , A 2 , A 3 , ..., A n .
- If it is necessary to obtain some further information from the user, for example if the spoken commands cannot be parsed or understood by the automatic speech recogniser 21 and language understanding 22 blocks, or if the spoken commands cannot be applied to any of the applications A 1 , A 2 , A 3 , ..., A n that are active, the dialog controller 23 generates appropriate requests and forwards these to a speech generator 24 where they are synthesised to speech.
- the audio interface block 20 performs the necessary digital signal processing on the output speech signal, which is then converted in a sound output arrangement 6 such as a loudspeaker to give audible sound 7.
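The processing chain through the core dialog engine described above can be sketched as a simple pipeline. This is a hypothetical illustration of the data flow between blocks 20 to 24; the function bodies are trivial stand-ins, not the patent's actual signal-processing or recognition algorithms.

```python
# Hypothetical sketch of the core dialog engine (8) pipeline:
# audio interface (20) -> speech recogniser (21) -> language
# understanding (22) -> dialog controller (23). All names are
# illustrative placeholders.

def audio_interface(raw_audio):
    # stand-in for digital signal processing of the input speech signal
    return raw_audio.strip().lower()

def speech_recogniser(signal):
    # stand-in for ASR: here the "signal" is already text
    return signal.split()

def language_understanding(tokens):
    # trivial intent extraction: first token is treated as the command
    return {"intent": tokens[0], "args": tokens[1:]}

def dialog_controller(parsed, applications):
    # route the interpreted command to the appropriate application,
    # or generate a clarification request for the speech generator (24)
    handler = applications.get(parsed["intent"])
    if handler is None:
        return "Sorry, which application did you mean?"
    return handler(parsed["args"])

# usage: a diary entry routed to a (stand-in) personal digital assistant
applications = {"enter": lambda args: "OK, appointment entered: " + " ".join(args)}
reply = dialog_controller(
    language_understanding(
        speech_recogniser(audio_interface("Enter appointment Monday 11am"))),
    applications,
)
print(reply)  # OK, appointment entered: appointment monday 11am
```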
- the user might wish to enter an appointment into the diary of his personal digital assistant A 1 . All he needs to do is to say "Enter appointment with tax advisor next Monday at 11am".
- the core dialog engine 8 converts the command into the appropriate form and submits it to the personal digital assistant application A 1 . If the appointment can be entered without any problem into the personal digital assistant A 1 , the appropriate feedback is reported to the dialog management system 1, which chooses the appropriate confirmatory feedback - such as a spoken "OK" or "Roger" - to be output.
- the personal digital assistant A 1 reports back to the dialog management system 1, where the application interface 10 and/or the dialog controller 23 interprets the application's response, and chooses the appropriate auditory icon - for example the sound of clashing cymbals to indicate to the user that the new appointment clashes with an appointment already entered. Additionally, the dialog controller 23 triggers generation of a suitable prompt, e.g. "You already have an appointment at 11am with Mr. So-and-so". Optionally, the user may deactivate the prompt output if detailed feedback is not desired by the user.
- the user has specified his preferences regarding the playback of auditory icons in a user profile, to customise or configure the extent to which he would like to be informed about events occurring in the applications he uses, and which applications are to be accorded a higher priority in the dialog flow. These preferences might endure until changed at some later time by the user, or they might be of a transitory nature. For example, the user might tell the dialog management system how to react within a certain period of time.
- the dialog management system suppresses the reporting of minor events occurring during the following two hours, such as an automatic weather update, and postpones for two hours all relatively unimportant events such as 24-hour reminders for upcoming scheduled appointments "Dentist tomorrow afternoon at 3pm".
- the user would only be interrupted during the specified time by a relatively important event, such as a scheduled appointment "Meeting with director in 15 minutes" or a telephone call from a client tagged in the telephone application A 3 as being important.
- the dialog management system decides what is important and what is relatively unimportant by examining the information specified in the user profile 3.
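The time-limited suppression behaviour described above can be sketched as a small filter. The three priority levels and the two-hour quiet window follow the example in the text; the data structures and function are hypothetical.

```python
# Sketch of time-limited event filtering: during a quiet period, minor
# events are suppressed, relatively unimportant events are postponed,
# and important events are passed through immediately. Priority levels
# and the quiet window come from the example in the text; everything
# else is an illustrative assumption.

MINOR, UNIMPORTANT, IMPORTANT = 0, 1, 2

def filter_event(priority, now, quiet_until, postponed):
    """Return "play" if the event should be reported now, else None."""
    if now >= quiet_until or priority == IMPORTANT:
        return "play"                    # e.g. "Meeting with director in 15 minutes"
    if priority == MINOR:
        return None                      # e.g. automatic weather update: dropped
    postponed.append(priority)           # e.g. 24-hour appointment reminder
    return None

# usage: within a two-hour quiet period (times in minutes)
postponed = []
print(filter_event(IMPORTANT, now=30, quiet_until=120, postponed=postponed))    # play
print(filter_event(MINOR, now=30, quiet_until=120, postponed=postponed))        # None
print(filter_event(UNIMPORTANT, now=30, quiet_until=120, postponed=postponed))  # None
print(len(postponed))  # 1: one reminder postponed for later output
```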
- Other preferences might specify the priority given to the applications if two or more applications indicate that auditory icons are to be played at the same time.
- the user has specified in the user profile 3 that the telephone A 3 is to be assigned a higher priority than the news and weather service A 2 . If the news and weather service A 2 is about to give its automatic news update, and an incoming call arrives at the same time, the application interface 10 acknowledges that the telephone application A 3 has the higher priority, and suppresses the auditory icon of the news and weather service A 2 , which may be postponed for output at a later point in time.
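The arbitration between simultaneous auditory icons described above can be sketched as follows. The priorities mirror the telephone-versus-news example in the text; the function, names and data layout are hypothetical.

```python
# Hypothetical arbitration in the application interface (10) when two
# applications request auditory icons at the same time: the application
# with the higher user-profile priority wins, the others are queued for
# output at a later point in time.

profile_priorities = {"A3_telephone": 2, "A2_news_weather": 1}  # from user profile

def arbitrate(requests, priorities, queue):
    """requests: list of (app_id, icon). Play the highest-priority icon,
    postpone the rest."""
    ordered = sorted(requests, key=lambda r: priorities.get(r[0], 0), reverse=True)
    winner, *losers = ordered
    queue.extend(losers)   # postponed for a later point in the dialog flow
    return winner

# usage: an incoming call coincides with an automatic news update
queue = []
winner = arbitrate(
    [("A2_news_weather", "news_jingle"), ("A3_telephone", "ring_tone")],
    profile_priorities,
    queue,
)
print(winner)  # ('A3_telephone', 'ring_tone')
print(queue)   # [('A2_news_weather', 'news_jingle')]
```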
- the auditory icon management unit might be realised as part of the core dialog engine, or be incorporated in another module such as the dialog controller.
- the dialog system might be able to determine the quality of the current user's voice after processing a few utterances, or the user might make himself known to the system by entering an identification code which might then be used to access stored user profile information which in turn would be used to generate appropriate control parameters for the audio interface.
Description
- This invention relates in general to a method for driving multiple applications by a common, at least partially speech-based, dialog management system and to a dialog management system for driving multiple applications.
- Recent developments in the area of man-machine interfaces have led to widespread use of technical devices or applications which are managed or driven by means of a dialog between an application and the user of the application. Most dialog management systems are based on the display of visual information and manual interaction on the part of the user. For instance, a user can enter into a dialog or dialog flow with a personal digital assistant in order to plan appointments or read incoming mails. The dialog can be carried out by the dialog management system issuing prompts to which the user responds by means of a pen or keyboard input. Such an application can be requested by the user to report events which are occurring or which will occur in the near future. For example, the personal digital assistant can remind the user of an upcoming appointment or important date. The reminder might be graphically presented on a display, and accompanied by an audible reminder such as a beep, ping or similar artificial sound, to attract the user's attention and remind him to look at the display to see the message or reminder conveyed by the application. The same type of beep or ping might be used as a general attention-getting device, or several different types of sound might be used to indicate different types of events. Such a beep is commonly referred to, in a play on words, as an "earcon", being the audible equivalent of an icon.
- As long as such a dialog is carried out between the user and only one application, it is not particularly difficult to remember which earcon or beep is associated with which event. However, if the dialog management system is managing the dialog between a user and a number of applications, it can become quite confusing, since the sounds used to indicate the various types of events are generally limited to beeps and other artificial-sounding electronic noises. The user might be confused and mistake one type of sound for another, thereby misinterpreting the dialog flow. The paper "The design of sonically enhanced widgets" by Stephen Brewster, Glasgow Interactive Systems Group, Department of Computing Science, University of Glasgow, describes the use of earcons to provide a user with distinct audible feedback when interacting with more than one application.
US 2003/0098892 describes a mobile device having a small display in which different icons are shown in the display in conjunction with audible icons to emphasise their relative relevance in the display. - An at least partially speech-based dialog management system however allows a user to enter into a one-way or two-way spoken dialog with an application. The user can issue spoken commands and receive visual and/or audible feedback from the dialog system. One such example might be a home electronics management system, where the user issues spoken commands to activate a device e.g. the video recorder. Another example might be the operation of a navigation device or another device in a vehicle in which the user asks questions of or directs commands at the device, which gives a response or asks a question in return. More advanced dialog management systems can issue spoken prompts and interpret spoken user input. For example, if the user wishes to check the status of his electronic mailbox, he might say "Check my mailbox", and the dialog management system, after forwarding the necessary commands to the application and interpreting the result reported back, might reply "You've got mail" or "Mailbox is empty" as appropriate.
US 6,513,009 B1 describes a system in which distinct voices are used to supply synthesised spoken feedback to a user. The user can identify the application from which the feedback is originating on the basis of the voice characteristics. However, such spoken feedback can be irritating, even when limited to terse phrases, especially if the dialog management system is driving a number of applications simultaneously. For example, if the dialog management system is controlling the dialog between a personal digital assistant, a personal computer, a telephone, a home entertainment system and a news and weather service, the user might be continually bombarded with speech feedback like "Incoming call from Mr. So-and-so", "Weather is set to stay fine", "The match between Bayern München and Real Madrid is due to start in 5 minutes on channel XYZ - shall I record it?", "Check-up due at dentist in the next two weeks - do you want an appointment?" and "Internet connection timeout after 5 minutes", etc. The user might eventually be driven to distraction by the volume of messages being output, even though the messages are relevant and the information has been specifically requested. - An attempt at providing a dialog management system which informs the user of the status of an application via auditory icons as an accompaniment to speech feedback has been made in "Contextual Awareness, Messaging and Communication in Nomadic Audio Environments" by Nitin Sawhney, M.Sc. Thesis, Massachusetts Institute of Technology, 1998. This thesis describes a portable device which is able to interface to a remote server. The status of one or more programs active on the server can be reported by the portable audio device, typically worn on the user's lapel.
This device is limited to receiving messages only from different programs running on this remote server and to monitoring the activity of these programs - all of a similar nature - so that these can in effect be regarded as a single application. Actual driving of numerous separate applications, even of differing natures, by a common dialog system, wherein the user can not only monitor but also control these different applications, is not foreseen in this thesis.
- Therefore, an object of the present invention is to provide an easy and inexpensive method for ensuring comfortable and uncomplicated distinction by the user between different applications with which he is interacting using a common dialog management system, and in particular to ensure that the user will not issue a command intended for one application to another by mistake.
- To this end, the present invention provides a method for driving numerous applications by a common dialog management system where a unique set of auditory icons is assigned to each application, and where the common dialog management system informs a user of the status of an application by audible playback, at a specific point in a dialog flow, of a relevant auditory icon selected from among the unique set of auditory icons of the application, and wherein an application submits a set of auditory icons and associated instructions concerning the use thereof to the dialog management system, and/or the dialog management system supplies an application with a unique set of auditory icons by modifying non-unique auditory icons in a set of auditory icons of the application and/or choosing unique auditory icons for the application from a collection of auditory icons.
- An "auditory icon" can be any type of sound or dedicated sound chunk used to describe a particular type of feedback from the application, such as an artificial short sound chunk (earcon) or a sound chunk resembling a real-world sound, such as a recording of a relevant sound.
- A dialog management system according to the invention comprises an input detection arrangement for detecting user input to the system, a sound output arrangement for outputting audible prompts, a core dialog engine for coordinating a dialog flow by interpreting user input and generating output prompts, an application interface for communication between the dialog management system and the applications, a source of unique sets of auditory icons assigned to the applications, and an auditory icon management unit for selecting relevant auditory icons from the unique sets of auditory icons corresponding to the applications for playback at appropriate points in the dialog flow. The dialog management system is realised to obtain a set of auditory icons and associated instructions concerning the use thereof from an application and/or to supply an application with a unique set of auditory icons by modifying non-unique auditory icons in a set of auditory icons of the application and/or by choosing unique auditory icons for the application from a collection of auditory icons.
- Advantageously, the dialog management system according to the invention is able to provide an application with any auditory icons which it might require. For example, the dialog management system might choose a set of suitable auditory icons from a selection available, and assign these to the application. Furthermore, it might be that two or more applications have similar or identical auditory icons in their repertoire. To avoid any confusion on the part of the user that might arise should both applications be simultaneously active, these auditory icons might be modified by the dialog management system in some way, or might be replaced by different, equally suitable auditory icons.
- Using a dialog management system according to the present invention to drive numerous applications, the user can easily distinguish between the different types of feedback from the different applications. Since each type of feedback reported back from an application is accompanied by a unique meaningful audible sound, easily associated by the user with the corresponding application, the user does not run the risk of becoming confused, and will not mistake one type of feedback for another. The unique auditory icons keep the user constantly informed about the application with which he is currently interacting. This ensures that the user cannot issue a command intended for one application to another by mistake. The invention is therefore particularly advantageous for an exclusively speech-controlled dialog management system, or in an application where it is impracticable or dangerous for the user to have to look at a screen to follow the dialog, such as an automobile navigation system where the user should not be distracted from concentrating on the traffic, or a computer-aided surgical procedure, where the surgeon must remain focussed on the operative procedure taking place while being constantly informed of the status of the procedure. The invention therefore allows numerous separate applications, even of differing natures, to be driven by a common dialog system and to be monitored and controlled by a user.
- The dependent claims disclose particularly advantageous embodiments and features of the invention whereby the system could be further developed according to the features of the method claims.
- A dialog management system according to the present invention might be incorporated in an already existing device such as a PC, television, video recorder etc., and might inform the user of the status of various applications running in a home and/or office environment. In a preferred embodiment, the dialog management system is implemented, as a stand-alone device, with a physical aspect such as that of a robot or preferably a human. The dialog system might be realised as a dedicated device as described, for example, in
DE 10249060 A1 , constructed in such a way that a moveable part with schematic facial features can turn to face the user, giving the impression that the device is listening to the user. Such a dialog management system might even be constructed in such a fashion that it can accompany the user as he moves from room to room. The interfaces between the dialog management system and the individual applications may be realised by means of cables. Preferably, the interfaces are realised in a wireless manner, such as infra-red, Bluetooth, etc., so that the dialog management system remains essentially mobile, and is not restricted to being positioned in the vicinity of the applications which it is used to drive. If the wireless interfaces have sufficient reach, the dialog management system can easily be used for controlling numerous applications for devices located in different rooms of a building, such as an office block or private house. The interfaces between the dialog management system and the individual applications are preferably managed in a dedicated application interface unit. Here, the communication between the applications and the dialog management system is managed by forwarding to each application any commands or instructions interpreted from the spoken user input, and by receiving from an application any feedback intended for the user. The application interface unit can deal with several applications in a parallel manner. - An application driven by the dialog management system might be a program running as software on a personal computer, a network, or any electronic device controlled by a processor or simple circuitry, such as a heating system for a household, a microwave oven, etc. Equally, an application can be understood to control a mechanical or physical device or object not ordinarily controlled by a processor. Such a device or object might be a purely mechanical device or object such as, for example, a letterbox.
Such an object might be provided with appropriate sensors and an interface to the dialog management system, so that the dialog management system is informed when, for example, letters are dropped into the letterbox. This event might then be communicated to the user by an appropriate auditory icon, such as a post horn sound. The user of the dialog management system can thus tell whether he has received a postal delivery without having to actually go and see. Such an application of a dialog management system according to the invention might be particularly advantageous for a user living in a high-rise apartment block, or for a physically disabled user. A heating system, such as the household type of heating system that can be re-programmed by the user according to season, might be controlled by a dialog management system according to the invention. The user might avail of the dialog management system to easily reprogram the heating system by means of spoken commands before going on vacation, thus being spared the necessity of a time-consuming manual reprogramming. The dialog management system can report the status of the heating system to the user, whereby the relevant prompts may be accompanied by appropriate auditory icons. An application can also be understood to be an essentially electronic device such as an intercom or telephone. Here, the dialog management system could be connected to the intercom or telephone by means of a suitable interface, and can assist the user in dealing with a visitor or an incoming call by informing the user of the event by emitting an appropriate auditory icon - for example the sound of knocking on wood for a visitor at the door - without the user actually having to first open the door or pick up the telephone receiver.
- User input to the dialog management system can be vocal, whereby spoken commands or comments of the user are recorded by means of the input detection arrangement, for example, a microphone. The input detection arrangement might - if the dialog management system is not exclusively speech-controlled - additionally comprise a keyboard, mouse, or a number of buttons by means of which the user can input commands to the system. An advanced input detection arrangement might even feature cameras for sensing movement of the user, so that the user might communicate with the dialog management system by means of gestures, for example by waving his hand or shaking his head. The dialog management system interprets the user input, determines the application for which the user input is intended, and converts the user input to a form suitable for understanding by that application.
- Spoken user input is analysed for content, and feedback from the application is converted to an output prompt by a core dialog engine. The dialog management system communicates with the user by means of a sound output arrangement, preferably one or more loudspeakers, for outputting audible prompts which are generated by the core dialog engine in response to feedback from an application.
- The core dialog engine comprises several units or modules for performing the usual steps of speech recognition and speech synthesis, such as a language understanding unit, a speech synthesis unit etc. A dialog control unit interprets the text identified by the language understanding unit, identifies the application for which it is intended, and converts it into a form suitable for processing by that application. Furthermore, the dialog control unit might analyse incoming feedback from an application and forward a suitable auditory icon, chosen from the unique set of auditory icons associated with that application, to the sound output arrangement. The audible prompts comprise auditory icons, which are understood to be dedicated sound chunks describing a particular type of feedback from an application.
- The auditory icons are used by the application to indicate any event during the dialog flow, or that a particular event has occurred - probably of interest to the user - such as the arrival of an electronic mail. Furthermore, the auditory icons might be used to indicate that an application is awaiting a user response, for example if the user has missed a prompt. Auditory icons are preferably used to indicate any change in operational status of an application about which the user should be informed.
- An application might feature a complete set of auditory icons for use in any situation where the application can give the user feedback concerning its status or activities. As already indicated, an application might supply the dialog management system with a copy of its set of auditory icons, along with any associated instructions or accompanying information regarding the suitable use or playback of each auditory icon. These icons are managed by the dialog management system in an auditory icon management unit, which keeps track of which auditory icon is assigned to which application, and the type of feedback for which each auditory icon is to be used. The dialog management system might acquire the complete set of auditory icons at the outset of a dialog flow between the user and the application, or upon a first activation or installation of the application, and the auditory icon management unit might store all information regarding the auditory icons and their associated instructions in a local memory for use at a later point in time. In this way, the dialog management system ensures that it has any auditory icon that it might require for providing appropriate feedback to the user, regardless of what might arise during the dialog flow.
- Alternatively, the dialog management system might first request an application to supply only the relevant identifying information for each auditory icon in its set, such as a unique descriptive name or number, and any usage instructions associated with the different auditory icons. The dialog management system might then request each auditory icon only as the necessity arises, in order to reduce memory costs. The dialog management system might equally decide, on the basis of the preceding dialog flow, which type of auditory icon it might require for a particular application in the near future, and it might request this auditory icon in advance from the application.
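The on-demand variant described above can be sketched as a small cache that acquires icon metadata up front and fetches audio data lazily. The `Application` and `IconCache` classes and their methods are illustrative assumptions, not the patent's actual interface.

```python
# Sketch of on-demand auditory icon acquisition: identifying information
# and usage instructions are requested first, and the audio data for each
# icon is fetched only when it is actually needed, to reduce memory costs.
# All class and method names are hypothetical.

class Application:
    """Stand-in for an application exposing its set of auditory icons."""
    def __init__(self, catalogue, audio):
        self._catalogue = catalogue     # icon name -> usage instructions
        self._audio = audio             # icon name -> audio data (held by the app)

    def icon_catalogue(self):
        return dict(self._catalogue)

    def fetch_icon(self, name):
        return self._audio[name]

class IconCache:
    def __init__(self, app):
        self.app = app
        self.catalogue = app.icon_catalogue()  # cheap metadata, acquired up front
        self._cache = {}                       # audio data fetched only as needed

    def get(self, name):
        if name not in self._cache:            # lazy fetch on first use
            self._cache[name] = self.app.fetch_icon(name)
        return self._cache[name]

# usage: only the metadata is transferred at start; the audio data for
# "new_mail" is requested from the application on first playback
app = Application({"new_mail": "play on mail arrival"}, {"new_mail": b"<wav>"})
cache = IconCache(app)
print(cache.catalogue)         # {'new_mail': 'play on mail arrival'}
print(cache.get("new_mail"))   # b'<wav>'
```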
- For an application that does not avail of a pre-defined set of auditory icons, the dialog management system can provide an appropriate set. To this end, the dialog management system might be able to determine the nature of the application and decide on a suitable set of auditory icons, or the user might choose to define the auditory icons himself. He might do this by locating a sound chunk in digital form, for example by downloading from the internet or extracting a suitable sound chunk from a soundtrack or song, or he might record a sound chunk using a recording apparatus and communicate the recording to the dialog management system. For example, he might record or obtain a recording of a Formula One racing car being driven at speed, transfer the recording to the dialog management system where it is stored in a local memory by the auditory icon management unit, and specify that this sound chunk be played whenever an application for providing sports news reports an update about a Formula One race. The user might also advantageously use the microphone of the dialog management system to record a suitable sound chunk. In a preferred embodiment of the invention, the dialog management system is equipped with a suitable interface for connection to a portable memory such as a USB stick, memory card etc., or to any external network such as the internet, for the purpose of locating and downloading sound chunks for use as auditory icons.
- As already indicated, the dialog management system is able to provide an application with any auditory icons which it might require. For example, it might be that an application only disposes of one or two auditory icons, for example to indicate the start of a process, or to indicate that an error has occurred, requiring the attention of the user. However, such a small selection might not be sufficient for an intuitive and easily understood dialog flow between the user and the application. In this case, the dialog management system might choose a set of suitable auditory icons from a selection available, and assign these to the application. In another example, on loading a new application, the dialog management system examines the auditory icons associated with the new application, and compares them to the auditory icons already assigned to the other applications. If any of the new auditory icons is identical or very similar to any existing auditory icon, the dialog management system preferably informs the user, and suggests suitable alternatives if it has any available. If no suitable alternative auditory icons are available, the dialog management system might prompt the user to enter suitable replacements.
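The duplicate check performed when a new application is loaded can be sketched as follows. Comparing icons by audio-file identity is a simplifying assumption; a real system might compare the sound chunks themselves for acoustic similarity.

```python
# Sketch of the clash detection described above: the icons of a newly
# loaded application are compared against those already assigned to other
# applications, and identical ones are reported so the user can be
# informed and alternatives suggested. Icon names and files are examples.

def find_clashes(new_icons, existing_icons):
    """new_icons / existing_icons: dicts of icon name -> audio file.
    Returns the names of new icons identical to an existing one."""
    existing_files = set(existing_icons.values())
    return [name for name, audio in new_icons.items() if audio in existing_files]

# usage: a new application A4 reuses a sound already assigned to A1
existing = {"A1.reminder": "ping.wav", "A1.error": "buzz.wav"}
new_app = {"A4.alert": "ping.wav", "A4.done": "chime.wav"}

clashes = find_clashes(new_app, existing)
print(clashes)  # ['A4.alert'] - the user is informed; a replacement is needed
```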
- Examples of auditory icons which an application might use to provide audible feedback to the user are start auditory icons, to be played when a dialog flow between the user and the application is activated or reactivated from stand-by, and end auditory icons, to be played when the dialog flow between the user and the application is concluded, deactivated, or placed in a stand-by mode. The start auditory icon itself should reflect the nature of the application, while the end auditory icon might simply be the sounds of the start icon, played in reverse order. An application might also use informative auditory icons, whose sound contains some clue as to the nature of the application or the actual feedback type associated with this auditory icon. For example an application for supplying weather forecast updates might play an auditory icon with weather-associated sounds such as wind for stormy weather, raindrops for rainy weather and birdsong for fair weather. Other examples of auditory icons might be those used to provide status or information updates during the time that an application is active. For example, an application running a personal digital assistant might have several auditory icons for supplying the user with different types of status feedback concerning appointments, incoming emails, due-dates for reports, etc. For example, the personal digital assistant might repeatedly remind the user of an upcoming appointment using an appropriate audible icon, with the reminders becoming more and more persistent as the appointment draws near.
- In a preferred embodiment of the invention, the user might specify which audible icons of which applications he would like to hear during a dialog flow, by entering suitable information into a user profile. He might also specify the loudness of the auditory icons, and the number of times an auditory icon is to be played during the dialog flow. In addition, he can assign priorities to the various applications, so that feedback from an intercom takes priority over an application such as a personal digital assistant. In this way, the user ensures that he will always be informed of the higher-priority application in the event that higher- and lower-priority applications simultaneously report feedback in the dialog flow. The user profile can be consulted regularly or after every modification by the auditory icon management unit to determine whether an auditory icon should be played back, the desired loudness, and the number of times this auditory icon can be played back during this dialog flow.
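The profile consultation described above can be sketched as a lookup performed before each playback, covering whether the icon is enabled, its loudness, and how many times it may still be played in the current dialog flow. The profile fields and function are illustrative assumptions.

```python
# Hypothetical user-profile lookup consulted by the auditory icon
# management unit before playback: per-application enable flag, loudness,
# and a play-count limit for the current dialog flow. Field names are
# illustrative, not the patent's actual profile format.

profile = {
    "A1": {"enabled": True, "loudness": 0.8, "max_plays": 3},
    "A2": {"enabled": False, "loudness": 0.5, "max_plays": 1},
}

play_counts = {}  # plays so far in this dialog flow

def should_play(app_id):
    """Return the loudness to play at, or None if playback is suppressed."""
    prefs = profile.get(app_id)
    if prefs is None or not prefs["enabled"]:
        return None
    if play_counts.get(app_id, 0) >= prefs["max_plays"]:
        return None                      # already played often enough
    play_counts[app_id] = play_counts.get(app_id, 0) + 1
    return prefs["loudness"]

# usage: A1's icon plays at the configured loudness, A2's is disabled
print(should_play("A1"))  # 0.8
print(should_play("A2"))  # None
```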
- In a further preferred embodiment, the dialog management system can deduce user preferences by interpreting dialog flow. For example, if an application has reported a reminder for an upcoming appointment by means of an appropriate auditory icon, and the user replies "I know, I know", the dialog management system can interpret this to mean that the user does not need reminding again, and might suppress the auditory icon for this feedback the next time it is initiated by the application. This level of "intelligent" interpretation on the part of the dialog management system might also be specified by the user in the user profile. For a dialog management system used by more than one user, a number of user profiles can preferably be configured, so that each user has his own private user profile in which he can specify his own personal preferences.
- A dialog management system according to the present invention might perform some of the processing steps described above by implementing software modules or a computer program product. Such a computer program product might be directly loadable into the memory of a programmable dialog management system. Some of the units or modules such as the core dialog engine, application interface unit and auditory icon management unit can thereby be realised in the form of computer program modules. Since any required software or algorithms might be encoded on a processor of a hardware device, an existing electronic device might easily be adapted to benefit from the features of the invention. Alternatively, the units or blocks for processing user input and the output prompts in the manner described can equally be realised using hardware modules.
- Other objects and features of the present invention will become apparent from the following detailed descriptions considered in conjunction with the accompanying drawing. It is to be understood, however, that the drawing is designed solely for the purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims.
- The sole figure,
Fig. 1, is a schematic block diagram of a dialog management system in accordance with an embodiment of the present invention. - In the description of the figure, which does not exclude other possible realisations of the invention, the system is shown as part of a user device, for example a home dialog system. For the sake of clarity, the interface between the user and the present invention has not been included in the diagram.
-
Fig. 1 shows a dialog management system 1 with a number of interfaces for communicating with multiple external applications A1, A2, A3, ..., An. The applications A1, A2, A3, ..., An, shown in a simplified manner as blocks, can in reality be any kind of "application" or "function" about which a user would like to be informed, or which a user would like to control in some way. In this example, the applications A1, A2, A3, ..., An might include, among others, a personal digital assistant A1, a news and weather service A2, and a telephone A3. - The
dialog management system 1 features an application interface 10 for handling incoming and outgoing information passed between the dialog management system 1 and the applications A1, A2, A3, ..., An. Furthermore, the dialog management system 1 can obtain information from each application A1, A2, A3, ..., An regarding any auditory icons it might feature, and when these auditory icons should be played. This information is stored in an auditory icon management unit 11. In this example, one of the applications A1 might automatically provide the dialog management system 1 with all relevant information concerning its set of auditory icons, for example when the application A1 is started or booted. Another application A3 might only submit descriptive information regarding its auditory icons in advance, and submit a single auditory icon upon request in the event that the auditory icon is actually required in the dialog flow. The dialog management system 1 can request an application A1, A2, A3, ..., An to provide information regarding one or more auditory icons as required, or when the application A1, A2, A3, ..., An is started. - Not all applications will have a complete set of suitable auditory icons at their disposal. Some applications may not have any auditory icons at all, and some applications might even have identical auditory icons. To deal with such situations, the auditory
icon management unit 11 can assign auditory icons to an application A2 by choosing suitable ones from a collection of pre-defined auditory icons 13. For such an application, the user might prefer to have the auditory icon management unit 11 assign a particular sound recording to the application A2. For example, the user might like to hear the sound of birdsong when the weather service A2 reports fair weather. If stormy weather is forecast, the user might like to hear the sound of thunder. The user can input these recordings as audio data in a suitable format via a user interface 15, and have the auditory icon management unit 11 assign them to the weather service application A2. Another way of supplying the auditory icon management unit 11 with such recordings is to download them from an external computer or a network 12 such as the internet, via a suitable interface 14. - These different ways of obtaining auditory icon information allow the
dialog management system 1 to collect all the information it requires in order to play back the relevant auditory icons as required in the dialog flow. - The dialog flow in this example consists of communication between the user, not shown in the diagram, and the various applications A1, A2, A3, ..., An driven by the
dialog management system 1. The user issues spoken commands or requests to the dialog management system 1 through a microphone 5. The spoken commands or requests are recorded and digitised in an input detection arrangement 4, which passes the recorded speech input to a core dialog engine 8. This engine 8 comprises several blocks for performing the usual steps involved in speech recognition: an audio interface block 20 performs some necessary digital signal processing on the input speech signal before forwarding it to an automatic speech recogniser 21. This extracts any recognisable speech components from the input audio signal and forwards these to a language understanding block 22. In the language understanding block 22, the spoken commands or requests of the user are analysed for relevance and passed on as appropriate to the dialog controller 23, which converts the user input into commands or requests that can be executed by the appropriate application A1, A2, A3, ..., An. - Should it be necessary to obtain some further information from the user, for example if the spoken commands cannot be parsed or understood by the
automatic speech recogniser 21 and language understanding 22 blocks, or if the spoken commands cannot be applied to any of the applications A1, A2, A3, ..., An that are active, the dialog controller 23 generates appropriate requests and forwards these to a speech generator 24 where they are synthesised to speech. The audio interface block 20 performs the necessary digital signal processing on the output speech signal, which is then converted in a sound output arrangement 6 such as a loudspeaker to give audible sound 7. - In a typical example of a dialog flow controlled by the dialog management system of
Fig. 1, the user might wish to enter an appointment into the diary of his personal digital assistant A1. All he needs to do is to say "Enter appointment with tax advisor next Monday at 11am". The core dialog engine 8 converts the command into the appropriate form and submits it to the personal digital assistant application A1. If the appointment can be entered without any problem into the personal digital assistant A1, the appropriate feedback is reported to the dialog management system 1, which chooses the appropriate confirmatory feedback - such as a spoken "OK" or "Roger" - to be output. - If an appointment is already scheduled for the same time on that day, the personal digital assistant A1 reports back to the
dialog management system 1, where the application interface 10 and/or the dialog controller 23 interprets the application's response and chooses the appropriate auditory icon - for example the sound of clashing cymbals - to indicate to the user that the new appointment clashes with an appointment already entered. Additionally, the dialog controller 23 triggers generation of a suitable prompt, e.g. "You already have an appointment at 11am with Mr. So-and-so". Optionally, the user may deactivate the prompt output if detailed feedback is not desired. - In this example, the user has specified his preferences regarding the playback of auditory icons in a user profile, to customise or configure the extent to which he would like to be informed about events occurring in the applications he uses, and which applications are to be accorded a higher priority in the dialog flow. These preferences might endure until changed at some later time by the user, or they might be of a transitory nature. For example, the user might tell the dialog management system how to react within a certain period of time. When the user says "Don't interrupt me for the next two hours unless it's really important", the dialog management system suppresses the reporting of minor events occurring during the following two hours, such as an automatic weather update, and postpones for two hours all relatively unimportant events, such as 24-hour reminders for upcoming scheduled appointments ("Dentist tomorrow afternoon at 3pm"). The user would only be interrupted by a relatively important event, such as a scheduled appointment during the specified time ("Meeting with director in 15 minutes") or a telephone call from a client tagged in the telephone application A3 as being important. The dialog management system decides what is important and what is relatively unimportant by examining the information specified in the user profile 3.
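The transitory "don't interrupt me" preference described above could be sketched as a timed filter. The event representation, importance flag, and timing API below are illustrative assumptions:

```python
# Sketch of the transitory "don't interrupt me" preference: minor events
# are postponed until the quiet window elapses; important ones pass through.
class DoNotDisturb:
    def __init__(self):
        self.until = 0.0          # end of the quiet window (seconds)
        self.deferred = []        # events postponed until the window ends

    def quiet_for(self, now, seconds):
        self.until = now + seconds

    def report(self, now, event, important):
        # Important events always interrupt; minor ones wait.
        if important or now >= self.until:
            return event          # delivered immediately
        self.deferred.append(event)
        return None

    def flush(self, now):
        # Once the window has elapsed, release the postponed events.
        if now >= self.until:
            released, self.deferred = self.deferred, []
            return released
        return []


dnd = DoNotDisturb()
dnd.quiet_for(now=0, seconds=7200)        # "the next two hours"
minor = dnd.report(100, "weather update", important=False)
major = dnd.report(200, "meeting in 15 minutes", important=True)
```

In the embodiment, the `important` flag would itself be derived from the user profile rather than supplied by the application.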
- Other preferences might specify the priority given to the applications if two or more applications indicate that auditory icons are to be played at the same time. In this case, the user has specified in the
user profile 3 that the telephone A3 is to be assigned a higher priority than the news and weather service A2. If the news and weather service A2 is about to give its automatic news update, and an incoming call arrives at the same time, the application interface 10 acknowledges that the telephone application A3 has the higher priority, and suppresses the auditory icon of the news and weather service A2, which may be postponed for output at a later point in time. - Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention; for example, the auditory icon management unit might be realised as part of the core dialog engine, or be incorporated in another module such as the dialog controller. In one embodiment of the invention, the dialog system might be able to determine the quality of the current user's voice after processing a few utterances, or the user might make himself known to the system by entering an identification code, which might then be used to access stored user profile information, which in turn would be used to generate appropriate control parameters for the audio interface.
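The arbitration between simultaneous icon requests could be sketched as a priority sort: the highest-priority request is played and the rest are queued for later output. The priority table and tuple layout are illustrative assumptions:

```python
# Sketch of the arbitration described above: when two applications want to
# play icons at once, the higher-priority one plays; the rest are postponed.
PRIORITIES = {"telephone": 2, "news_weather": 1}   # assumed ordering


def arbitrate(requests, priorities):
    """requests: list of (application, icon); returns (played, postponed)."""
    ordered = sorted(requests,
                     key=lambda r: priorities.get(r[0], 0), reverse=True)
    return ordered[0], ordered[1:]


played, postponed = arbitrate(
    [("news_weather", "news jingle"), ("telephone", "ring tone")],
    PRIORITIES,
)
```

Unknown applications default to priority 0, so any explicitly prioritised application wins over them.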
- For the sake of clarity, throughout this application, it is to be understood that the use of "a" or "an" does not exclude a plurality, and "comprising" does not exclude other steps or elements. The use of "unit" or "module" does not limit realisation to a single unit or module.
Claims (11)
- A method for driving multiple applications (A1, A2, A3, ..., An) by a common dialog management system (1) wherein- a unique set of auditory icons (S1, S2, S3, ..., Sn) is assigned to each application (A1, A2, A3, ..., An),- and wherein the common dialog management system (1) informs a user of the status of an application (A1, A2, A3, ..., An) by playback, at a specific point in a dialog flow, of a relevant auditory icon (I1, I2, I3, ..., In) selected from the unique set of auditory icons (S1, S2, S3, ..., Sn) of the respective application (A1, A2, A3, ..., An);
and wherein- an application (A1, A2, A3, ..., An) submits a set of auditory icons (S1, S2, S3, ..., Sn) and associated instructions concerning the use thereof to the dialog management system (1);- the dialog management system (1) supplies an application (A1, A2, A3, ..., An) with a unique set of auditory icons (S1, S2, S3, ..., Sn) by modifying non-unique auditory icons (I1, I2, I3, ..., In) in a set of auditory icons (S1, S2, S3, ..., Sn) of the application (A1, A2, A3, ..., An) and/or choosing unique auditory icons (I1, I2, I3, ..., In) for the application (A1, A2, A3, ..., An) from a collection (13) of auditory icons. - A method according to claim 1, wherein the auditory icons (I1, I2, I3, ..., In) of an application (A1, A2, A3, ..., An) are played back to indicate to the user a change in operational status of an application (A1, A2, A3, ..., An).
- A method according to claim 1 or claim 2, wherein identifying information for the individual auditory icons (I1, I2, I3, ..., In) of an application (A1, A2, A3, ..., An) and associated instructions are obtained by the dialog management system (1), and the auditory icons (I1, I2, I3, ..., In) are retrieved by the dialog management system (1), from the application (A1, A2, A3, ..., An) upon request.
- A method according to claim 3, wherein the complete set of auditory icons (S1, S2, S3, ..., Sn) of an application (A1, A2, A3, ..., An) is acquired by the dialog management system (1) at the outset of a dialog flow between the user and the application (A1, A2, A3, ..., An) or upon activation or installation of the application (A1, A2, A3, ..., An).
- A method according to any of the preceding claims, wherein the set of auditory icons (S1, S2, S3, ..., Sn) for playback in a dialog flow between a user and an application (A1, A2, A3, ..., An) comprises at least one unique start auditory icon, for playback at commencement of the dialog flow and/or at least one unique end auditory icon, for playback at conclusion of a dialog flow.
- A method according to any of the preceding claims, wherein the set of auditory icons (S1, S2, S3, ..., Sn) for playback in a dialog flow between a user and an application (A1, A2, A3, ..., An) comprises a number of unique informative auditory icons (I1, I2, I3, ..., In), for playback at specific points during the dialog flow where each auditory icon (I1, I2, I3, ..., In) describes a particular type of feedback from the application (A1, A2, A3, ..., An).
- A method according to any of the preceding claims, wherein auditory icons (I1, I2, I3, ..., In) and/or playback characteristics of the auditory icons (I1, I2, I3, ..., In) are specified for a user in a user profile (3).
- A dialog management system (1) for driving a number of applications (A1, A2, A3, ..., An), comprising- an input detection arrangement (4) for detecting user input (5) to the system;- a sound output arrangement (6) for outputting an audible prompt (7) ;- a core dialog engine (8) for coordinating a dialog flow by interpreting user input (5) and generating output prompts;- an application interface (10) for communication between the dialog management system (1) and the applications (A1, A2, A3, ..., An);- a source of unique sets of auditory icons (S1, S2, S3, ..., Sn) assigned to the applications (A1, A2, A3, ..., An);- and an auditory icon management unit (11) for selecting relevant auditory icons (I1, I2, I3, ..., In) from the unique sets of auditory icons (S1, S2, S3, ..., Sn) corresponding to the applications (A1, A2, A3, ..., An) for playback at specific points in the dialog flow;
which dialog management system (1) is realised to obtain a set of auditory icons (S1, S2, S3, ..., Sn) and associated instructions concerning the use thereof from an application (A1, A2, A3, ..., An) and to supply an application (A1, A2, A3, ..., An) with a unique set of auditory icons (S1, S2, S3, ..., Sn) by modifying non-unique auditory icons (I1, I2, I3, ..., In) in a set of auditory icons (S1, S2, S3, ..., Sn) of the application (A1, A2, A3, ..., An) and/or by choosing unique auditory icons (I1, I2, I3, ..., In) for the application (A1, A2, A3, ..., An) from a collection (13) of auditory icons. - A dialog management system (1) according to claim 8, comprising a means (15) for allowing the user to input auditory icons (I1, I2, I3, ..., In).
- A dialog management system (1) according to claim 8 or claim 9, comprising an interface (14) for obtaining sets of auditory icons (S1, S2, S3, ..., Sn) or individual auditory icons (I1, I2, I3, ..., In) from an external source (12).
- A computer program product directly loadable into the memory of a programmable dialog management system (1) comprising software code portions for performing the steps of a method according to claims 1 to 7 when said product is run on the dialog management system (1).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05709048A EP1733383B1 (en) | 2004-03-29 | 2005-03-21 | A method for driving multiple applications and a dialog management system |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04101295 | 2004-03-29 | ||
PCT/IB2005/050956 WO2005093715A1 (en) | 2004-03-29 | 2005-03-21 | A method for driving multiple applications by a common dialog management system |
EP05709048A EP1733383B1 (en) | 2004-03-29 | 2005-03-21 | A method for driving multiple applications and a dialog management system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1733383A1 EP1733383A1 (en) | 2006-12-20 |
EP1733383B1 true EP1733383B1 (en) | 2009-04-15 |
Family
ID=34961270
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05709048A Not-in-force EP1733383B1 (en) | 2004-03-29 | 2005-03-21 | A method for driving multiple applications and a dialog management system |
Country Status (8)
Country | Link |
---|---|
US (1) | US20080263451A1 (en) |
EP (1) | EP1733383B1 (en) |
JP (1) | JP2007531141A (en) |
KR (1) | KR20060131929A (en) |
CN (1) | CN1938757B (en) |
AT (1) | ATE429010T1 (en) |
DE (1) | DE602005013938D1 (en) |
WO (1) | WO2005093715A1 (en) |
Families Citing this family (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040162637A1 (en) | 2002-07-25 | 2004-08-19 | Yulun Wang | Medical tele-robotic system with a master remote station with an arbitrator |
US6925357B2 (en) | 2002-07-25 | 2005-08-02 | Intouch Health, Inc. | Medical tele-robotic system |
US7813836B2 (en) | 2003-12-09 | 2010-10-12 | Intouch Technologies, Inc. | Protocol for a remotely controlled videoconferencing robot |
US20050204438A1 (en) * | 2004-02-26 | 2005-09-15 | Yulun Wang | Graphical interface for a remote presence system |
US8077963B2 (en) | 2004-07-13 | 2011-12-13 | Yulun Wang | Mobile robot with a head-based movement mapping scheme |
JP5140580B2 (en) | 2005-06-13 | 2013-02-06 | インテリジェント メカトロニック システムズ インコーポレイテッド | Vehicle immersive communication system |
US9198728B2 (en) | 2005-09-30 | 2015-12-01 | Intouch Technologies, Inc. | Multi-camera mobile teleconferencing platform |
AU2007211838A1 (en) * | 2006-02-01 | 2007-08-09 | Icommand Ltd | Human-like response emulator |
US8849679B2 (en) | 2006-06-15 | 2014-09-30 | Intouch Technologies, Inc. | Remote controlled robot system that provides medical images |
US9976865B2 (en) | 2006-07-28 | 2018-05-22 | Ridetones, Inc. | Vehicle communication system with navigation |
US9160783B2 (en) | 2007-05-09 | 2015-10-13 | Intouch Technologies, Inc. | Robot system that operates through a network firewall |
US10875182B2 (en) | 2008-03-20 | 2020-12-29 | Teladoc Health, Inc. | Remote presence system mounted to operating room hardware |
AU2009227944B2 (en) * | 2008-03-25 | 2014-09-11 | E-Lane Systems Inc. | Multi-participant, mixed-initiative voice interaction system |
US8179418B2 (en) | 2008-04-14 | 2012-05-15 | Intouch Technologies, Inc. | Robotic based health care system |
US8170241B2 (en) | 2008-04-17 | 2012-05-01 | Intouch Technologies, Inc. | Mobile tele-presence system with a microphone system |
US8838075B2 (en) | 2008-06-19 | 2014-09-16 | Intelligent Mechatronic Systems Inc. | Communication system with voice mail access and call by spelling functionality |
US9193065B2 (en) | 2008-07-10 | 2015-11-24 | Intouch Technologies, Inc. | Docking system for a tele-presence robot |
US9842192B2 (en) | 2008-07-11 | 2017-12-12 | Intouch Technologies, Inc. | Tele-presence robot system with multi-cast features |
US9652023B2 (en) | 2008-07-24 | 2017-05-16 | Intelligent Mechatronic Systems Inc. | Power management system |
US8340819B2 (en) | 2008-09-18 | 2012-12-25 | Intouch Technologies, Inc. | Mobile videoconferencing robot system with network adaptive driving |
US8996165B2 (en) | 2008-10-21 | 2015-03-31 | Intouch Technologies, Inc. | Telepresence robot with a camera boom |
US9138891B2 (en) | 2008-11-25 | 2015-09-22 | Intouch Technologies, Inc. | Server connectivity control for tele-presence robot |
US8463435B2 (en) | 2008-11-25 | 2013-06-11 | Intouch Technologies, Inc. | Server connectivity control for tele-presence robot |
US9084551B2 (en) * | 2008-12-08 | 2015-07-21 | Medtronic Xomed, Inc. | Method and system for monitoring a nerve |
US8335546B2 (en) * | 2008-12-19 | 2012-12-18 | Harris Technology, Llc | Portable telephone with connection indicator |
US8849680B2 (en) | 2009-01-29 | 2014-09-30 | Intouch Technologies, Inc. | Documentation through a remote presence robot |
US8897920B2 (en) | 2009-04-17 | 2014-11-25 | Intouch Technologies, Inc. | Tele-presence robot system with software modularity, projector and laser pointer |
WO2010135837A1 (en) | 2009-05-28 | 2010-12-02 | Intelligent Mechatronic Systems Inc | Communication system with personal information management and remote vehicle monitoring and control features |
WO2010148518A1 (en) | 2009-06-27 | 2010-12-29 | Intelligent Mechatronic Systems | Vehicle internet radio interface |
US11399153B2 (en) | 2009-08-26 | 2022-07-26 | Teladoc Health, Inc. | Portable telepresence apparatus |
US8384755B2 (en) | 2009-08-26 | 2013-02-26 | Intouch Technologies, Inc. | Portable remote presence robot |
US9978272B2 (en) | 2009-11-25 | 2018-05-22 | Ridetones, Inc | Vehicle to vehicle chatting and communication system |
US11154981B2 (en) | 2010-02-04 | 2021-10-26 | Teladoc Health, Inc. | Robot user interface for telepresence robot system |
US8670017B2 (en) | 2010-03-04 | 2014-03-11 | Intouch Technologies, Inc. | Remote presence system including a cart that supports a robot face and an overhead camera |
US10343283B2 (en) | 2010-05-24 | 2019-07-09 | Intouch Technologies, Inc. | Telepresence robot system that can be accessed by a cellular phone |
US10808882B2 (en) | 2010-05-26 | 2020-10-20 | Intouch Technologies, Inc. | Tele-robotic system with a robot face placed on a chair |
US9264664B2 (en) | 2010-12-03 | 2016-02-16 | Intouch Technologies, Inc. | Systems and methods for dynamic bandwidth allocation |
US12093036B2 (en) | 2011-01-21 | 2024-09-17 | Teladoc Health, Inc. | Telerobotic system with a dual application screen presentation |
EP2668008A4 (en) | 2011-01-28 | 2018-01-24 | Intouch Technologies, Inc. | Interfacing with a mobile telepresence robot |
US9323250B2 (en) | 2011-01-28 | 2016-04-26 | Intouch Technologies, Inc. | Time-dependent navigation of telepresence robots |
US10769739B2 (en) | 2011-04-25 | 2020-09-08 | Intouch Technologies, Inc. | Systems and methods for management of information among medical providers and facilities |
US20140139616A1 (en) | 2012-01-27 | 2014-05-22 | Intouch Technologies, Inc. | Enhanced Diagnostics for a Telepresence Robot |
US9098611B2 (en) | 2012-11-26 | 2015-08-04 | Intouch Technologies, Inc. | Enhanced video interaction for a user interface of a telepresence network |
US8836751B2 (en) | 2011-11-08 | 2014-09-16 | Intouch Technologies, Inc. | Tele-presence system with a user interface that displays different communication links |
US9418674B2 (en) | 2012-01-17 | 2016-08-16 | GM Global Technology Operations LLC | Method and system for using vehicle sound information to enhance audio prompting |
US9263040B2 (en) | 2012-01-17 | 2016-02-16 | GM Global Technology Operations LLC | Method and system for using sound related vehicle information to enhance speech recognition |
US9934780B2 (en) * | 2012-01-17 | 2018-04-03 | GM Global Technology Operations LLC | Method and system for using sound related vehicle information to enhance spoken dialogue by modifying dialogue's prompt pitch |
US9569594B2 (en) | 2012-03-08 | 2017-02-14 | Nuance Communications, Inc. | Methods and apparatus for generating clinical reports |
US9569593B2 (en) * | 2012-03-08 | 2017-02-14 | Nuance Communications, Inc. | Methods and apparatus for generating clinical reports |
US9251313B2 (en) | 2012-04-11 | 2016-02-02 | Intouch Technologies, Inc. | Systems and methods for visualizing and managing telepresence devices in healthcare networks |
US8902278B2 (en) | 2012-04-11 | 2014-12-02 | Intouch Technologies, Inc. | Systems and methods for visualizing and managing telepresence devices in healthcare networks |
US9361021B2 (en) | 2012-05-22 | 2016-06-07 | Irobot Corporation | Graphical user interfaces including touchpad driving interfaces for telemedicine devices |
WO2013176758A1 (en) | 2012-05-22 | 2013-11-28 | Intouch Technologies, Inc. | Clinical workflows utilizing autonomous and semi-autonomous telemedicine devices |
US9256889B1 (en) * | 2012-09-20 | 2016-02-09 | Amazon Technologies, Inc. | Automatic quote generation |
US10187520B2 (en) * | 2013-04-24 | 2019-01-22 | Samsung Electronics Co., Ltd. | Terminal device and content displaying method thereof, server and controlling method thereof |
US9853860B2 (en) * | 2015-06-29 | 2017-12-26 | International Business Machines Corporation | Application hierarchy specification with real-time functional selection |
US11862302B2 (en) | 2017-04-24 | 2024-01-02 | Teladoc Health, Inc. | Automated transcription and documentation of tele-health encounters |
US10983753B2 (en) | 2017-06-09 | 2021-04-20 | International Business Machines Corporation | Cognitive and interactive sensor based smart home solution |
US10483007B2 (en) | 2017-07-25 | 2019-11-19 | Intouch Technologies, Inc. | Modular telehealth cart with thermal imaging and touch screen user interface |
US11316865B2 (en) | 2017-08-10 | 2022-04-26 | Nuance Communications, Inc. | Ambient cooperative intelligence system and method |
US11605448B2 (en) | 2017-08-10 | 2023-03-14 | Nuance Communications, Inc. | Automated clinical documentation system and method |
US11636944B2 (en) | 2017-08-25 | 2023-04-25 | Teladoc Health, Inc. | Connectivity infrastructure for a telehealth platform |
US10565312B2 (en) * | 2017-10-04 | 2020-02-18 | Motorola Mobility Llc | Context-based action recommendations based on a shopping transaction correlated with a monetary deposit as incoming communications |
US20190163331A1 (en) * | 2017-11-28 | 2019-05-30 | International Business Machines Corporation | Multi-Modal Dialog Broker |
US10878124B1 (en) * | 2017-12-06 | 2020-12-29 | Dataguise, Inc. | Systems and methods for detecting sensitive information using pattern recognition |
US20190272895A1 (en) | 2018-03-05 | 2019-09-05 | Nuance Communications, Inc. | System and method for review of automated clinical documentation |
US11250382B2 (en) | 2018-03-05 | 2022-02-15 | Nuance Communications, Inc. | Automated clinical documentation system and method |
US11515020B2 (en) | 2018-03-05 | 2022-11-29 | Nuance Communications, Inc. | Automated clinical documentation system and method |
US10617299B2 (en) | 2018-04-27 | 2020-04-14 | Intouch Technologies, Inc. | Telehealth cart that supports a removable tablet with seamless audio/video switching |
US11227679B2 (en) | 2019-06-14 | 2022-01-18 | Nuance Communications, Inc. | Ambient clinical intelligence system and method |
US11216480B2 (en) | 2019-06-14 | 2022-01-04 | Nuance Communications, Inc. | System and method for querying data points from graph data structures |
US11043207B2 (en) | 2019-06-14 | 2021-06-22 | Nuance Communications, Inc. | System and method for array data simulation and customized acoustic modeling for ambient ASR |
US11531807B2 (en) | 2019-06-28 | 2022-12-20 | Nuance Communications, Inc. | System and method for customized text macros |
US11670408B2 (en) | 2019-09-30 | 2023-06-06 | Nuance Communications, Inc. | System and method for review of automated clinical documentation |
US11222103B1 (en) | 2020-10-29 | 2022-01-11 | Nuance Communications, Inc. | Ambient cooperative intelligence system and method |
CN114765027A (en) * | 2021-01-15 | 2022-07-19 | 沃尔沃汽车公司 | Control device, vehicle-mounted system and method for vehicle voice control |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5287102A (en) * | 1991-12-20 | 1994-02-15 | International Business Machines Corporation | Method and system for enabling a blind computer user to locate icons in a graphical user interface |
JPH05197355A (en) * | 1992-01-23 | 1993-08-06 | Hitachi Ltd | Acoustic effect defining device |
US6359636B1 (en) * | 1995-07-17 | 2002-03-19 | Gateway, Inc. | Graphical user interface for control of a home entertainment system |
US5767835A (en) * | 1995-09-20 | 1998-06-16 | Microsoft Corporation | Method and system for displaying buttons that transition from an active state to an inactive state |
US6184876B1 (en) * | 1996-07-10 | 2001-02-06 | Intel Corporation | Method and apparatus for audibly communicating comparison information to a user |
US6404442B1 (en) * | 1999-03-25 | 2002-06-11 | International Business Machines Corporation | Image finding enablement with projected audio |
US20010047384A1 (en) * | 1999-11-29 | 2001-11-29 | John Croy | Methods and systems for providing personalized content over a network |
US6513009B1 (en) * | 1999-12-14 | 2003-01-28 | International Business Machines Corporation | Scalable low resource dialog manager |
DE10028447A1 (en) * | 2000-06-14 | 2001-12-20 | Merck Patent Gmbh | Monolithic porous molded article production, used for chromatographic separation of substances, comprises repeated filling of gel mold with monomer sol, polymerization and aging gel |
US7765163B2 (en) * | 2000-12-12 | 2010-07-27 | Sony Corporation | System and method for conducting secure transactions over a network |
CN1154395C (en) * | 2001-02-28 | 2004-06-16 | Tcl王牌电子(深圳)有限公司 | Acoustical unit for digital TV set |
JP4694758B2 (en) * | 2001-08-17 | 2011-06-08 | 株式会社リコー | Apparatus operating device, program, recording medium, and image forming apparatus |
JP5008234B2 (en) * | 2001-08-27 | 2012-08-22 | 任天堂株式会社 | GAME DEVICE, PROGRAM, GAME PROCESSING METHOD, AND GAME SYSTEM |
JP2003131785A (en) * | 2001-10-22 | 2003-05-09 | Toshiba Corp | Interface device, operation control method and program product |
JP2004051074A (en) * | 2001-11-13 | 2004-02-19 | Equos Research Co Ltd | In-vehicle device, data preparation device, and data preparation program |
US6996777B2 (en) | 2001-11-29 | 2006-02-07 | Nokia Corporation | Method and apparatus for presenting auditory icons in a mobile terminal |
US20030142149A1 (en) * | 2002-01-28 | 2003-07-31 | International Business Machines Corporation | Specifying audio output according to window graphical characteristics |
US7742609B2 (en) * | 2002-04-08 | 2010-06-22 | Gibson Guitar Corp. | Live performance audio mixing system with simplified user interface |
US7318198B2 (en) * | 2002-04-30 | 2008-01-08 | Ricoh Company, Ltd. | Apparatus operation device for operating an apparatus without using eyesight |
JP4010864B2 (en) * | 2002-04-30 | 2007-11-21 | 株式会社リコー | Image forming apparatus, program, and recording medium |
DE10249060A1 (en) | 2002-05-14 | 2003-11-27 | Philips Intellectual Property | Dialog control for electrical device |
AU2002950336A0 (en) * | 2002-07-24 | 2002-09-12 | Telstra New Wave Pty Ltd | System and process for developing a voice application |
EP1611504B1 (en) * | 2003-04-07 | 2009-01-14 | Nokia Corporation | Method and device for providing speech-enabled input in an electronic device having a user interface |
US7257769B2 (en) * | 2003-06-05 | 2007-08-14 | Siemens Communications, Inc. | System and method for indicating an annotation for a document |
US20050125235A1 (en) * | 2003-09-11 | 2005-06-09 | Voice Signal Technologies, Inc. | Method and apparatus for using earcons in mobile communication devices |
-
2005
- 2005-03-21 KR KR1020067020053A patent/KR20060131929A/en not_active Application Discontinuation
- 2005-03-21 DE DE602005013938T patent/DE602005013938D1/en active Active
- 2005-03-21 CN CN2005800100935A patent/CN1938757B/en not_active Expired - Fee Related
- 2005-03-21 JP JP2007505684A patent/JP2007531141A/en active Pending
- 2005-03-21 WO PCT/IB2005/050956 patent/WO2005093715A1/en active Application Filing
- 2005-03-21 EP EP05709048A patent/EP1733383B1/en not_active Not-in-force
- 2005-03-21 US US10/599,328 patent/US20080263451A1/en not_active Abandoned
- 2005-03-21 AT AT05709048T patent/ATE429010T1/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
CN1938757B (en) | 2010-06-23 |
ATE429010T1 (en) | 2009-05-15 |
DE602005013938D1 (en) | 2009-05-28 |
US20080263451A1 (en) | 2008-10-23 |
EP1733383A1 (en) | 2006-12-20 |
JP2007531141A (en) | 2007-11-01 |
WO2005093715A1 (en) | 2005-10-06 |
KR20060131929A (en) | 2006-12-20 |
CN1938757A (en) | 2007-03-28 |
Similar Documents
Publication | Title |
---|---|
EP1733383B1 (en) | A method for driving multiple applications and a dialog management system |
US5715370A (en) | Method and apparatus for extracting text from a structured data file and converting the extracted text to speech |
CN101557432B (en) | Mobile terminal and menu control method thereof |
JP4558074B2 (en) | Telephone communication terminal |
US20070250190A1 (en) | System and method for controlling a remote environmental control unit |
US20190333513A1 (en) | Voice interaction method, device and computer readable storage medium |
CN103959751A (en) | Automatically adapting user interfaces for hands-free interaction |
US10460719B1 (en) | User feedback for speech interactions |
KR102343084B1 (en) | Electronic device and method for executing function of electronic device |
CN104969289A (en) | Voice trigger for a digital assistant |
CN106796497A (en) | Dynamic threshold for monitoring speech trigger all the time |
CN105144133A (en) | Context-sensitive handling of interruptions |
WO2004083981A2 (en) | System and methods for storing and presenting personal information |
US11568885B1 (en) | Message and user profile indications in speech-based systems |
JP2010541481A (en) | Active in-use search via mobile device |
WO2019152115A1 (en) | Methods to present the context of virtual assistant conversation |
WO2020105302A1 (en) | Response generation device, response generation method, and response generation program |
JPWO2018100743A1 (en) | Control device and equipment control system |
WO2012032714A1 (en) | User device, server, and operating conditions setting system |
JP2016130800A (en) | System, server, electronic apparatus, method for controlling server, and program |
CN111710339A (en) | Voice recognition interaction system and method based on data visualization display technology |
US10002611B1 (en) | Asynchronous audio messaging |
US7102485B2 (en) | Motion activated communication device |
WO2020054409A1 (en) | Acoustic event recognition device, method, and program |
CN113314115B (en) | Voice processing method of terminal equipment, terminal equipment and readable storage medium |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | ORIGINAL CODE: 0009012 |
17P | Request for examination filed | Effective date: 2006-10-30 |
AK | Designated contracting states | Kind code of ref document: A1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
DAX | Request for extension of the European patent (deleted) | |
17Q | First examination report despatched | Effective date: 2007-11-02 |
GRAP | Despatch of communication of intention to grant a patent | ORIGINAL CODE: EPIDOSNIGR1 |
GRAS | Grant fee paid | ORIGINAL CODE: EPIDOSNIGR3 |
GRAA | (Expected) grant | ORIGINAL CODE: 0009210 |
AK | Designated contracting states | Kind code of ref document: B1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
REG | Reference to a national code | GB: FG4D; CH: EP |
REG | Reference to a national code | IE: FG4D |
REF | Corresponds to | Ref document number: 602005013938; country: DE; date: 2009-05-28; kind code: P |
NLV1 | NL: lapsed or annulled due to failure to fulfill the requirements of Art. 29p and 29m of the Patents Act | |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit. FI, AT, LT: effective 2009-04-15; ES: 2009-07-26; PT: 2009-09-15 |
PG25 | Lapsed in a contracting state | Failure to submit a translation or to pay the fee. SE: effective 2009-07-15; PL, NL, SI: 2009-04-15; IS: 2009-08-15 |
PG25 | Lapsed in a contracting state | Failure to submit a translation or to pay the fee. RO, DK, CZ, EE: effective 2009-04-15 |
PLBE | No opposition filed within time limit | ORIGINAL CODE: 0009261 |
STAA | Information on the status of an EP patent application or granted EP patent | STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
PG25 | Lapsed in a contracting state | Failure to submit a translation or to pay the fee. BE, SK: effective 2009-04-15 |
26N | No opposition filed | Effective date: 2010-01-18 |
PG25 | Lapsed in a contracting state | Failure to submit a translation or to pay the fee. BG: effective 2009-07-15 |
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | GB: payment date 2010-03-30, year of fee payment 6 |
PGFP | Annual fee paid to national office | FR: payment date 2010-04-19, year of fee payment 6 |
PGFP | Annual fee paid to national office | DE: payment date 2010-05-31, year of fee payment 6 |
PG25 | Lapsed in a contracting state | GR: failure to submit a translation or to pay the fee, effective 2009-07-16; MC: non-payment of due fees, effective 2010-03-31 |
REG | Reference to a national code | CH: PL |
REG | Reference to a national code | IE: MM4A |
PG25 | Lapsed in a contracting state | IE: non-payment of due fees, effective 2010-03-21 |
PG25 | Lapsed in a contracting state | LI, CH: non-payment of due fees, effective 2010-03-31 |
PG25 | Lapsed in a contracting state | IT: failure to submit a translation or to pay the fee, effective 2009-04-15 |
GBPC | GB: European patent ceased through non-payment of renewal fee | Effective date: 2011-03-21 |
REG | Reference to a national code | FR: ST, effective 2011-11-30 |
PG25 | Lapsed in a contracting state | DE: non-payment of due fees, effective 2011-10-01; FR: non-payment of due fees, effective 2011-03-31 |
PG25 | Lapsed in a contracting state | GB: non-payment of due fees, effective 2011-03-21 |
REG | Reference to a national code | DE: R119, ref document number 602005013938, effective 2011-10-01 |
PG25 | Lapsed in a contracting state | CY: failure to submit a translation or to pay the fee, effective 2009-04-15 |
PG25 | Lapsed in a contracting state | HU: failure to submit a translation or to pay the fee, effective 2009-10-16; LU: non-payment of due fees, effective 2010-03-21 |
PG25 | Lapsed in a contracting state | TR: failure to submit a translation or to pay the fee, effective 2009-04-15 |