EP3791386A1 - Configuration de dialogue efficace - Google Patents

Configuration de dialogue efficace

Info

Publication number
EP3791386A1
Authority
EP
European Patent Office
Prior art keywords
dialog
protocol
dialogue
keyword
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19726908.7A
Other languages
German (de)
English (en)
Inventor
Christoph Neumann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gk Easydialog
Original Assignee
Gk Easydialog
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gk Easydialog filed Critical Gk Easydialog
Publication of EP3791386A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L2015/088 Word spotting
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Definitions

  • the present invention is directed to a method for efficient dialog design and interpretation in a computerized automated dialogue system.
  • the present invention provides, among other things, the advantage that the workload when creating dialogs is reduced, and thus also the error rate. Moreover, it is possible to take the context of partial dialogues into account and to create a dialogue dynamically depending on this context. Furthermore, the proposed method can itself initiate a dialogue with a human user.
  • the present invention further relates to a correspondingly configured system arrangement and to a computer program product with control commands which execute the method or operate the system arrangement.
  • US 2015/0134337 A1 shows a conversation-based search system with a conversation scenario.
  • DE 10 2012 019 178 A1 shows a computer program product for interpreting a user input in order to execute a task on a computing device having at least one processor.
  • DE 60 030 920 T2 discloses methods of collecting data associated with a voice of a user of a speech system in conjunction with a data warehouse.
  • NLP Natural Language Processing
  • dialog systems are known in the prior art in which a machine system interacts with a human user and receives voice commands from the human user. Responses to these commands are then initiated and the user receives a corresponding response.
  • Such systems are known, for example, under the registered trademark "Alexa” or "Siri”.
  • in the infotainment industry, such systems are used, for example, in the automotive sector, where the user can control a navigation system or other functions by voice.
  • the prior art also shows devices which, for example, serve to convert sound to text and back again.
  • Such systems are generally referred to as “speech recognition" systems. This can be done in such a way that the human user speaks a sentence and the recorded acoustic signals are then assigned to a text by means of pattern matching. It is also known to provide a text which is then converted into natural acoustic speech. This technique is commonly referred to as "text-to-speech".
  • dialog systems are implemented in such a way that the user gives an acoustic or textual input, which is converted into a text. A corresponding source code is then invoked, which controls the further dialogue. In such a dialog control, a dialog protocol is stored which specifies which answer is given to which question. In addition, it can be specified which action is executed on which command. For this purpose, a separate source code is provided for each dialog, which describes the dialogue history in hard-coded form.
  • the proposed method should be constructed so dynamically that dialogue modules can be selected and used at runtime.
  • a method for efficient dialog design and interpretation in a computer-aided, automated dialog system is proposed, comprising storing a plurality of dialog protocols for at least one keyword each, the dialog protocols each specifying a dynamic sequence of a dialog, selecting a dialog protocol as a function of at least one keyword provided at runtime, assigning text modules to dialog units of the dialog protocol, and executing the dialog protocol at runtime.
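As a rough illustration only (not the patent's own implementation), the four claimed steps could be sketched in Python as follows; all names (DialogProtocol, store_protocol, the slot identifiers) are invented for this sketch.

```python
from dataclasses import dataclass


@dataclass
class DialogProtocol:
    """An abstract, dynamic dialog sequence, stored without concrete text."""
    name: str
    units: list  # e.g. [("system_asks", "ask_location"), ("user_answers", None)]


# Step 1: store a plurality of dialog protocols for at least one keyword each.
PROTOCOLS = {}


def store_protocol(keywords, protocol):
    for kw in keywords:
        PROTOCOLS[kw] = protocol


# Step 2: select a dialog protocol depending on a keyword provided at runtime.
def select_protocol(keyword):
    return PROTOCOLS[keyword]


# Step 3: assign concrete text modules to the dialog units of the protocol.
def assign_text_modules(protocol, text_modules):
    return [(action, text_modules.get(slot, "")) for action, slot in protocol.units]


# Step 4: execute the dialog protocol at runtime (console stand-in for speech I/O).
def execute(bound_units):
    for action, text in bound_units:
        if action == "system_asks":
            print(text)        # system turn
        elif action == "user_answers":
            input()            # wait for the user's turn
```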
  • the method steps can be carried out iteratively and / or in a different order.
  • the individual steps can have sub-steps.
  • the storage of a plurality of dialog protocols can be iterative, and new dialog protocols can always be entered.
  • the assignment of text modules to dialogue units of the dialog protocol can be carried out at runtime, i.e. during the dialog, or even beforehand. Executing the dialog protocol at runtime involves several sub-steps, such as those provided for the speech recognition process or the natural language understanding process.
  • the proposed method comprises known techniques, which have not been mentioned here, since they can be implemented on the basis of conventional components.
  • the user typically inputs an acoustic question or answer and then also receives an acoustic answer or question.
  • the acoustic signals, i.e. the speech, are converted into text.
  • an output text is also converted back to speech.
  • a dialogue system is created which communicatively interacts with the user.
  • the invention is also aimed at textual dialogue systems, eg chatbots.
  • the proposed method is an efficient method since, according to the invention, the dialog protocols are decoupled from the individual text modules. In this way, dialogs do not have to be hard-coded and thus created individually.
  • this overcomes the disadvantage of the prior art that, according to conventional methods, the text modules are already embedded in the source code during dialog design.
  • Keywords can also be generated by the dialog protocol depending on the context (in which case no external control command is required). For example, if a question has been misunderstood twice, the dialog is automatically forwarded to a replacement question.
  • the proposed method is computer-aided and automated in such a way that it can interact automatically with a user, so the proposed system can also design dialogues itself based on the dialogue protocols. Consequently, all procedures are computer-based, whereby only the user provides an acoustic or textual input.
  • dialog protocols are each stored for at least one keyword.
  • dialogue protocols are created which, for example, provide activities as dialog units that are linked with one another in terms of time or logic.
  • the dialog protocols thus do not specify any conventional rigid sequences, but rather specify when which actor performs an activity without specifying which text modules are used. This overcomes the disadvantage of the prior art that a linking of dialog logic and text modules takes place.
  • the dialog protocols specify, for example, that in the case of a specific user input, a request is made by the dialog system.
  • dialog protocols may specify that if the user suggests a topic, corresponding keywords are extracted and then the dialog system has to ask a question. If it is detected that the user himself is asking a question, the dialog protocol provides that the system looks for and provides a corresponding answer.
  • a dialog protocol corresponds to a dynamic process of a conversation or dialogue between the dialog system and the user.
  • dialog protocols are provided in advance, i.e. before the runtime, and are therefore already available when the dialog is started.
  • the dialog protocols can be described as abstract because they contain no concrete text modules, but rather specify which dynamic flow the dialogue should take.
  • the individual dialog protocols are assigned keywords, i.e. one or more keywords.
  • a keyword may be referred to as a topic with respect to which the dialogue is conducted. Thus, conventional methods can be used which select a particularly prominent noun from a sentence.
  • the user may ask about the weather, and the proposed method is then able to extract, by conventional means, that the user is actually asking about the weather.
  • the pitch of the input can be taken into account in order to recognize whether it is a statement or a question. If it is detected that there is a question concerning the weather, the corresponding keyword is "weather", and a dialogue protocol is selected which provides that the dialogue system asks a question and that an expected response from the user then answers the question posed at the beginning.
  • the proposed method can provide for a question to be posed by the dialogue system which refers to a specific location. The text module "Where would you like to know the current weather?" is then assigned. The dialog protocol may then provide that the user has to respond. If a corresponding answer is provided, the location can be extracted from it, and provision can in turn be made for the dialogue system to provide a response. At this point in time, the dialogue system has the keyword "weather" and, for example, the location "Munich" provided by the user.
  • the system can perform a database query using a provided interface, and then the dialog protocol can provide that the dialog system must respond and terminate the procedure after the response.
  • the dialog system can carry out the weather forecast, in this case reading out the weather parameters from a database and providing them. Since the request should typically then be satisfactorily answered, the process can terminate.
  • a partial dialog protocol can provide a branch to another partial dialog protocol; for example, a dialog protocol for the keyword "excursion destination" can then be selected. If the dialogue system has thus answered the request for the weather, the dialogue protocol can branch in such a way that a partial dialogue protocol is selected which asks the user whether he wants to make an excursion in the fine weather.
  • the process described above then repeats itself, and after a corresponding database query the user can be offered suitable activities that fit both the topic "weather" and the topic "Munich". In doing so, the system infers the excursion location "Munich" automatically from the previous dialogue process, without the user having to say it again. In this way, a dynamic dialog flow is created and the dialog protocol is decoupled from the actual text modules. A sketch of this branching follows below.
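A minimal sketch of the branching weather dialog described above. The helper callbacks (ask, answer, query_weather, query_excursions) are invented for illustration; the patent does not prescribe this code, only the decoupled, branching flow.

```python
# One partial dialog protocol branching into another, sharing a context dict
# so the location from the first partial dialog is reused automatically.

def weather_protocol(context, ask, answer, query_weather, query_excursions):
    if "location" not in context:
        ask("Where would you like to know the current weather?")
        context["location"] = answer()            # e.g. "Munich"
    report = query_weather(context["location"])   # control command: database query
    ask(report)
    # branch into the partial dialog protocol for the keyword "excursion destination"
    excursion_protocol(context, ask, answer, query_excursions)


def excursion_protocol(context, ask, answer, query_excursions):
    ask("The weather is fine. Would you like to make an excursion?")
    if answer().strip().lower() in ("yes", "ja"):
        # the location is inferred from the previous partial dialog,
        # without the user having to say it again
        ask(query_excursions(context["location"]))
```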
  • text modules are also assigned to dialog units of the dialogue protocol, which can be done either at runtime or in advance.
  • the runtime is always the execution time of the dialog itself.
  • the disadvantage is overcome that the provided source code provides both the text modules and the individual activities, i.e. the dialog units.
  • the dialogue units of the dialogue protocol can be defined as abstract building blocks, which provide that a dialogue system must ask a question. How the question is to be concretely formulated is indicated by the corresponding text module.
  • This provides a further advantage over the prior art, namely that the proposed method is particularly language-independent: generic dialog protocols can be created and used with text modules in any language. A dialogue protocol can thus, for example, be used with text modules of the German language just as with those of any other language; a sketch follows below.
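A minimal sketch of this language decoupling, with illustrative slot names and wordings: the protocol references only the abstract slot, and the concrete wording is resolved per language.

```python
# Several language-specific text module sets for one generic protocol slot
# (slot name "ask_location" and wordings are illustrative).

TEXT_MODULES = {
    "de": {"ask_location": "Für welchen Ort möchten Sie das aktuelle Wetter wissen?"},
    "en": {"ask_location": "Where would you like to know the current weather?"},
}


def render(slot, language):
    # The dialog protocol only names the abstract slot; the concrete
    # text module is looked up per language at runtime.
    return TEXT_MODULES[language][slot]


print(render("ask_location", "en"))
print(render("ask_location", "de"))
```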
  • an existing dialog protocol including text modules can be ported unchanged into a new development environment (without adapting the source code or even rewriting it in another programming language), as long as a DCS (Dialog Control Script) is available in the development environment.
  • DCS Dialog Control Script
  • since the dialog protocols have now been selected and a single dialog protocol is available, including the text modules, the dialog can be carried out, i.e. the dialog protocol can be executed. The user is thus guided through the dynamic dialogue by means of the dialogue protocol and always receives corresponding information or questions if the dialog protocol so provides. Since the dialog protocol can also provide partial protocols, it is always possible to branch to another partial dialog protocol if the user gives an appropriate input. If, for example, the user has not been sufficiently informed, another dialog can be entered which specifies how the user can otherwise be helped. In turn, the corresponding text modules are selected and presented to the user. According to one aspect of the present invention, the dialog protocol specifies a presence of dialog units and a dialog history that places the dialog units in temporal succession.
  • a dialogue history can be provided that is independent of text modules per se.
  • the dialog units only specify when the proposed method should act and when the user should act. In addition, it can be specified which activity is specifically intended as a dialog unit.
  • a dialogue unit may be an input by the user or by the dialogue system.
  • a dialog unit can provide control commands that are to be processed at a specific point within the dialog.
  • two dialog units can be cascaded one after the other, both of which must be operated by the dialog system: first a database query is made in the first dialog unit, and the second dialog unit specifies that the read information must be provided to the user.
  • the temporal sequence can be a flowchart, whereby a logical sequence can generally also be specified as an alternative to the temporal sequence. It can thus be provided that the system always has to wait for a user input and then respond to it. It is also possible that the dialogue units provide that the dialogue system initiates the dialogue and the user then replies.
  • the dialog units are stored as alphanumeric parameters. This has the advantage that a memory-efficient and easily human-readable form can be selected in which the dialog units are stored. Thus, the individual dialog units can be outsourced to a separate file and provided to the proposed method. According to a further aspect of the present invention, the dialog units are stored in tabular form. This has the advantage that the activities or actors can be entered in the columns and a continuous index can be entered in the rows, to which a partial dialog protocol can refer. A dialog can thus be created dynamically in such a way that the respective index is addressed, and at the end of the partial dialog protocol another index is dynamically referenced, which is then called up so that another sub-dialog can be executed. The individual rows thus specify a partial dialogue protocol, which can be composed dynamically; see the sketch below.
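A minimal sketch of such a tabular representation, with invented column names and contents: each row index identifies a dialog unit, and a "next" column references the row to jump to, so partial dialog protocols can be composed dynamically.

```python
import csv
from io import StringIO

# Rows = indexed dialog units; columns = actor and activity. Index 0 is used
# here as a termination marker. All values are illustrative.
TABLE = """index,actor,activity,payload,next
1,system,ask,ask_location,2
2,user,answer,,3
3,system,control,query_weather,4
4,system,say,weather_report,0
"""


def load_units(text):
    return {int(row["index"]): row for row in csv.DictReader(StringIO(text))}


def run(units, start=1):
    i = start
    while i != 0:                       # index 0 terminates the dialog
        unit = units[i]
        print(unit["index"], unit["actor"], unit["activity"], unit["payload"])
        i = int(unit["next"])           # jump to the referenced row


run(load_units(TABLE))
```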
  • the keyword is provided by means of a user input and / or by a control command.
  • This has the advantage that either the user can make an acoustic or textual input, from which the keyword is extracted, or control commands are executed which then select the appropriate dialog protocol.
  • a macro can be created, which in turn comprises a plurality of control commands. These control commands are set up to provide the keyword.
  • a combination of user input and control command can also take place in such a way that the user selects a first keyword and the control command for this purpose models a context, whereupon the corresponding dialog protocol is selected.
  • the keyword is provided using natural language understanding technology.
  • This has the advantage that already installed or implemented components can be used for so-called "natural language understanding".
  • it is thus possible for a user to make a user input acoustically, for this input to be converted into text, and for one or more keywords then to be extracted from it.
  • Another inventive aspect of the NLU is a mapping of a set of different matchable keywords onto one keyword: for example, house, building and skyscraper are all mapped to "house", or the ages 43 and 49 are both mapped to the age group 40-49. This includes in particular "false" speech recognition results: in English, "cups" is pronounced in many dialects almost like "cats", so "cats" can, for example, be used as a trigger for a system in which "cups" is to be recognized. A sketch of such a mapping follows below.
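A minimal sketch of such a normalization, using the examples above; the mapping table and the age-group rule are illustrative, not taken from the patent.

```python
# Many matchable surface forms (including a known misrecognition) are
# normalized onto one keyword.
KEYWORD_MAP = {
    "house": "house",
    "building": "house",
    "skyscraper": "house",
    "cats": "cups",   # deliberate trigger for a frequent misrecognition of "cups"
}


def normalize(token):
    return KEYWORD_MAP.get(token, token)


def age_group(age):
    lower = (age // 10) * 10
    return f"{lower}-{lower + 9}"     # 43 and 49 both map to "40-49"


assert normalize("skyscraper") == "house"
assert normalize("cats") == "cups"
assert age_group(43) == age_group(49) == "40-49"
```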
  • a dialog unit specifies an activity of an actor within the dialog protocol. This has the advantage that the dialog unit can specify whether a question, an answer or a control command is now to be provided.
  • Corresponding actors are either the user with whom the dialogue is conducted or the dialogue system. In addition, other actors can be involved, such as an external database, which is to be addressed by means of a control command.
  • the dynamic sequence of the dialogue describes partial dialogs which are compiled on the basis of branches.
  • This has the advantage that the dialog can be created dynamically at runtime and then corresponding sub-dialog protocols can be used.
  • the user input can be waited for and, depending on the user input, another sub-dialog can again be selected.
  • selecting a branch depending on a user input and/or a control command has the advantage that not only the acoustic or textual user inputs can be used; on the contrary, a control command, such as a database query, can also be initiated by means of a user input.
  • each dialog unit specifies an activity of the dialog system, an activity of a user, the execution of control commands and/or a termination of the method (see the sketch after this list).
  • This has the advantage that the individual activities can be specified, which are not only directed to the dialog system itself or the human user, but rather it can also be proposed which control commands are executed to answer the user input.
  • it can be specified at what point in time the dialog protocol stipulates that the method terminates because the user request has been successfully answered.
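Purely as an illustration, the activity types named above could be modeled as an enumeration; the names are invented, not taken from the patent.

```python
from enum import Enum, auto


class Activity(Enum):
    SYSTEM_TURN = auto()   # the dialog system asks or answers
    USER_TURN = auto()     # an input by the user is expected
    CONTROL = auto()       # control commands, e.g. a database query
    TERMINATE = auto()     # the method ends, the request having been answered
```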
  • executing the dialog protocol at runtime includes operating an audio conversation between the dialog system and a user.
  • This has the advantage that conventional components can be used which convert an acoustic input into text and then convert a textual output back into acoustic information.
  • the method is carried out on a mobile terminal, in a vehicle or on a stationary computing unit.
  • a stationary computing unit may be, for example, a conventional personal computer of a user.
  • when used in a vehicle, a user interface is typically employed which guides the driver through a dialogue.
  • interfaces to existing Internet based services are provided. This has the advantage that software components already implemented can be reused and Internet services can be queried, which provide information for answering the user requests or provide services such as shopping facilities.
  • a system arrangement for efficient dialog design and interpretation in a computer-aided dialog system is proposed, comprising a memory unit configured to store a plurality of dialog protocols for at least one keyword each, the dialog protocols each specifying a dynamic sequence of a dialogue, a computing unit set up for selecting a dialog protocol as a function of at least one keyword provided at runtime, a further computing unit configured for assigning text modules to dialog units of the dialog protocol, and a dialog unit configured for executing the dialog protocol at runtime.
  • the object is also achieved by a computer program product with control commands which implement the proposed method or operate the proposed system arrangement.
  • the system arrangement provides structural features which functionally correspond to the individual process steps.
  • the system arrangement is set up to carry out the proposed method.
  • the proposed method is again set up to operate the system arrangement.
  • the method steps can also be formed as structural features of the system arrangement.
  • the system arrangement includes devices that are actually set up as already stated, and does not merely cover generic components for suitability. Further advantageous embodiments are explained in more detail with reference to the attached figures. The figures show:
  • FIG. 2 shows a system arrangement for efficient dialog design and interpretation according to one aspect of the present invention
  • FIG. 3 shows another example of a system arrangement for efficient dialog design and interpretation according to another aspect of the present invention
  • FIG. 5 is a schematic flow diagram of the proposed method for efficient dialog design and interpretation according to another aspect of the present invention.
  • Figure 1 shows on the left side a human user providing an input to a speech recognition system.
  • a voice recognition system is a Speech Recognition Service (SRS) system.
  • SRS Speech Recognition Service
  • the individual dialogs are implemented separately, and a flow logic is specified here which contains the corresponding text modules. This is particularly disadvantageous because the dialog protocols must be linked to the text modules. Hereby, no separate maintenance can be done, and there is an increased technical effort.
  • FIG. 2 shows an adaptation of the system according to FIG. 1 and provides a further component, namely a selection unit, for example a "Dialog Control Script" unit DCS.
  • This unit precedes the text modules on the right-hand side; the corresponding dialog can then be selected at runtime, and the text modules on the right-hand side merely have to be integrated into the system.
  • the disadvantage of the arrangement shown in FIG. 1 is overcome in that the dialog protocols are no longer stored together with text modules in the three schematic units on the right-hand side; rather, the dialog protocols are stored in the upstream DCS unit, and only the individual text modules need to be read from the right-hand side.
  • the individual dialog protocols are not stored in the DCS itself but are called/read dynamically at runtime by the DCS.
  • the additional DCS component creates specific dialog profiles and then selects further parameters from the right-hand side.
  • PVD stands for Parameter Voice Dialog, for example PVD1, PVD2 and PVD3.
  • FIG. 3 shows a corresponding system according to the invention, wherein on the left-hand side at the top a user request, a so-called "User Request" UR, is transmitted to an interpretation component, referred to herein as the "Natural Language Understanding" component NLU. An answer or a question is then generated in a further component, referred to in the present case as "System Answer / Prompt" SA/P. The method can then terminate, which is marked on the right side by the corresponding outward arrow, or the textual response is again transmitted to the so-called "Voice Service" VS, which in turn converts the textual output into speech and provides it to the user, who can then make a user input again. The reverse direction is also provided in this figure.
  • the use case "system asks, user answers" (voice survey) is advantageous.
  • the use of the voice service is not mandatory
  • FIG. 4 shows a dialog protocol including dialog units which specify the dialog procedure.
  • a start dialog unit is shown on the top left, which then branches to the first dialog unit.
  • a user input can be obtained and, depending on the response provided, either the dialog unit 5 or 21 is referenced.
  • the corresponding dialog units can be stored in tabular form, whereby the numerical index can be a row number.
  • each box shown corresponds to a dialog unit assigned to an actor.
  • the dialog unit 11 is provided by a user, and in a subsequent dialog unit, the dialogue is terminated by the proposed dialog system.
  • the two arrows on the left illustrate that even more dialog units can be provided and that the example according to FIG. 4 is merely a section of a larger dialogue protocol. More than two branches are possible, as are diagonal jumps into another dialog branch, so a branch from 11 to 2 is also possible. All this contributes to a good user experience.
  • FIG. 5 shows a schematic flow diagram of a method for efficient dialog design and interpretation in a computer-aided, automated dialog system, comprising storing 100 a plurality of dialog protocols for at least one keyword each, the dialog protocols each specifying a dynamic sequence of a dialog, selecting 101 a dialog protocol depending on at least one keyword provided at runtime, an assignment 103 of text modules to dialogue units of the dialog protocol, and an execution 104 of the dialog protocol at runtime.
  • it is thus proposed that the components of a dialog be parameterized in a voice dialog system rather than hard-coded, and that dialog units be unambiguously classified by indexes. This makes it possible for the source code or the script to remain unchanged, new dialogues can be represented as parameter tables, and dialogs can also be ported from one voice dialog system development environment to another or made accessible via interfaces.
  • the software does not have to be changed and multi-part or recursive dialogs can be carried out, in particular allowing the machine to ask and the user to answer.
  • the control program itself must, according to one aspect of the present invention, be ported once per new development environment, but the dialog protocols (DP) need not be.
  • the machine first asks, and then the user answers, and finally the response is permanently stored (via a control command).
  • This allows the simple implementation of applications using this dialogue structure, in particular voice surveys and voice data collections.
  • Dialog history and user input can also be stored permanently (on disk or in a database), which is beneficial for voice surveys / data collections.
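A minimal sketch, assuming a hypothetical SQLite schema, of permanently storing the dialog turns of a voice survey as described above; table and column names are invented.

```python
import sqlite3

con = sqlite3.connect("survey.db")
con.execute("""CREATE TABLE IF NOT EXISTS turns
               (session TEXT, idx INTEGER, actor TEXT, text TEXT)""")


def store_turn(session, idx, actor, text):
    # Permanently store one turn of the dialog history.
    con.execute("INSERT INTO turns VALUES (?, ?, ?, ?)",
                (session, idx, actor, text))
    con.commit()


# Machine asks first, the user answers, and the response is stored permanently.
store_turn("s1", 1, "system", "How satisfied are you with the service?")
store_turn("s1", 2, "user", "Very satisfied")
```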

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)
  • Communication Control (AREA)
  • Machine Translation (AREA)

Abstract

The present invention relates to a method for efficient dialog design and interpretation in a computer-aided, automated dialogue system. The present invention offers, among other things, the advantages that the workload when creating dialogues is reduced, and thus also the error rate. Furthermore, it is possible to take into account a context of partial dialogues and to create a dialogue dynamically as a function of this context. In addition, the method according to the invention can itself initiate a dialogue with a human user. The present invention further relates to a correspondingly configured system arrangement and to a computer program product comprising control commands which implement the method or operate the system arrangement.
EP19726908.7A 2018-05-29 2019-05-26 Configuration de dialogue efficace Withdrawn EP3791386A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP18000483.0A EP3576084B1 (fr) 2018-05-29 2018-05-29 Conception du dialogue efficace
PCT/EP2019/025156 WO2019228667A1 (fr) 2018-05-29 2019-05-26 Configuration de dialogue efficace

Publications (1)

Publication Number Publication Date
EP3791386A1 true EP3791386A1 (fr) 2021-03-17

Family

ID=62495532

Family Applications (2)

Application Number Title Priority Date Filing Date
EP18000483.0A Active EP3576084B1 (fr) 2018-05-29 2018-05-29 Conception du dialogue efficace
EP19726908.7A Withdrawn EP3791386A1 (fr) 2018-05-29 2019-05-26 Configuration de dialogue efficace

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP18000483.0A Active EP3576084B1 (fr) 2018-05-29 2018-05-29 Conception du dialogue efficace

Country Status (5)

Country Link
US (1) US11488600B2 (fr)
EP (2) EP3576084B1 (fr)
JP (1) JP7448240B2 (fr)
CN (1) CN112204656A (fr)
WO (1) WO2019228667A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3576084B1 (fr) 2018-05-29 2020-09-30 Christoph Neumann Conception du dialogue efficace

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6665644B1 (en) 1999-08-10 2003-12-16 International Business Machines Corporation Conversational data mining
GB2372864B (en) * 2001-02-28 2005-09-07 Vox Generation Ltd Spoken language interface
US7606714B2 (en) * 2003-02-11 2009-10-20 Microsoft Corporation Natural language classification within an automated response system
JP4849662B2 (ja) * 2005-10-21 2012-01-11 株式会社ユニバーサルエンターテインメント 会話制御装置
JP4849663B2 (ja) * 2005-10-21 2012-01-11 株式会社ユニバーサルエンターテインメント 会話制御装置
JP5149737B2 (ja) * 2008-08-20 2013-02-20 株式会社ユニバーサルエンターテインメント 自動会話システム、並びに会話シナリオ編集装置
US8126715B2 (en) * 2008-11-26 2012-02-28 Microsoft Corporation Facilitating multimodal interaction with grammar-based speech applications
US9378202B2 (en) * 2010-03-26 2016-06-28 Virtuoz Sa Semantic clustering
EP2575128A3 (fr) 2011-09-30 2013-08-14 Apple Inc. Utilisation d'information contextuelle pour faciliter le traitement des commandes pour un assistant virtuel
FR2989209B1 (fr) * 2012-04-04 2015-01-23 Aldebaran Robotics Robot apte a integrer des dialogues naturels avec un utilisateur dans ses comportements, procedes de programmation et d'utilisation dudit robot
US20140180695A1 (en) * 2012-12-25 2014-06-26 Microsoft Corporation Generation of conversation to achieve a goal
US9466294B1 (en) * 2013-05-21 2016-10-11 Amazon Technologies, Inc. Dialog management system
JP6025785B2 (ja) * 2013-07-08 2016-11-16 インタラクションズ リミテッド ライアビリティ カンパニー 自然言語理解のための自動音声認識プロキシシステム
KR101491843B1 (ko) * 2013-11-13 2015-02-11 네이버 주식회사 대화 기반 검색 도우미 시스템 및 그 방법
RU2014111971A (ru) * 2014-03-28 2015-10-10 Юрий Михайлович Буров Способ и система голосового интерфейса
EP2933070A1 (fr) * 2014-04-17 2015-10-21 Aldebaran Robotics Procédés et systèmes de manipulation d'un dialogue avec un robot
US10726831B2 (en) * 2014-05-20 2020-07-28 Amazon Technologies, Inc. Context interpretation in natural language processing using previous dialog acts
US20150370787A1 (en) * 2014-06-18 2015-12-24 Microsoft Corporation Session Context Modeling For Conversational Understanding Systems
US9830249B2 (en) * 2015-03-04 2017-11-28 International Business Machines Corporation Preemptive trouble shooting using dialog manager
US10659403B2 (en) * 2015-03-25 2020-05-19 Pypestream, Inc. Systems and methods for navigating nodes in channel based chatbots using natural language understanding
US10418032B1 (en) * 2015-04-10 2019-09-17 Soundhound, Inc. System and methods for a virtual assistant to manage and use context in a natural language dialog
JP6601069B2 (ja) * 2015-09-01 2019-11-06 カシオ計算機株式会社 対話制御装置、対話制御方法及びプログラム
JP2017049471A (ja) * 2015-09-03 2017-03-09 カシオ計算機株式会社 対話制御装置、対話制御方法及びプログラム
RU2018113724A (ru) * 2015-10-21 2019-11-21 ГУГЛ ЭлЭлСи Сбор параметров и автоматическая генерация диалога в диалоговых системах
US10394963B2 (en) 2015-10-22 2019-08-27 International Business Machines Corporation Natural language processor for providing natural language signals in a natural language output
US10276160B2 (en) * 2015-11-12 2019-04-30 Semantic Machines, Inc. Automated assistant for user interaction via speech
CN106095833B (zh) * 2016-06-01 2019-04-16 竹间智能科技(上海)有限公司 人机对话内容处理方法
US10606952B2 (en) * 2016-06-24 2020-03-31 Elemental Cognition Llc Architecture and processes for computer learning and understanding
JP6819988B2 (ja) * 2016-07-28 2021-01-27 国立研究開発法人情報通信研究機構 音声対話装置、サーバ装置、音声対話方法、音声処理方法およびプログラム
US20180052913A1 (en) * 2016-08-16 2018-02-22 Ebay Inc. Selecting next user prompt types in an intelligent online personal assistant multi-turn dialog
CN106528522A (zh) * 2016-08-26 2017-03-22 南京威卡尔软件有限公司 场景化的语义理解与对话生成方法及系统
US20180131642A1 (en) * 2016-11-04 2018-05-10 Microsoft Technology Licensing, Llc Conversation runtime
US20180261223A1 (en) * 2017-03-13 2018-09-13 Amazon Technologies, Inc. Dialog management and item fulfillment using voice assistant system
US10547729B2 (en) * 2017-03-27 2020-01-28 Samsung Electronics Co., Ltd. Electronic device and method of executing function of electronic device
US10666581B2 (en) * 2017-04-26 2020-05-26 Google Llc Instantiation of dialog process at a particular child node state
AU2018273197B2 (en) * 2017-05-22 2021-08-12 Genesys Cloud Services Holdings II, LLC System and method for dynamic dialog control for contact center systems
US20200160187A1 (en) * 2017-06-09 2020-05-21 E & K Escott Holdings Pty Ltd Improvements to artificially intelligent agents
US10902533B2 (en) * 2017-06-12 2021-01-26 Microsoft Technology Licensing, Llc Dynamic event processing
US20190026346A1 (en) * 2017-07-24 2019-01-24 International Business Machines Corporation Mining procedure dialogs from source content
CN107657017B (zh) * 2017-09-26 2020-11-13 百度在线网络技术(北京)有限公司 用于提供语音服务的方法和装置
US10425247B2 (en) * 2017-12-12 2019-09-24 Rovi Guides, Inc. Systems and methods for modifying playback of a media asset in response to a verbal command unrelated to playback of the media asset
US10498898B2 (en) * 2017-12-13 2019-12-03 Genesys Telecommunications Laboratories, Inc. Systems and methods for chatbot generation
US10839160B2 (en) * 2018-01-19 2020-11-17 International Business Machines Corporation Ontology-based automatic bootstrapping of state-based dialog systems
US10430466B2 (en) * 2018-01-25 2019-10-01 International Business Machines Corporation Streamlining support dialogues via transitive relationships between different dialogues
US20190251957A1 (en) * 2018-02-15 2019-08-15 DMAI, Inc. System and method for prediction based preemptive generation of dialogue content
US10699708B2 (en) * 2018-04-24 2020-06-30 Accenture Global Solutions Limited Robotic agent conversation escalation
EP3576084B1 (fr) 2018-05-29 2020-09-30 Christoph Neumann Conception du dialogue efficace
US11308940B2 (en) * 2019-08-05 2022-04-19 Microsoft Technology Licensing, Llc Counterfactual annotated dialogues for conversational computing

Also Published As

Publication number Publication date
US20210210092A1 (en) 2021-07-08
US11488600B2 (en) 2022-11-01
CN112204656A (zh) 2021-01-08
EP3576084B1 (fr) 2020-09-30
WO2019228667A1 (fr) 2019-12-05
JP2022501652A (ja) 2022-01-06
EP3576084A1 (fr) 2019-12-04
JP7448240B2 (ja) 2024-03-12

Similar Documents

Publication Publication Date Title
DE60222093T2 (de) Verfahren, modul, vorrichtung und server zur spracherkennung
EP1927980B1 (fr) Procédé de classification de la langue parlée dans des systèmes de dialogue vocal
DE10220524B4 (de) Verfahren und System zur Verarbeitung von Sprachdaten und zur Erkennung einer Sprache
EP1964110B1 (fr) Procédé de commande d'au moins une première et une deuxième application d'arrière-plan par l'intermédiaire d'un système de dialogue vocal universel
DE69822296T2 (de) Mustererkennungsregistrierung in einem verteilten system
DE60005326T2 (de) Erkennungseinheiten mit komplementären sprachmodellen
EP1336955B1 (fr) Procédé pour la synthèse de parole naturelle dans un système de dialogue par ordinateur
DE60201262T2 (de) Hierarchische sprachmodelle
EP1435088B1 (fr) Construction dynamique d'une commande conversationnelle a partir d'objets de dialogue
EP0987682B1 (fr) Procédé d'adaptation des modèles de language pour la reconnaissance de la parole
DE102011103528A1 (de) Modulare Spracherkennungsarchitektur
EP1361737A1 (fr) Méthode et système de traitement du signal de parole et de classification de dialogues
EP3152753B1 (fr) Système d'assistance pouvant être commandé au moyen d'entrées vocales et comprenant un moyen fonctionnel et plusieurs modules de reconnaissance de la parole
DE60105063T2 (de) Entwicklungswerkzeug für einen dialogflussinterpreter
DE69333762T2 (de) Spracherkennungssystem
EP3576084B1 (fr) Conception du dialogue efficace
EP3115886A1 (fr) Procede de fonctionnement d'un systeme de commande vocale et systeme de commande vocale
EP1659571A2 (fr) Système de dialogue vocal et méthode pour son exploitation
EP3735688B1 (fr) Procédé, dispositif et support d'informations lisible par ordinateur ayant des instructions pour traiter une entrée vocale, véhicule automobile et terminal d'utilisateur doté d'un traitement vocal
DE102019217751A1 (de) Verfahren zum Betreiben eines Sprachdialogsystems und Sprachdialogsystem
EP2141692A1 (fr) Assistance automatisée à commande vocale d'un utilisateur
EP1363271A1 (fr) Méthode et système pour le traitement et la mémorisation du signal de parole d'un dialogue
EP1959430A2 (fr) Procédé destiné à la génération automatique d'applications vocales VoiceXML à partir de modèles de dialogues vocaux
WO2018091662A1 (fr) Procédé de création et/ou de modification d'un ensemble d'enregistrements pour un moyen d'assistance technique guidé par dialogue pour l'assistance lors de la création et/ou de la modification de programmes de traitement de données ou d'entrées de bases de données
EP4040433A1 (fr) Génération dynamique d'une chaîne de modules fonctionnels d'un assistant virtuel

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20201208

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20211027

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20220507