WO2015141700A1 - Dialogue system construction support apparatus and method - Google Patents

Dialogue system construction support apparatus and method

Info

Publication number
WO2015141700A1
Authority
WO
WIPO (PCT)
Prior art keywords
dialogue
scenario
utterance
utterances
speech recognition
Prior art date
2014-03-18
Application number
PCT/JP2015/057970
Other languages
English (en)
Inventor
Yumiko Shimogori
Kenji Iwata
Masahiro Ito
Hisayoshi Nagae
Original Assignee
Kabushiki Kaisha Toshiba
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2015-03-11
Publication date
2015-09-24
Application filed by Kabushiki Kaisha Toshiba
Publication of WO2015141700A1

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G10L 15/18 - Speech classification or search using natural language modelling
    • G10L 15/1822 - Parsing for meaning understanding
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/226 - Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L 2015/228 - Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context

Description

  • Embodiments described herein relate generally to a dialogue system construction support apparatus and method.
  • There is known a dialogue system such as an interactive voice response apparatus that automatically responds to an utterance of a user.
  • Such a dialogue system responds in accordance with a scenario constructed in advance.
  • However, the dialogue system may fail in responding because a scenario meeting the request of the user has not been prepared.
  • JP-B 4901738 discloses an automated response system that performs learning using a conversation set between an agent and a user.
  • FIG. 1 is a block diagram schematically showing a dialogue system according to an embodiment;
  • FIG. 2 is a flowchart showing an example of the procedure of dialogue log recording according to the embodiment;
  • FIG. 3 is a view showing an example of a dialogue between a user and an operator;
  • FIG. 4 is a view showing examples of an utterance type;
  • FIG. 5 is a view showing examples of an intention tag;
  • FIG. 6 is a view showing examples of an action;
  • FIG. 7 is a view showing examples of a semantic class;
  • FIG. 8 is a view showing an example of action contents;
  • FIG. 9 is a view showing a dialogue log concerning the dialogue shown in FIG. 3;
  • FIG. 10 is a flowchart showing an example of the procedure of scenario construction according to the embodiment;
  • FIG. 11A and FIG. 11B are views showing examples of a scenario constructed from the dialogue log shown in FIG. 9;
  • FIG. 12 is a view showing an example of a dialogue between the user and the operator, in which the user makes an utterance other than an answer in response to a question of the operator;
  • FIG. 13 is a view showing evaluation data used by a scenario construction unit shown in FIG. 1 to evaluate a scenario;
  • FIG. 14 is a flowchart showing an example of the procedure of action candidate display according to the embodiment; and
  • FIG. 15 is a view showing an example of contents displayed by a dialogue state display unit shown in FIG. 1.
  • According to an embodiment, a dialogue system construction support apparatus includes a speech recognition unit, a spoken language understanding unit, a dialogue information storage unit, and a scenario construction unit.
  • The speech recognition unit is configured to perform speech recognition for utterances included in a dialogue to generate texts.
  • The spoken language understanding unit is configured to understand intentions of the utterances based on the texts and obtain a spoken language understanding result including types of the utterances, the intentions of the utterances, words included in the texts, and semantic classes of the words.
  • The dialogue information storage unit is configured to store the speech recognition result, the spoken language understanding result, and an action executed by an operator with respect to the dialogue in association with each other.
  • The scenario construction unit is configured to acquire, as an attribute, a word having a semantic class common to an utterance of question and an utterance of answer included in the dialogue and construct a scenario using the attribute and the action.
  • the embodiments are directed to a dialogue system that automatically responds to an utterance of a user.
  • This dialogue system is used in, for example, a contact center.
  • the dialogue system selects a scenario meeting the utterance of the user from scenarios (dialogue scenarios) registered in advance and responds in accordance with the scenario.
  • When the dialogue system cannot respond appropriately, an operator responds via a dialogue with the user.
  • the dialogue system can construct a new scenario based on the dialogue between the user and the operator and an action of the operator.
  • the dialogue system can respond well to a similar request received later.
  • the scenario construction cost can be reduced. It is also possible to decrease the necessary number of operators.
  • FIG. 1 schematically shows a dialogue system 100 according to an embodiment.
  • the dialogue system 100 includes a speech recognition unit 101, a spoken language understanding unit 102, a dialogue management unit 103, a response generation unit 104, a dialogue extraction unit 105, a scenario construction unit 106, a scenario updating unit 107, a dictionary storage unit 108, a spoken language understanding model storage unit 109, a scenario storage unit 110, a dialogue log storage unit (also called a dialogue information storage unit) 111, a dialogue state display unit 112, a scenario searching unit 113, and a scenario object database (DB) 114.
  • Automatic response processing of the dialogue system 100 will briefly be explained first.
  • The user communicates with the dialogue system 100 via a network using a terminal such as a mobile phone or a smartphone.
  • the dialogue system 100 provides a service to the terminal via the network by the automatic response processing.
  • The dialogue system 100 transmits, to the terminal, a response to the utterance received from the terminal.
  • The dialogue system 100 executes the automatic response processing as follows.
  • the speech recognition unit 101 performs speech recognition for the utterance of the user, and generates a natural language text (to be simply referred to as a text hereinafter) corresponding to the utterance.
  • the spoken language understanding unit 102 analyzes the text by referring to the dictionary storage unit 108 and the spoken language understanding model storage unit 109 so as to understand the intention of the utterance, and outputs the spoken language understanding result.
  • The dialogue management unit 103 selects a scenario corresponding to the spoken language understanding result from the scenario object DB 114, and executes an action (for example, sending information such as a map) in accordance with the scenario.
  • the response generation unit 104 generates a response sentence corresponding to the action executed by the dialogue management unit 103.
  • The response sentence is converted into speech by a speech synthesis technology and output.
  • the dialogue with the user may fail because, for example, a scenario meeting a request of the user does not exist in the scenario object DB 114.
  • the dialogue management unit 103 transfers the connection with the user to an operator.
  • The dialogue management unit 103 can also transfer the connection with the user to the operator when a predetermined condition has occurred during a response. A dialogue between the user and the operator thus starts.
  • the dialogue system 100 analyzes the dialogue between the user and the operator.
  • The scenario construction processing is performed using the speech recognition unit 101, the spoken language understanding unit 102, the dialogue extraction unit 105, the scenario construction unit 106, and the scenario updating unit 107, which function as a dialogue system construction support unit.
  • the dialogue system construction support unit may be included in the dialogue system 100, as shown in FIG. 1, or provided outside the dialogue system 100.
  • the speech recognition unit 101, the spoken language understanding unit 102, the dictionary storage unit 108, and the spoken language understanding model storage unit 109 can be shared by automatic response processing and the scenario construction processing.
  • the speech recognition unit 101 performs speech recognition for a plurality of utterances included in the dialogue between the user and the operator, and generates a plurality of texts corresponding to the plurality of utterances, respectively. That is, the speech recognition unit 101 converts the plurality of utterances into the plurality of texts by a speech recognition technology.
  • Based on each text generated by the speech recognition unit 101, the spoken language understanding unit 102 understands the intention of the utterance corresponding to the text.
  • More specifically, the spoken language understanding unit 102 performs morphological analysis of each text, thereby dividing the text into words on a morpheme basis. Next, referring to a dictionary stored in the dictionary storage unit 108, the spoken language understanding unit 102 assigns a semantic class to each word.
  • a plurality of words are registered in the dictionary in association with semantic classes.
  • the spoken language understanding unit 102 understands the intention of an utterance by referring to a spoken language understanding model stored in the spoken language understanding model storage unit 109 using features such as morphemes, the semantic classes of words, and notations of words, and outputs a spoken language understanding result.
  • Spoken language understanding models are generated by learning, using semantic classes, words, and the like from a number of utterance samples as features.
  • the spoken language understanding method is not limited to the example described here.
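  • As a minimal illustration of this flow, the following Python sketch tokenizes a text, assigns semantic classes from a dictionary, and classifies the utterance with trivial rules. The dictionary entries, the class names other than "Location_STATION_AIR" (which appears later in this description), and the rule-based stand-in for the learned model are assumptions, not the actual data or model of the embodiment.

        DICTIONARY = {
            "airport": "Location_STATION_AIR",  # class name taken from the text
            "rental": "Vehicle_RENTAL",         # assumed class name
            "car": "Vehicle_RENTAL",            # assumed class name
        }

        def understand(text):
            # Real systems perform morphological analysis; whitespace
            # tokenization stands in for it in this sketch.
            words = text.lower().rstrip("?.!").split()
            # Assign a semantic class to each word found in the dictionary.
            tagged = [(w, DICTIONARY.get(w)) for w in words]
            # A learned spoken language understanding model would classify the
            # utterance type and intention from features such as morphemes,
            # notations, and semantic classes; keyword rules stand in for it.
            utype = "question" if text.strip().endswith("?") else "response"
            intent = ("rental car location display"
                      if "rental" in words and "where" in words else "unknown")
            return {"utterance_type": utype, "intention_tag": intent,
                    "words": tagged}

        print(understand("Where can I pick up the rental car I have reserved earlier?"))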
  • The dialogue extraction unit 105 receives the spoken language understanding result from the spoken language understanding unit 102, and detects an operation performed for the dialogue system 100 by the operator during a response as the action of the operator. The action can be detected based on information received from a computer terminal operated by the operator. More specifically, the dialogue extraction unit 105 can receive, from the computer terminal, information representing the contents of an action executed by the operator. The dialogue extraction unit 105 records the analysis result of the dialogue between the user and the operator and the action of the operator in the dialogue log storage unit 111 in association with each other.
  • The analysis result of the dialogue includes the speech recognition result and the spoken language understanding result concerning each utterance included in the dialogue.
  • the scenario construction unit 106 constructs a scenario by referring to the dialogue log storage unit 111, and stores the scenario in the scenario storage unit 110.
  • the scenario updating unit 107 updates the scenario object DB 114 by referring to the scenario storage unit 110. More specifically, the scenario updating unit 107 converts a scenario stored in the scenario storage unit 110 into an object executable by the dialogue management unit 103, and adds it to the scenario object DB 114 at an arbitrary timing.
  • a scenario stored in the scenario storage unit 110 is a text-based scenario
  • a scenario stored in the scenario object DB 114 is an object-based scenario.
  • a scenario stored in the scenario object DB 114 may be a text-based scenario.
  • the scenario searching unit 113 extracts a scenario feature word from the dialogue between the user and the operator, and selects, as a similar scenario, a scenario associated with the scenario feature word from the scenario storage unit 110.
  • the scenario feature word will be described later.
  • the dialogue state display unit 112 displays the similar scenario.
  • the dialogue state display unit 112 also displays the analysis result of the dialogue between the user and the operator.
  • FIG. 2 schematically shows the procedure of dialogue log recording of the dialogue system 100.
  • a detailed example will be explained using a dialogue shown in FIG. 3.
  • In step S201, the dialogue extraction unit 105 records a dialogue start label representing the start of the dialogue in the dialogue log storage unit 111.
  • In step S202, the user or the operator utters.
  • the user first utters "Where can I pick up the rental car I have reserved earlier?"
  • In step S203, the speech recognition unit 101 performs speech recognition for the utterance input in step S202.
  • a text "Where can I pick up the rental car I have reserved earlier?" can be obtained as a speech recognition result.
  • In step S204, the spoken language understanding unit 102 understands the intention of the utterance from the speech recognition result, and outputs a spoken language understanding result.
  • The spoken language understanding result includes an utterance type, an intention tag, and a semantic class.
  • The utterance type represents the role of the utterance in the dialogue. Examples of the utterance type are "request", "greeting", "question", "response", "proposal", "confirmation", and "answer", as shown in FIG. 4.
  • The utterance type is output in a form understandable by the machine, for example, as an utterance type ID.
  • The intention tag is information representing an intention such as "flight timetable display", "rental car search", "rental car location display", or "hotel rate display", as shown in FIG. 5.
  • The intention tag is output in a form understandable by the machine, for example, as an intention tag ID.
  • In step S205, the dialogue extraction unit 105 extracts any one piece of information out of the intention tag, attribute, attribute value, and action contents from the utterance input in step S202, and records the speech recognition result, the spoken language understanding result, and the extracted information in the dialogue log storage unit 111 in association with each other.
  • the process of step S205 will be described later.
  • In step S206, it is determined whether the dialogue has ended. For example, when an utterance representing the end of the dialogue is detected or when the operator executes an action, it is determined that the dialogue has ended. If the dialogue continues, the process returns to step S202, and the next utterance occurs.
  • The operator then utters "Location to pick up the rental car?"
  • The processes of steps S203, S204, and S205 are executed for this utterance.
  • The processes are similarly executed for the operator's utterance "At which airport are you?", the user's utterance "At OO airport", and the subsequent utterances.
  • The dialogue extraction unit 105 detects the action of the operator based on the spoken language understanding result of the utterance "I will send the map of the location to pick up the rental car".
  • the dialogue extraction unit 105 acquires the contents of the action executed by the operator during the response, and records them in the dialogue log storage unit 111.
  • Each action is associated with an action ID.
  • In step S207, the dialogue extraction unit 105 determines that the dialogue between the user and the operator has ended, and records a dialogue end label representing the end of the dialogue in the dialogue log storage unit 111.
  • a log concerning one dialogue is recorded between a dialogue start label and a dialogue end label.
  • The dialogue log concerning one dialogue includes the analysis result of the dialogue, scenario feature words, intention tags, attributes, attribute values, and action contents.
  • The process of step S205 will now be described in more detail.
  • In step S205-1, if the type of the utterance input in step S202 is confirmation, the dialogue extraction unit 105 extracts a scenario feature word from this utterance and a counterpart utterance. More specifically, the dialogue extraction unit 105 extracts, as the scenario feature word, a word common to the utterance of confirmation of one party (for example, the operator) and the immediately preceding utterance of the other party (for example, the user).
  • In the example of FIG. 3, the utterance of confirmation is the operator's utterance "Location to pick up the rental car?"
  • The utterance as the counterpart to this is the immediately preceding user's utterance "Where can I pick up the rental car I have reserved earlier?"
  • The common words are "rental car" and "pick up". Hence, "rental car" and "pick up" are extracted as the scenario feature words.
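  • The following sketch reproduces this feature word extraction under simplifying assumptions: utterances are split on whitespace (so "rental car" and "pick up" come out as individual words rather than phrases), and a small invented stop-word list filters function words.

        STOP_WORDS = {"the", "i", "can", "to", "where", "have"}

        def extract_feature_words(confirmation, preceding):
            # Words common to the utterance of confirmation and the
            # immediately preceding counterpart utterance.
            common = (set(confirmation.lower().rstrip("?").split())
                      & set(preceding.lower().rstrip("?").split()))
            return sorted(common - STOP_WORDS)

        print(extract_feature_words(
            "Location to pick up the rental car?",
            "Where can I pick up the rental car I have reserved earlier?"))
        # -> ['car', 'pick', 'rental', 'up'], roughly "rental car" and "pick up"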
  • In step S205-2, the dialogue extraction unit 105 determines whether the utterance type is question. If the utterance type is question, the process advances to step S205-3. Otherwise, the process advances to step S205-4. In step S205-4, the dialogue extraction unit 105 determines whether the utterance type is answer. If the utterance type is answer, the process advances to step S205-5.
  • In step S205-6, the dialogue extraction unit 105 determines whether the utterance is associated with the action of the operator.
  • If the utterance is associated with the action, the process advances to step S205-8. Otherwise, the process advances to step S205-7.
  • In steps S205-3 and S205-5, the dialogue extraction unit 105 acquires the attribute and the attribute value.
  • Semantic classes can be defined by hierarchically classifying meanings, as shown in FIG. 7. Note that the semantic classes need not always be expressed in the hierarchical structure.
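  • One possible encoding of such a hierarchy, sketched below, represents each semantic class as a dotted path so that two classes count as common when they are the same or one includes the other, which reduces to a prefix test. The paths and the rail class are invented for illustration.

        HIERARCHY = {
            "Location_STATION_AIR": "Location.Station.Air",
            "Location_STATION_RAIL": "Location.Station.Rail",  # assumed class
            "Location": "Location",
        }

        def is_common(class_a, class_b):
            # True if the classes are the same or one includes the other.
            a, b = HIERARCHY[class_a], HIERARCHY[class_b]
            return a.startswith(b) or b.startswith(a)

        print(is_common("Location_STATION_AIR", "Location"))                # True
        print(is_common("Location_STATION_AIR", "Location_STATION_RAIL"))   # False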
  • the attribute value is an argument used to attain the intention represented by the intention tag.
  • the dialogue extraction unit 105 acquires, out of words having a semantic class common to the utterance of question and the utterance of answer, a word in the utterance of question as an attribute and a word in the utterance of answer as an attribute value .
  • For example, the user's answer to the operator's question "At which airport are you?" is "At OO airport".
  • A semantic class common to these utterances is "Location_STATION_AIR".
  • In the utterance of question, the word having the semantic class "Location_STATION_AIR" is "airport".
  • Hence, "airport" is extracted as an attribute.
  • Similarly, the word having the semantic class "Location_STATION_AIR" in the user's utterance "At OO airport" is "OO airport", so "OO airport" is extracted as the attribute value.
  • Note that the dialogue extraction unit 105 does not necessarily extract the same word that appears in both an utterance of question and an utterance of answer; it suffices that the two utterances contain words sharing a common semantic class.
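  • A sketch of this acquisition: given (word, semantic class) pairs produced by spoken language understanding, the word in the question whose class also occurs in the answer becomes the attribute, and the matching word in the answer becomes the attribute value. Plain equality of class names stands in here for the same-or-inclusion test on hierarchical classes sketched earlier.

        def acquire_attribute(question_tagged, answer_tagged):
            classes_in_question = {c: w for w, c in question_tagged if c}
            for word, cls in answer_tagged:
                if cls and cls in classes_in_question:
                    return classes_in_question[cls], word  # (attribute, attribute value)
            return None

        question = [("at", None), ("which", None),
                    ("airport", "Location_STATION_AIR"), ("are", None), ("you", None)]
        answer = [("at", None), ("OO airport", "Location_STATION_AIR")]
        print(acquire_attribute(question, answer))  # -> ('airport', 'OO airport')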
  • In step S205-8, the dialogue extraction unit 105 acquires the action contents.
  • the action contents include an operation that the operator actually executed for the system.
  • FIG. 8 shows an example of action contents obtained when the operator operates an application in association with the dialogue example shown in FIG. 3.
  • the action contents shown in FIG. 8 represent sending of a map illustrating the location to pick up the rental car.
  • In step S205-7, the dialogue extraction unit 105 acquires an intention tag from an utterance that is neither an utterance of question nor an utterance of answer and is not associated with an action.
  • Such an utterance is recorded in the dialogue log storage unit 111 as an utterance having an intention that does not contribute to attaining the purpose of the dialogue.
  • FIG. 9 shows a dialogue log associated with the dialogue example shown in FIG. 3.
  • "START OPERATOR” is the dialogue start label
  • "END OPERATOR” is the dialogue end label.
  • Pieces of information about utterances and actions are recorded between the dialogue start label and the dialogue end label .
  • the log of an utterance is described using colon separation as utterance subj ect : utterance type : utterance contents : intention tag.
  • the utterance contents include a speech recognition result, words, and their semantic classes. Each semantic class is described in parentheses immediately after a word.
  • the log of an action is described using colon separation as action subj ect : action contents.
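  • An illustrative parser for records in this colon-separated form is sketched below. It assumes, beyond what the text states, that utterance contents never contain a colon themselves; the Vehicle_RENTAL class in the sample record is the assumed name from the earlier sketch, not a value taken from FIG. 9.

        def parse_log_line(line):
            if line in ("START OPERATOR", "END OPERATOR"):
                return {"label": line}
            parts = [p.strip() for p in line.split(":")]
            if len(parts) == 4:  # utterance subject:type:contents:intention tag
                keys = ("subject", "utterance_type", "contents", "intention_tag")
                return dict(zip(keys, parts))
            if len(parts) == 2:  # action subject:action contents
                return {"subject": parts[0], "action_contents": parts[1]}
            raise ValueError("unrecognized log record: %r" % line)

        record = parse_log_line(
            "USER:request:Where can I pick up the rental car(Vehicle_RENTAL) "
            "I have reserved earlier?:rental car location display")
        print(record["intention_tag"])  # -> 'rental car location display'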
  • FIG. 10 schematically shows the processing procedure of constructing a scenario from a dialogue log.
  • In step S301, the scenario construction unit 106 loads a dialogue log from the dialogue log storage unit 111, and extracts a dialogue start label and a dialogue end label concerning a scenario construction target dialogue from the loaded dialogue log.
  • In step S302, the scenario construction unit 106 sequentially loads the utterances and actions recorded between the dialogue start label and the dialogue end label.
  • FIGS. 11A and 11B show examples of a scenario constructed based on the dialogue log shown in FIG. 9.
  • the scenario shown in FIG. 11A includes three states.
  • the scenario shown in FIG. 11B includes one state.
  • An input includes an intention tag and an attribute.
  • An operation includes an operation tag.
  • In step S304, the scenario construction unit 106 acquires a semantic class common to an utterance whose type is question and an utterance whose type is answer, and the word of the semantic class.
  • Here, "common" is used as a term that means "the same" or "being in an inclusion relation".
  • the scenario construction unit 106 uses the acquired word or semantic class as the attribute of the input.
  • In step S304-1, the scenario construction unit 106 acquires words from an utterance whose type is question as attribute candidates and stores them in a memory. If the type of the next utterance is answer, in step S304-2, the scenario construction unit 106 acquires words from the utterance as attribute candidates and holds them in the memory.
  • In step S304-3, the semantic classes of the words acquired in steps S304-1 and S304-2 are compared, and an attribute is obtained from words having a common semantic class. For example, "airport" is acquired as an attribute from the pair of the operator's utterance "At which airport are you?" and the user's utterance "At OO airport". Note that the attribute acquisition method may be the same as that described concerning the process of step S205-3. Two attributes, "airport" and "airline", are obtained from the dialogue log of FIG. 9.
  • When an attribute is obtained in step S304-3, the process advances to step S304-5.
  • In step S304-5, the scenario construction unit 106 generates an input condition using the attribute obtained in step S304-3. More specifically, the scenario construction unit 106 registers the attribute in a scenario as an input attribute.
  • In step S304-4, the scenario construction unit 106 determines whether the user has returned a question in response to the question of the operator. For example, in the dialogue example shown in FIG. 12, the user responds by "Um? I don't know" to the operator's question "At which terminal are you?" If the type of an utterance to a question is not answer, as described above, the process advances to step S304-6.
  • In step S304-6, the scenario construction unit 106 waits for an utterance whose type is answer. Upon detecting an utterance whose type is answer, the scenario construction unit 106 acquires an attribute from the pair of the utterance of question and the detected utterance of answer.
  • In step S304-7, the spoken language understanding unit 102 acquires an intention tag from the utterance.
  • In step S305, the scenario construction unit 106 ends the loading of the dialogue log.
  • In step S306, the scenario construction unit 106 replaces the word included in the action contents with a semantic class serving as a variable.
  • In step S307, the scenario construction unit 106 stores the constructed scenario in the scenario storage unit 110.
  • the scenario is stored in association with a scenario feature word so as to enable a search by the scenario feature word.
  • A scenario can be constructed so as to faithfully reproduce the dialogue between the user and the operator, as in the example of FIG. 11A, or constructed so as to receive the necessary attributes at once, as in the example of FIG. 11B.
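  • The two styles can be pictured as data. The following sketch shows one plausible representation; the field names and the send_map operation tag are assumptions for illustration, while the attributes "airport" and "airline" are taken from the dialogue log of FIG. 9.

        # Style of FIG. 11A: states replayed in the order the operator asked.
        faithful = [
            {"input": {"intention_tag": "rental car location display"}},
            {"input": {"attribute": "airport"}},
            {"input": {"attribute": "airline"},
             "operation": "send_map(airport, airline)"},
        ]

        # Style of FIG. 11B: a single state receiving every attribute at once.
        compact = [
            {"input": {"intention_tag": "rental car location display",
                       "attributes": ["airport", "airline"]},
             "operation": "send_map(airport, airline)"},
        ]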
  • The scenario updating unit 107 converts the scenario stored in the scenario storage unit 110 into an object executable by the dialogue management unit 103 and adds it to the scenario object DB 114. As for the timing, the updating may be done automatically or based on an operation by an administrator. Similar scenarios may simultaneously be constructed for a plurality of operators. As shown in FIG. 13, the scenario storage unit 110 stores each scenario in association with a scenario feature word, the number of states, the number of response steps, and the number of response failures. The number of response failures is the number of times a response using the scenario failed; these values serve as evaluation data for the scenario.
  • the scenario updating unit 107 can display the evaluation data together with the scenarios so that the administrator of the dialogue system 100 can select the scenarios to be added to the scenario object DB 114.
  • FIG. 14 shows a procedure of presenting a candidate of an action to be executed to the operator during a response.
  • In step S401, the scenario searching unit 113 extracts one or more scenario feature words from the dialogue between the user and the operator during the response of the operator. More specifically, the scenario searching unit 113 extracts, as the scenario feature words, words common to an utterance whose type is confirmation and an utterance as the counterpart to it.
  • In step S402, the scenario searching unit 113 searches the scenario storage unit 110 using the scenario feature words as search keys.
  • In step S403, the scenario searching unit 113 selects, as a similar scenario, a scenario associated with the scenario feature words.
  • In step S404, the scenario searching unit 113 acquires action contents included in the similar scenario.
  • In step S405, the scenario searching unit 113 displays the acquired action contents as action candidates via the dialogue state display unit 112. The operator decides an action to be executed with reference to the displayed action candidates.
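  • A sketch of this search: stored scenarios are keyed by their scenario feature words, and any scenario sharing a feature word with the live dialogue contributes its action contents as a candidate. The in-memory list stands in for the scenario storage unit, and both entries are invented for illustration.

        SCENARIOS = [
            {"feature_words": {"rental car", "pick up"},
             "action_contents": "send map of rental car pickup location"},
            {"feature_words": {"hotel", "rate"},
             "action_contents": "display hotel rates"},
        ]

        def action_candidates(feature_words):
            hits = set(feature_words)
            return [s["action_contents"] for s in SCENARIOS
                    if s["feature_words"] & hits]

        print(action_candidates({"rental car", "pick up"}))
        # -> ['send map of rental car pickup location']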
  • FIG. 15 shows an example of contents displayed by the dialogue state display unit 112.
  • the dialogue state display unit 112 includes a conversation monitor, a spoken language understanding monitor, and an operation monitor.
  • the conversation monitor displays the speech recognition result for the dialogue between the user and the operator by the speech recognition unit 101.
  • The spoken language understanding monitor displays the spoken language understanding result obtained by the spoken language understanding unit 102.
  • the operation monitor displays an action candidate acquired by the scenario searching unit 113. In the example of FIG. 15, three action candidates are displayed.
  • the operator can visually confirm the request of the user. If there are inadequacies in the speech recognition result and the spoken language understanding result, the speech recognition result and the spoken language understanding result need to be corrected to construct a useful scenario.
  • For example, spoken language understanding may fail because of a recognition error in speech recognition.
  • As described above, a necessary scenario can easily be added to the dialogue system by constructing the scenario based on the dialogue log including the analysis result of the dialogue between the user and the operator and the action of the operator.
  • the dialogue system 100 can also be implemented by, for example, using a general-purpose computer apparatus as basic hardware. That is, the speech recognition unit 101, the spoken language understanding unit 102, the dialogue management unit 103, the response generation unit 104, the dialogue extraction unit 105, the scenario construction unit 106, the scenario updating unit 107, the dialogue state display unit 112, and the scenario searching unit 113 can be implemented by causing a processor included in the computer apparatus to execute a program.
  • the dialogue system can be implemented by installing the program in the computer apparatus in advance or by distributing the program stored in a storage medium such as a CD-ROM or via a network and installing the program in the computer apparatus as needed.
  • The dialogue log storage unit, the scenario storage unit, the dictionary storage unit, and the spoken language understanding model storage unit can be implemented using an internal or external memory of the computer apparatus, a hard disk, or a storage medium such as a CD-R, CD-RW, DVD-RAM, or DVD-R as needed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to one embodiment, a dialogue system construction support apparatus includes the following units. The speech recognition unit performs speech recognition for utterances included in a dialogue to generate texts. The spoken language understanding unit understands intentions of the utterances based on the texts, and obtains a spoken language understanding result including types of the utterances, the intentions of the utterances, words included in the texts, and semantic classes of the words. The scenario construction unit acquires, as an attribute, a word having a semantic class common to an utterance of question and an utterance of answer, and constructs a scenario using the attribute and an action executed by the operator with respect to the dialogue.
PCT/JP2015/057970 2014-03-18 2015-03-11 Dialogue system construction support apparatus and method WO2015141700A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014054491A JP2015176099A (ja) 2014-03-18 2014-03-18 Dialogue system construction support apparatus, method, and program
JP2014-054491 2014-03-18

Publications (1)

Publication Number Publication Date
WO2015141700A1 (fr) 2015-09-24

Family

ID=54144664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/057970 WO2015141700A1 (fr) 2014-03-18 2015-03-11 Dialogue system construction support apparatus and method

Country Status (2)

Country Link
JP (1) JP2015176099A (fr)
WO (1) WO2015141700A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10574821B2 (en) 2017-09-04 2020-02-25 Toyota Jidosha Kabushiki Kaisha Information providing method, information providing system, and information providing device
CN111048084A (zh) * 2019-12-18 2020-04-21 上海智勘科技有限公司 Method and system for pushing information during intelligent voice interaction
EP3663940A4 (fr) * 2017-08-04 2020-07-29 Sony Corporation Information processing device and information processing method
CN112837684A (zh) * 2021-01-08 2021-05-25 北大方正集团有限公司 Service processing method and system, service processing apparatus, and readable storage medium
RU2755781C1 (ru) * 2020-06-04 2021-09-21 Публичное Акционерное Общество "Сбербанк России" (Пао Сбербанк) Intelligent operator workstation and method of interacting with it for interactive support of a customer service session
WO2022105115A1 (fr) * 2020-11-17 2022-05-27 平安科技(深圳)有限公司 Question and answer pair matching method and apparatus, electronic device, and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6946406B2 (ja) * 2016-03-16 2021-10-06 株式会社東芝 Concept dictionary creation device, method, and program
JP2017167851A (ja) * 2016-03-16 2017-09-21 株式会社東芝 Concept dictionary creation device, method, and program
JP6899558B2 (ja) * 2016-08-26 2021-07-07 株式会社Nextremer Dialogue control device, dialogue engine, management terminal, dialogue device, dialogue control method, and program
JP6615803B2 (ja) * 2017-02-08 2019-12-04 日本電信電話株式会社 Requirement determination device, requirement determination method, and program
JP2018159729A (ja) * 2017-03-22 2018-10-11 株式会社東芝 Dialogue system construction support apparatus, method, and program
JP6873805B2 (ja) * 2017-04-24 2021-05-19 株式会社日立製作所 Dialogue support system, dialogue support method, and dialogue support program
US20210034678A1 (en) * 2018-04-23 2021-02-04 Ntt Docomo, Inc. Dialogue server
CA3045132C (fr) * 2019-06-03 2023-07-25 Eidos Interactive Corp. Communication with augmented reality virtual agents
JP6755633B2 (ja) * 2019-07-19 2020-09-16 日本電信電話株式会社 Requirement determination device, requirement determination method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105712A1 (en) * 2003-02-11 2005-05-19 Williams David R. Machine learning
JP2013225036A (ja) * 2012-04-23 2013-10-31 Scsk Corp Automatic dialogue scenario creation support device and automatic dialogue scenario creation support program

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3663940A4 (fr) * 2017-08-04 2020-07-29 Sony Corporation Information processing device and information processing method
US11514903B2 (en) 2017-08-04 2022-11-29 Sony Corporation Information processing device and information processing method
US10574821B2 (en) 2017-09-04 2020-02-25 Toyota Jidosha Kabushiki Kaisha Information providing method, information providing system, and information providing device
US10992809B2 (en) 2017-09-04 2021-04-27 Toyota Jidosha Kabushiki Kaisha Information providing method, information providing system, and information providing device
CN111048084A (zh) * 2019-12-18 2020-04-21 上海智勘科技有限公司 Method and system for pushing information during intelligent voice interaction
CN111048084B (zh) * 2019-12-18 2022-05-31 上海智勘科技有限公司 Method and system for pushing information during intelligent voice interaction
RU2755781C1 (ru) * 2020-06-04 2021-09-21 Публичное Акционерное Общество "Сбербанк России" (Пао Сбербанк) Intelligent operator workstation and method of interacting with it for interactive support of a customer service session
WO2022105115A1 (fr) * 2020-11-17 2022-05-27 平安科技(深圳)有限公司 Question and answer pair matching method and apparatus, electronic device, and storage medium
CN112837684A (zh) * 2021-01-08 2021-05-25 北大方正集团有限公司 Service processing method and system, service processing apparatus, and readable storage medium

Also Published As

Publication number Publication date
JP2015176099A (ja) 2015-10-05

Similar Documents

Publication Publication Date Title
WO2015141700A1 (fr) Dialogue system construction support apparatus and method
US11568855B2 (en) System and method for defining dialog intents and building zero-shot intent recognition models
CN107210035B (zh) Generation of language understanding systems and methods
KR102469513B1 (ko) Method for understanding incomplete natural language queries
US10672391B2 (en) Improving automatic speech recognition of multilingual named entities
US9626152B2 (en) Methods and systems for recommending responsive sticker
KR101634086B1 (ko) Method and system for recommending stickers through sentiment analysis
US9805718B2 (en) Clarifying natural language input using targeted questions
CN108369580B (zh) Language- and domain-independent model-based method for on-screen item selection
US11030400B2 (en) System and method for identifying and replacing slots with variable slots
CN109325091B (zh) Method, apparatus, device, and medium for updating point-of-interest attribute information
JP6791825B2 (ja) Information processing device, dialogue processing method, and dialogue system
US11915693B2 (en) System and method for rule based modifications to variable slots based on context
KR20160089152A (ko) Method and system for recommending stickers through dialogue act analysis
US20170199867A1 (en) Dialogue control system and dialogue control method
US20190164540A1 (en) Voice recognition system and voice recognition method for analyzing command having multiple intents
EP2887229A2 (fr) Communication support apparatus and method, and computer program product
CN116737908A (zh) Knowledge question answering method, apparatus, device, and storage medium
KR101763679B1 (ko) Method and system for recommending stickers through dialogue act analysis
US20220414463A1 (en) Automated troubleshooter
JP2011232619A (ja) Speech recognition device and speech recognition method
JP2018128869A (ja) Search result display device, search result display method, and program
JP2018045639A (ja) Dialogue log analysis device, dialogue log analysis method, and program
JP2018063271A (ja) Voice dialogue device, voice dialogue system, and method for controlling a voice dialogue device
US11416555B2 (en) Data structuring device, data structuring method, and program storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15764494

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15764494

Country of ref document: EP

Kind code of ref document: A1