WO2020090148A1 - Dialogue System - Google Patents

Dialogue System

Info

Publication number
WO2020090148A1
Authority
WO
WIPO (PCT)
Prior art keywords
response
user
voice recognition
inquiry
content
Prior art date
Application number
PCT/JP2019/024372
Other languages
English (en)
Japanese (ja)
Inventor
友理子 尾崎
昂宗 橋本
Original Assignee
株式会社NTTドコモ
Priority date
Filing date
Publication date
Application filed by 株式会社NTTドコモ
Priority to JP2020554756A (JP7093844B2)
Publication of WO2020090148A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/50 Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/51 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/52 Arrangements for routing dead number calls to operators

Definitions

  • One aspect of the present invention relates to a dialogue system.
  • The use of chatbots that interact with users by voice or text is being promoted in systems such as call centers.
  • A chatbot can be used to provide an appropriate response (or a transfer to an operator) through interaction with the user.
  • A conventional chatbot, however, gives the same response to every user who makes an inquiry (phone call). This can result in explanations or questions that are redundant for a particular user, or can force the user to perform unnecessary operations, which may reduce the satisfaction of the user who made the inquiry.
  • One aspect of the present invention has been made in view of the above circumstances, and its object is to improve user satisfaction by providing a response suited to each user.
  • A dialogue system according to one aspect of the present invention is a dialogue system that provides a response to an inquiry from a user, and includes: a storage unit that stores, for each user, the content of past voice recognition related to inquiries; an acquisition unit that acquires, from the user, inquiry information including user identification information identifying the user who makes the inquiry; a determination unit that refers to the storage unit, identifies the content of past voice recognition of the user specified by the user identification information included in the inquiry information, and determines response content based on the identified content of past voice recognition; and a response providing unit that provides a response corresponding to the inquiry according to the response content determined by the determination unit.
  • In this dialogue system, inquiry information from the user is acquired, and the response content for the inquiry is determined based on the content of that user's past voice recognition.
  • The content of past voice recognition includes, for example, the success or failure of voice recognition, characteristics of the voice, and the like. Determining the response content from such content makes it possible to provide a response suited to each user and thereby improve user satisfaction.
  • In the dialogue system, the storage unit may store, as the content of past voice recognition, the success or failure of past voice recognition for each user, and the determination unit may refer to the storage unit and, when the failure rate or the number of failures of past voice recognition for the user identified by the user identification information included in the inquiry information is larger than a predetermined value, determine the response content so that the user is not asked to perform voice recognition.
  • With this configuration, a response can be provided to a user whose voice recognition tends to fail by a method other than voice recognition (for example, button input following voice guidance, or transfer to an operator). As a result, it is possible to improve the satisfaction of users who cannot perform (or are not good at) voice recognition.
  • In the dialogue system, the storage unit may store, as the content of past voice recognition, the time required for past voice recognition for each user, and the determination unit may refer to the storage unit and, when the time required for past voice recognition by the user identified by the user identification information included in the inquiry information is longer than a predetermined time, determine the response content so that the user is not asked to perform voice recognition. A user who needs a long time for voice recognition is presumed to be a user who is not good at (or does not want to perform) voice recognition. Providing a response to such a user by a method other than voice recognition can therefore improve user satisfaction.
  • In the dialogue system, the storage unit may store, as the content of past voice recognition, the characteristics of the voice in past voice recognition for each user, and the determination unit may refer to the storage unit and determine the response content for the user identified by the user identification information included in the inquiry information according to the characteristics of the voice in past voice recognition. A language, dialect, generation, speaking style, and the like can be estimated from the voice characteristics, so user satisfaction can be improved by, for example, transferring the call to an operator suited to those characteristics.
  • In the dialogue system, the storage unit may store, as the content of past voice recognition, the content of past inquiries derived by past voice recognition, and the determination unit may refer to the storage unit and determine the response content for the user identified by the user identification information included in the inquiry information according to the content of that user's past inquiries. By using the content of past inquiries to determine the response content, responses such as asking the user again for information that has already been obtained can be avoided, which shortens the response time and improves user satisfaction.
  • FIG. 1 is a block diagram showing a functional configuration of a dialogue device included in the dialogue system according to the present embodiment. FIG. 2 is a diagram showing an example of the inquiry table stored in an information DB. FIG. 3 is a flowchart showing processing performed by the dialogue device.
  • FIG. 1 is a block diagram showing a functional configuration of a dialogue device 10 included in the dialogue system 1 according to the present embodiment.
  • The dialogue system 1 shown in FIG. 1 is a system that provides a response to an inquiry from the user terminal 50 through a dialogue between the user terminal 50 (user) and the dialogue device 10.
  • The dialogue system 1 is introduced into, for example, a call center or the like.
  • When the dialogue device 10 receives an incoming call from the user terminal 50, a dialogue between the user terminal 50 and the dialogue device 10 is started.
  • The dialogue system 1 includes a dialogue device 10 and an operator terminal 80.
  • The operator terminal 80 is a terminal operated by an operator at, for example, a call center, and provides the user terminal 50 with a response (answer) to an inquiry from the user terminal 50 received via the dialogue device 10, according to the operator's operation.
  • The operator terminal 80 provides the response (the operator's voice) to the user terminal 50 by, for example, a voice call.
  • The operator terminal 80 may also provide the response by transmitting a text message or the like to the user terminal 50.
  • The user terminal 50 is a terminal capable of voice calls and wireless communication, such as a smartphone.
  • The dialogue device 10 is a device that provides a response to an inquiry from the user terminal 50, using a so-called chatbot.
  • The term "chatbot" combines "chat" and "bot"; it denotes an automatic dialogue program that provides a response to a query from the user while interacting with the user, for example by utilizing artificial intelligence.
  • Triggered by an incoming call from the user terminal 50, the dialogue device 10 receives an inquiry from the user terminal 50, asks the user terminal 50 questions related to the inquiry (to clarify it), and provides a response to the user terminal 50 either in cooperation with the operator terminal 80 or on its own (details will be described later).
  • The dialogue device 10 includes an input unit 11 (acquisition unit), a response content determination unit 12 (determination unit), an information DB 13 (storage unit), an output unit 14 (response providing unit), a transfer unit 15 (response providing unit), and a response recording unit 16.
  • The input unit 11 acquires inquiry information from the user terminal 50 by receiving an incoming call from the user terminal 50.
  • Specifically, the input unit 11 receives the incoming call from the user terminal 50 and acquires the telephone number of the user terminal 50.
  • The input unit 11 also acquires the content of the inquiry from the user terminal 50.
  • The input unit 11 obtains the content of the inquiry either from the result of voice recognition performed when the incoming call is received from the user terminal 50, or from the input made on the user terminal 50 (number input) following voice guidance.
  • The voice recognition is performed using a conventionally known technique.
  • The voice recognition may be performed in the dialogue device 10, or may be performed by an external device (not shown) whose result the dialogue device 10 then obtains.
  • Conventionally known IVR (Interactive Voice Response) technology can be used for the number input following voice guidance.
  • In this way, the input unit 11 acquires the telephone number and the content of the inquiry from the user terminal 50.
  • The input unit 11 outputs inquiry information including the telephone number and the inquiry content to the response content determination unit 12.
  • The inquiry information thus includes the telephone number as user identification information identifying the user who made the inquiry (a data-structure sketch follows below).
  • In the present embodiment the user identification information is described as a telephone number, but the present invention is not limited to this; the user identification information may be any other information that can identify the user terminal 50 (that is, the user).
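  • To make the structure of this inquiry information concrete, the following minimal Python sketch models a record that combines the caller's telephone number with either a voice recognition result or an IVR number input. The class name, field names, and example values are illustrative assumptions and are not taken from the publication.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class InquiryInfo:
        """Inquiry information passed from the input unit 11 to the response content determination unit 12."""
        phone_number: str                  # user identification information (telephone number)
        asr_result: Optional[str] = None   # text recognized from the caller's speech, if voice recognition was run
        ivr_digits: Optional[str] = None   # digits entered on the terminal in response to voice guidance, if any

        @property
        def inquiry_content(self) -> Optional[str]:
            """Content of the inquiry: the ASR result when available, otherwise the IVR input."""
            return self.asr_result if self.asr_result is not None else self.ivr_digits

    # Example: an incoming call in which the caller spoke their request.
    info = InquiryInfo(phone_number="+81-90-0000-0000", asr_result="I want to change my billing plan")
    print(info.inquiry_content)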
  • The response content determination unit 12 determines the response content based on the inquiry information input from the input unit 11.
  • Specifically, the response content determination unit 12 first determines whether or not the inquiry information includes a voice recognition result. As described above, when voice recognition is performed at the time the input unit 11 receives the incoming call from the user terminal 50, the inquiry information includes the voice recognition result. When it does, the response content determination unit 12 identifies the specific content of the voice recognition.
  • The content of the voice recognition is, for example, the success or failure of the voice recognition, the time required for the voice recognition, the characteristics of the voice, the inquiry content of the user terminal 50 derived by the voice recognition, and the like.
  • The characteristics of the voice are, for example, the language (Japanese, English, etc.), dialect (which regional language), generation, or speaking style (fast, slow, etc.) estimated from the user's voice.
  • The response content determination unit 12 updates the inquiry table TB of the information DB 13 based on the identified content of the voice recognition.
  • The information DB 13 stores, for each user, past information including the content of past voice recognition related to inquiries. "For each user" means, for example, for each telephone number, for each item of user terminal information (terminal serial number), or for each user identification ID entered by the user by text or voice.
  • FIG. 2 is a diagram showing an example of the inquiry table TB stored in the information DB 13.
  • In the inquiry table TB, the number of successful voice recognitions, the number of failed voice recognitions, the voice recognition utterance time, language information, the inquiry content, the operator who handled the call, and the presence or absence of a complaint are associated with a telephone number (user identification information identifying the user terminal 50). A sketch of such a record is given below.
  • The number of successful (or failed) voice recognitions is, for example, the total number of times voice recognition for the corresponding user terminal 50 has succeeded (or failed).
  • The voice recognition utterance time is, for example, the time required for one voice recognition; for a user terminal 50 that has performed voice recognition several times, it may be the average or the longest of those times.
  • The language information is various information about the characteristics of the voice in the voice recognition, for example the language (Japanese, English, etc.), dialect (which regional language), generation, or speaking style (fast, slow, etc.) estimated from the user's voice.
  • The inquiry content is the content of an inquiry identified by voice recognition performed in the past, by input on the user terminal 50 made in accordance with voice guidance, or by a response handled at the operator terminal 80.
  • The handling operator is information identifying the operator who responded in a past response handled at the operator terminal 80. Recording this information makes it possible to connect the user to the same operator terminal 80 as before, which can improve user satisfaction.
  • The presence or absence of a complaint is information indicating whether the user of the corresponding user terminal 50 complained during a past response handled at the operator terminal 80. Recording this information makes it possible, for example, to connect a user terminal 50 with many complaints to a dedicated (highly skilled) operator terminal 80, which can improve user satisfaction.
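  • The following Python sketch shows one possible shape for a row of the inquiry table TB and for the information DB 13 as a whole. The field names, types, and example values are assumptions made for illustration; the publication only names the stored items.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class InquiryRecord:
        """One row of the inquiry table TB, keyed by the caller's telephone number."""
        phone_number: str
        asr_success_count: int = 0           # number of successful voice recognitions
        asr_failure_count: int = 0           # number of failed voice recognitions
        utterance_seconds: float = 0.0       # voice recognition utterance time (e.g. average or longest)
        language_info: Optional[str] = None  # estimated language, dialect, generation, speaking style, etc.
        past_inquiries: List[str] = field(default_factory=list)  # inquiry content identified in the past
        last_operator: Optional[str] = None  # operator who handled a previous response
        had_complaint: bool = False          # whether the user complained in a past operator response

    # The information DB 13 can then be modelled as a mapping from telephone number to record.
    inquiry_table: Dict[str, InquiryRecord] = {}
    inquiry_table["+81-90-0000-0000"] = InquiryRecord(
        phone_number="+81-90-0000-0000", asr_failure_count=3, language_info="Japanese, fast speech"
    )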
  • Specifically, when the inquiry information includes the success or failure of the voice recognition, the response content determination unit 12 updates the number of successful (or failed) voice recognitions in the inquiry table TB; when it includes the time required for the voice recognition, it updates the voice recognition utterance time; when it includes characteristics of the voice, it updates the language information; and when it includes inquiry content, it updates the inquiry content of the inquiry table TB.
  • The response content determination unit 12 then refers to the inquiry table TB of the information DB 13, identifies the content of past voice recognition of the user terminal 50 identified by the telephone number included in the inquiry information, and determines the response content based on the identified content of past voice recognition.
  • For example, when the response content determination unit 12 refers to the inquiry table TB and finds that the past voice recognition failure rate or failure count of the user terminal 50 identified by the telephone number included in the inquiry information is larger than a predetermined value, it may determine the response content so that the user terminal 50 is not asked to perform voice recognition.
  • The failure count is obtained from the number of voice recognition failures in the inquiry table TB, and the failure rate is derived from the numbers of voice recognition failures and successes in the inquiry table TB.
  • Likewise, when the response content determination unit 12 refers to the inquiry table TB and finds that the time required for past voice recognition by the user terminal 50 identified by the telephone number included in the inquiry information is longer than a predetermined time, it may determine the response content so that the user terminal 50 is not asked to perform voice recognition.
  • The time required for voice recognition is obtained from the voice recognition utterance time in the inquiry table TB.
  • The response content determination unit 12 may also refer to the inquiry table TB and determine the response content for the user identified by the telephone number included in the inquiry information according to the characteristics of the voice in past voice recognition.
  • The voice characteristics are obtained from the language information in the inquiry table TB.
  • For example, the response content determination unit 12 identifies the user's language (Japanese, English, etc.), dialect (which regional language), generation, or speaking style (fast, slow, etc.) from the voice characteristics, and determines the response content so that the call is transferred to the operator terminal 80 of an operator suited to the identified characteristics.
  • The response content determination unit 12 may further refer to the inquiry table TB and determine the response content for the user terminal 50 identified by the telephone number included in the inquiry information according to the content of past inquiries.
  • The content of past inquiries is obtained from the inquiry content in the inquiry table TB.
  • For example, the response content determination unit 12 determines the response content so that the output unit 14 does not ask the user terminal 50 questions about information that has already been acquired, for example by past voice recognition.
  • When the response content determination unit 12 does not use the information in the inquiry table TB to generate a response, or when it wants to use that information but no entry is stored in the inquiry table TB for the corresponding user terminal 50, it generates the response without using the inquiry table TB. In this case, the response content determination unit 12 may generate a response in which the output unit 14 asks the user terminal 50 follow-up questions (questions for drilling down into the inquiry) according to a predetermined scenario, a response that asks the user terminal 50 to perform voice recognition, or a response in which the transfer unit 15 connects the call to the operator terminal 80 (including information such as which operator terminal 80 to connect to and at what timing). A sketch of this decision logic is given below.
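  • The following Python sketch illustrates one way the above decisions could be combined: skip voice recognition for callers whose recognition often fails or takes a long time, and route callers to an operator based on voice characteristics. The thresholds, the language check, and all names are illustrative assumptions; the publication only speaks of values "larger than a predetermined value" and times "longer than the predetermined time".

    from enum import Enum, auto
    from typing import Optional

    class ResponseMode(Enum):
        ASR_DIALOGUE = auto()          # ask the caller to speak and run voice recognition
        IVR_BUTTONS = auto()           # voice guidance with number (button) input only
        TRANSFER_TO_OPERATOR = auto()  # connect the call to an operator terminal

    def decide_response_mode(
        failure_count: int,
        success_count: int,
        utterance_seconds: float,
        language_info: Optional[str],
        max_failure_rate: float = 0.5,
        max_utterance_seconds: float = 20.0,
    ) -> ResponseMode:
        """Decide how to respond, based on the caller's past voice recognition record."""
        attempts = failure_count + success_count
        failure_rate = failure_count / attempts if attempts else 0.0

        # Callers whose recognition often fails, or takes a long time, are not asked to use ASR again.
        if failure_rate > max_failure_rate or utterance_seconds > max_utterance_seconds:
            return ResponseMode.IVR_BUTTONS

        # Voice characteristics (language, dialect, speaking style) can route the call to a suitable operator.
        if language_info is not None and not language_info.startswith("Japanese"):
            return ResponseMode.TRANSFER_TO_OPERATOR

        return ResponseMode.ASR_DIALOGUE

    print(decide_response_mode(failure_count=3, success_count=1, utterance_seconds=8.0, language_info="Japanese"))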
  • When the response content determination unit 12 uses the information in the inquiry table TB of the information DB 13 to generate the response, it refers to the inquiry table TB as described above.
  • When the response content determination unit 12 has generated a response, it outputs a response providing instruction to the output unit 14 or the transfer unit 15.
  • The response providing instruction output to the output unit 14 includes, for example, the generated response.
  • The response providing instruction output to the transfer unit 15 includes, for example, the generated response and the information used for determining the response content (the inquiry information, the information in the inquiry table TB relating to the corresponding user terminal 50, and the like).
  • The output unit 14 provides the user terminal 50 with a response to the inquiry according to the response content determined by the response content determination unit 12.
  • Specifically, the output unit 14 receives the response providing instruction from the response content determination unit 12 and outputs the response included in the instruction to the user terminal 50.
  • The response output by the output unit 14 may be provided to the user terminal 50 as, for example, voice or a text message.
  • The transfer unit 15 receives a response providing instruction from the response content determination unit 12 and issues a response request to the operator terminal 80, thereby providing a response to the user terminal 50 in cooperation with the operator terminal 80.
  • Specifically, the transfer unit 15 issues the response request to the operator terminal 80 indicated in the response included in the response providing instruction.
  • The response request includes, for example, the information used for determining the response content (the inquiry information, the information in the inquiry table TB relating to the user terminal 50, and the like).
  • The operator of the operator terminal 80 can refer to this information to provide an appropriate response to the user terminal 50.
  • The response recording unit 16 records the result of the operator terminal 80's response to the user terminal 50 in the inquiry table TB of the information DB 13 (that is, it updates the inquiry table TB).
  • Specifically, the operator terminal 80 transmits to the dialogue device 10, according to input from the operator, information such as the telephone number of the user terminal 50, the content of the inquiry from the user terminal 50, the name of the handling operator, and the presence or absence of a complaint from the user of the user terminal 50.
  • The response recording unit 16 updates the inquiry table TB of the information DB 13 based on the information transmitted from the operator terminal 80; specifically, it updates the inquiry content, the handling operator, and the presence or absence of a complaint for the corresponding user terminal 50 in the inquiry table TB (a sketch of this update follows below).
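  • As a minimal sketch of such an update, the function below writes the outcome of an operator response into a per-caller record. The dictionary keys and the function signature are assumptions chosen for illustration, not names from the publication.

    from typing import Dict

    def record_operator_response(
        inquiry_table: Dict[str, dict],
        phone_number: str,
        inquiry_content: str,
        operator_name: str,
        had_complaint: bool,
    ) -> None:
        """Store the outcome of an operator response in the inquiry table (information DB 13)."""
        record = inquiry_table.setdefault(phone_number, {})
        record.setdefault("past_inquiries", []).append(inquiry_content)
        record["last_operator"] = operator_name
        record["had_complaint"] = had_complaint

    table: Dict[str, dict] = {}
    record_operator_response(table, "+81-90-0000-0000", "billing plan change", "operator_A", had_complaint=False)
    print(table)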
  • FIG. 3 is a flowchart showing a process performed by the dialogue device 10.
  • In FIG. 3, the broadly defined process of "determining the response content" performed by the response content determination unit 12 is shown as a set of more specific processes: determining the response content based on the inquiry information (step S4), determining whether to use the information DB (steps S5 and S6), generating a response using the information DB (step S7), and generating a response without using the information DB (step S8).
  • First, the response content determination unit 12 of the dialogue device 10 determines whether or not the inquiry information includes predetermined information (specifically, a voice recognition result) (step S2).
  • When the inquiry information includes a voice recognition result, the response content determination unit 12 updates the inquiry table TB (see FIG. 2) of the information DB 13 based on the identified content of the voice recognition (step S3). Specifically, when the inquiry information includes the success or failure of the voice recognition, the response content determination unit 12 updates the number of successful (or failed) voice recognitions in the inquiry table TB; when it includes the time required for the voice recognition, it updates the voice recognition utterance time; when it includes characteristics of the voice, it updates the language information; and when it includes inquiry content, it updates the inquiry content of the inquiry table TB.
  • Next, the response content determination unit 12 determines the response content based on the inquiry information input from the input unit 11 (step S4).
  • For example, the response content determination unit 12 decides to use the information in the inquiry table TB of the information DB 13 when making a response that relates to voice recognition.
  • On the other hand, the response content determination unit 12 decides not to use the information in the inquiry table TB when, for example, making a response to a fixed question that does not depend on the information in the information DB 13.
  • The response content determination unit 12 then determines, based on the response content, whether or not the response uses the information (that is, past information) in the inquiry table TB of the information DB 13 (step S5). When it determines in step S5 that the response uses the information in the inquiry table TB, it further determines whether or not the inquiry table TB stores information on the corresponding user terminal 50 (step S6).
  • If it is determined in step S5 that the response does not use the information in the inquiry table TB, or if it is determined in step S6 that information on the corresponding user terminal 50 is not stored in the inquiry table TB, the response content determination unit 12 generates a response without using the information (that is, past information) in the inquiry table TB (step S8). On the other hand, if it is determined in step S6 that information on the corresponding user terminal 50 is stored in the inquiry table TB, the response content determination unit 12 generates a response using that information (step S7).
  • Next, the response content determination unit 12 determines whether or not the generated response relates to a transfer to the operator terminal 80 (step S9).
  • When the response relates to a transfer, the response content determination unit 12 outputs a response providing instruction to the transfer unit 15, and the transfer unit 15 issues a response request to the operator terminal 80, transferring the predetermined information to the operator terminal 80 (step S10).
  • The response request includes, for example, the information used for determining the response content (the inquiry information, the information in the inquiry table TB relating to the user terminal 50, and the like).
  • Then, a record of the response is transmitted from the operator terminal 80 to the response recording unit 16, and the response recording unit 16 stores the record in the inquiry table TB of the information DB 13 (step S11).
  • When the response does not relate to a transfer, the response content determination unit 12 outputs the response providing instruction to the output unit 14, and the output unit 14 outputs the response to the user terminal 50 (step S12). The overall flow is sketched below.
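  • The following Python sketch strings the steps of FIG. 3 together into a single handler. The step comments follow the description above; all data structures, keys, thresholds, and callback signatures are illustrative assumptions rather than the publication's actual interfaces.

    from typing import Callable, Dict, Optional

    def handle_incoming_call(
        inquiry: dict,
        inquiry_table: Dict[str, dict],
        send_to_caller: Callable[[str], None],
        request_operator: Callable[[dict], dict],
    ) -> None:
        """Rough outline of the flow of FIG. 3 (steps S2 to S12)."""
        phone = inquiry["phone_number"]

        # S2/S3: if the inquiry information contains a voice recognition result, update the table.
        if inquiry.get("asr_result") is not None:
            inquiry_table.setdefault(phone, {})["last_asr_result"] = inquiry["asr_result"]

        # S4-S6: decide the response content, using stored past information only when it exists and is relevant.
        past: Optional[dict] = inquiry_table.get(phone)
        use_past = inquiry.get("relates_to_asr", True) and past is not None

        # S7/S8: generate a response with or without the stored past information.
        if use_past and past.get("asr_failure_count", 0) > 2:
            response = {"transfer": True, "text": "Connecting you to an operator."}
        else:
            response = {"transfer": False, "text": "Please tell me about your inquiry."}

        # S9-S12: either transfer to the operator terminal and record the outcome, or answer directly.
        if response["transfer"]:
            outcome = request_operator({"inquiry": inquiry, "past": past})   # S10
            inquiry_table.setdefault(phone, {}).update(outcome)              # S11
        else:
            send_to_caller(response["text"])                                 # S12

    # Minimal usage with stub callbacks standing in for the output unit 14 and the transfer unit 15.
    handle_incoming_call(
        {"phone_number": "+81-90-0000-0000", "asr_result": "change my plan"},
        inquiry_table={},
        send_to_caller=print,
        request_operator=lambda req: {"last_operator": "operator_A", "had_complaint": False},
    )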
  • As described above, the dialogue device 10 of the dialogue system 1, which provides a response to an inquiry from the user terminal 50, includes the information DB 13 that stores, for each user, the content of past voice recognition related to inquiries; the input unit 11 that acquires from the user terminal 50 inquiry information including a telephone number identifying the user terminal 50 related to the inquiry; the response content determination unit 12 that refers to the inquiry table TB of the information DB 13, identifies the content of past voice recognition of the user terminal 50 specified by the telephone number included in the inquiry information, and determines the response content based on the identified content; and the output unit 14 that provides a response corresponding to the inquiry according to the response content determined by the response content determination unit 12.
  • In this dialogue device 10, inquiry information from the user terminal 50 is acquired, and the response content for the inquiry is determined based on the content of past voice recognition of the user terminal 50.
  • The content of past voice recognition includes, for example, the success or failure of voice recognition, characteristics of the voice, and the like.
  • The information DB 13 stores, for each user, the success or failure of past voice recognition as the content of past voice recognition, and when the response content determination unit 12 refers to the information DB 13 and finds that the past voice recognition failure rate or failure count of the user terminal 50 identified by the telephone number included in the inquiry information is larger than a predetermined value, it determines the response content so that the user terminal 50 is not asked to perform voice recognition.
  • With this configuration, a response can be provided to a user terminal 50 whose voice recognition is likely to fail by a method other than voice recognition (for example, button input following voice guidance, or transfer to an operator), which improves the satisfaction of users who cannot perform (or are not good at) voice recognition.
  • The information DB 13 also stores, for each user, the time required for past voice recognition as the content of past voice recognition, and when the response content determination unit 12 refers to the information DB 13 and finds that the time required for past voice recognition by the user terminal 50 identified by the telephone number included in the inquiry information is longer than a predetermined time, it determines the response content so that the user terminal 50 is not asked to perform voice recognition. A user terminal 50 that requires a long time for voice recognition is presumed to belong to a user who is not good at (or does not want to perform) voice recognition, so providing a response to such a user terminal 50 by a method other than voice recognition can improve user satisfaction.
  • The information DB 13 also stores, for each user, the characteristics of the voice in past voice recognition as the content of past voice recognition, and the response content determination unit 12 refers to the information DB 13 and determines the response content for the user terminal 50 identified by the telephone number included in the inquiry information according to those voice characteristics. A language, dialect, generation, speaking style, and the like can be identified from the voice characteristics, so user satisfaction can be improved by, for example, transferring the call to an operator suited to those characteristics.
  • The information DB 13 further stores, as the content of past voice recognition, the content of past inquiries derived by past voice recognition, and the response content determination unit 12 refers to the information DB 13 and determines the response content for the user terminal 50 identified by the telephone number included in the inquiry information according to the content of those past inquiries. By using the content of past inquiries to determine the response content, responses such as asking the user terminal 50 again for information that has already been obtained can be avoided, which shortens the response time and improves user satisfaction.
  • The dialogue device 10 described above may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • In the following description, the word "device" can be read as a circuit, a device, a unit, or the like.
  • The hardware configuration of the dialogue device 10 may include one or more of each of the devices illustrated, or may omit some of them.
  • Each function of the dialogue device 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, causing the processor 1001 to perform arithmetic operations, and controlling communication by the communication device 1004 and the reading and/or writing of data in the memory 1002 and the storage 1003.
  • The processor 1001, for example, runs an operating system to control the entire computer.
  • The processor 1001 may be configured as a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like.
  • For example, the control functions of the response content determination unit 12 and the like of the dialogue device 10 may be realized by the processor 1001.
  • The processor 1001 reads a program (program code), software modules, and data from the storage 1003 and/or the communication device 1004 into the memory 1002, and executes various processes according to them.
  • As the program, a program that causes a computer to execute at least part of the operations described in the above embodiment is used.
  • For example, the control functions of the response content determination unit 12 and the like of the dialogue device 10 may be realized by a control program stored in the memory 1002 and run on the processor 1001; the other functional blocks may be realized in the same way.
  • Although the various processes described above have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001.
  • The processor 1001 may be implemented by one or more chips.
  • The program may be transmitted from a network via an electric communication line.
  • The memory 1002 is a computer-readable recording medium and may be composed of at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory).
  • The memory 1002 may be called a register, a cache, a main memory (main storage device), or the like.
  • The memory 1002 can store an executable program (program code), software modules, and the like for implementing the method according to the embodiment of the present invention.
  • The storage 1003 is a computer-readable recording medium and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, and a magnetic strip.
  • The storage 1003 may be called an auxiliary storage device.
  • The above-mentioned storage medium may be, for example, a database, a server, or another appropriate medium including the memory 1002 and/or the storage 1003.
  • The communication device 1004 is hardware (a transmitting/receiving device) for communicating between computers via a wired and/or wireless network, and is also called, for example, a network device, a network controller, a network card, or a communication module.
  • The input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor) that accepts input from the outside.
  • The output device 1006 is an output device (for example, a display, a speaker, or an LED lamp) that performs output to the outside.
  • The input device 1005 and the output device 1006 may be integrated (for example, as a touch panel).
  • The devices such as the processor 1001 and the memory 1002 are connected by a bus 1007 for communicating information.
  • The bus 1007 may be composed of a single bus or of different buses between devices.
  • The dialogue device 10 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array), and some or all of the functional blocks may be realized by that hardware. For example, the processor 1001 may be implemented with at least one of these pieces of hardware.
  • Each aspect and embodiment described in this specification may be applied to systems that use LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (Wideband Code Division Multiple Access), GSM (Global System for Mobile Communications, registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-Wideband), Bluetooth (registered trademark), or other appropriate systems, and/or to next-generation systems extended based on them.
  • Input and output information and the like may be stored in a specific location (for example, a memory) or may be managed in a management table. Input and output information and the like can be overwritten, updated, or added to. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
  • A determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by comparing numerical values (for example, comparison with a predetermined value).
  • Notification of predetermined information (for example, notification that "X holds") is not limited to explicit notification and may be performed implicitly (for example, by not notifying the predetermined information).
  • Software, instructions, and the like may be transmitted and received via a transmission medium.
  • For example, when software is transmitted from a website, a server, or another remote source using wired technology such as coaxial cable, optical fiber cable, twisted pair, and digital subscriber line (DSL), and/or wireless technology such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
  • The information, signals, and the like described in this specification may be represented using any of a variety of different technologies.
  • For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • The information, parameters, and the like described in this specification may be represented by absolute values, by values relative to predetermined values, or by other corresponding information.
  • A user terminal may also be referred to by those skilled in the art as a mobile communication terminal, subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term.
  • The terms "determining" and "deciding" as used in this specification may encompass a wide variety of actions.
  • "Determining" and "deciding" can include, for example, regarding calculating, computing, processing, deriving, investigating, looking up (for example, searching in a table, a database, or another data structure), and ascertaining as having been "determined" or "decided".
  • "Determining" and "deciding" can also include regarding receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, and accessing (for example, accessing data in a memory) as having been "determined" or "decided".
  • Furthermore, "determining" and "deciding" can include regarding resolving, selecting, choosing, establishing, comparing, and the like as having been "determined" or "decided". That is, "determining" and "deciding" can include regarding some action as having been "determined" or "decided".
  • The phrase "based on" as used in this specification does not mean "based only on" unless expressly specified otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on".
  • Any reference to elements using designations such as "first" and "second" as used in this specification does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient way of distinguishing between two or more elements. Thus, references to first and second elements do not mean that only two elements may be employed, or that the first element must precede the second element in some way.
  • Where a device is referred to in this specification, it also includes a plurality of devices, unless it is clear from the context or the technology that only one device exists.
  • Reference signs: 1... Dialogue system; 10... Dialogue device; 11... Input unit (acquisition unit); 12... Response content determination unit (determination unit); 13... Information DB (storage unit); 14... Output unit (response providing unit); 15... Transfer unit (response providing unit); 50... User terminal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The dialogue device of the dialogue system of the present invention, which provides responses to inquiries from a user terminal, comprises: an information database that stores, for each user, past voice recognition content related to inquiries; an input unit that acquires, from the user terminal, inquiry information including a telephone number identifying the user terminal related to the inquiry; a response content determination unit that refers to an inquiry table in the information database, specifies the past voice recognition content of the user terminal specified by the telephone number included in the inquiry information, and determines response content on the basis of the specified past voice recognition content; and an output unit and a transfer unit that provide a response corresponding to the inquiry according to the response content determined by the response content determination unit.
PCT/JP2019/024372 2018-10-30 2019-06-19 Système de dialogue WO2020090148A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020554756A JP7093844B2 (ja) 2018-10-30 2019-06-19 対話システム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-203680 2018-10-30
JP2018203680 2018-10-30

Publications (1)

Publication Number Publication Date
WO2020090148A1 true WO2020090148A1 (fr) 2020-05-07

Family

ID=70462562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/024372 WO2020090148A1 (fr) 2018-10-30 2019-06-19 Système de dialogue

Country Status (2)

Country Link
JP (1) JP7093844B2 (fr)
WO (1) WO2020090148A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002182681A (ja) * 2000-12-13 2002-06-26 Nec Corp 音声認識型取引システム
JP2005142897A (ja) * 2003-11-07 2005-06-02 Fujitsu Support & Service Kk 電話受付システム
JP2015049337A (ja) * 2013-08-30 2015-03-16 株式会社東芝 音声応答装置、音声応答プログラム及び音声応答方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002182681A (ja) * 2000-12-13 2002-06-26 Nec Corp 音声認識型取引システム
JP2005142897A (ja) * 2003-11-07 2005-06-02 Fujitsu Support & Service Kk 電話受付システム
JP2015049337A (ja) * 2013-08-30 2015-03-16 株式会社東芝 音声応答装置、音声応答プログラム及び音声応答方法

Also Published As

Publication number Publication date
JPWO2020090148A1 (ja) 2021-09-02
JP7093844B2 (ja) 2022-06-30

Similar Documents

Publication Publication Date Title
JP6802364B2 (ja) 対話システム
WO2019202788A1 (fr) Système de dialogue
WO2019193796A1 (fr) Serveur d'interaction
US11971977B2 (en) Service providing apparatus
WO2020090147A1 (fr) Système de dialogue
JP7043593B2 (ja) 対話サーバ
WO2020090148A1 (fr) Système de dialogue
WO2019216054A1 (fr) Serveur interactif
JP7033195B2 (ja) 対話装置
JP6934825B2 (ja) 通信制御システム
JP7323370B2 (ja) 審査装置
WO2019187463A1 (fr) Serveur de dialogue
US11430440B2 (en) Dialog device
US11645477B2 (en) Response sentence creation device
JP6944594B2 (ja) 対話装置
WO2019102904A1 (fr) Dispositif d'interaction et système de réponse interactif
JP6960049B2 (ja) 対話装置
JP7357061B2 (ja) オーソリゼーション装置
JP6957671B2 (ja) 情報処理装置
JP2024115929A (ja) 音声書き起こしシステム及び音声翻訳システム
JP2018196017A (ja) 通信端末および通信システム
JP2024108744A (ja) 埋め込み表現生成システム
CN118051652A (zh) 数据处理方法、装置、存储介质及电子设备
CN118797118A (zh) 数据筛选方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19878759

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020554756

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19878759

Country of ref document: EP

Kind code of ref document: A1