WO2020177282A1 - Method and apparatus for machine dialogue, computer device and storage medium - Google Patents

Method and apparatus for machine dialogue, computer device and storage medium

Info

Publication number
WO2020177282A1
Authority
WO
WIPO (PCT)
Prior art keywords
response
value
intention
dialogue
model
Prior art date
Application number
PCT/CN2019/103612
Other languages
English (en)
Chinese (zh)
Inventor
吴壮伟
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020177282A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Definitions

  • the present invention relates to the field of artificial intelligence technology, and in particular to a machine dialogue method, apparatus, computer device and storage medium.
  • chatbots have gradually emerged.
  • a chatbot is a program that simulates human conversation or chat. It can be used for practical purposes such as customer service and consultative question answering, and some social robots are used simply to chat with people.
  • chatbots may be equipped with natural language processing systems, but more often they simply extract keywords from the input sentence and then retrieve answers from a database based on those keywords.
  • the answers of these chatbots are usually stiff and unemotional, and their chat patterns are all alike, so people lose interest in chatting with them and the utilization rate of chatbots remains low.
  • the invention provides a machine dialogue method, apparatus, computer device and storage medium to solve the problem of chatbots giving uniform, repetitive answers.
  • a machine dialogue method includes the following steps:
  • the dialogue intention is input into a preset response decision model, and the response strategy output by the response decision model in response to the dialogue intention is obtained, wherein the response decision model is used to select, from a plurality of preset candidate response strategies, the response strategy corresponding to the dialogue intention;
  • the language information is input into a response generation model having a mapping relationship with the response strategy, and the response information output by the response generation model in response to the language information is obtained.
  • a machine dialogue device including:
  • an acquisition module, used to acquire the language information input by the current user;
  • a recognition module, which inputs the language information into a preset intention recognition model and obtains the dialogue intention output by the intention recognition model in response to the language information;
  • a calculation module, which inputs the dialogue intention into a preset response decision model and obtains the response strategy output by the response decision model in response to the dialogue intention, wherein the response decision model is used to select, from a plurality of preset candidate response strategies, the response strategy corresponding to the dialogue intention;
  • a generation module, which inputs the language information into a response generation model that has a mapping relationship with the response strategy and obtains the response information output by the response generation model in response to the language information.
  • a computer device comprising a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the machine dialogue method described above.
  • a computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor, cause the processor to execute the steps of the machine dialogue method described above.
  • the beneficial effects of the embodiments of the present invention are achieved by: acquiring the language information input by the current user; inputting the language information into a preset intention recognition model and acquiring the dialogue intention output by the intention recognition model in response to the language information;
  • inputting the dialogue intention into a preset response decision model and obtaining the response strategy output by the response decision model in response to the dialogue intention, wherein the response decision model is used to select, from a plurality of preset candidate response strategies, the response strategy corresponding to the dialogue intention; and inputting the language information into a response generation model that has a mapping relationship with the response strategy and obtaining the response information output by the response generation model in response to the language information.
  • the response generation model is determined, and the reinforcement learning network model is introduced in the process of determining the response generation model.
  • different response generation models are used to generate different types of responses, so that the dialogue is diversified and more interesting.
  • FIG. 1 is a schematic diagram of the basic flow of a machine dialogue method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a process flow of determining a response strategy using a Q-value matrix in an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a process flow of determining a response strategy using a Q value reinforcement learning network according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of the training process of an LSTM-CNN neural network model according to an embodiment of the present invention
  • FIG. 5 is a block diagram of the basic structure of a machine dialogue device according to an embodiment of the present invention.
  • FIG. 6 is a block diagram of the basic structure of a computer device according to an embodiment of the present invention.
  • the terms "terminal" and "terminal equipment" used herein include both devices that have only a wireless signal receiver without transmitting capability, and devices with both receiving and transmitting hardware;
  • that is, devices having receiving and transmitting hardware capable of two-way communication over a two-way communication link.
  • such equipment may include: cellular or other communication equipment, with or without a single-line or multi-line display; PCS (Personal Communications Service) equipment, which may combine voice, data processing, fax and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and/or a conventional laptop and/or palmtop computer or other device that has and/or includes a radio-frequency receiver.
  • the "terminal" and "terminal equipment" used here may be portable, transportable, installed in vehicles (air, sea and/or land), or suitable and/or configured to run locally and/or, in distributed form, at any location on earth and/or in space.
  • the "terminal" and "terminal device" used here may also be a communication terminal, an Internet terminal, or a music/video playback terminal, such as a PDA, a MID (Mobile Internet Device) and/or a mobile phone with music/video playback functions, or devices such as smart TVs and set-top boxes.
  • the terminal in this embodiment is the aforementioned terminal.
  • FIG. 1 is a schematic diagram of the basic flow of a machine dialogue method in this embodiment.
  • a machine dialogue method includes the following steps:
  • the language information input by the user is acquired through the interactive page on the terminal.
  • the received information can be text information or voice information.
  • the voice information is converted into text information through a voice recognition device.
  • the recognition of the dialogue intention can be based on keywords, for example to determine whether the intention is task-type or chat-type.
  • the task type is a dialogue intention that requires the robot to answer a question. It can be determined whether the input language information contains query keywords, such as "?", "what", "how much", "where", "how" and other interrogative particles. A regular matching algorithm can also be used to determine whether the input language information is a question sentence.
  • a regular expression is a logical formula for operating on strings: predefined specific characters, and combinations of those characters, form a "rule string", and this rule string expresses a filtering logic to be applied to strings.
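The regular-matching idea described above can be sketched as follows. This is a minimal illustration with a hypothetical rule string of common question markers; the patent does not give the actual pattern:

```python
import re

# Hypothetical rule string: fullwidth/halfwidth question marks plus a few
# interrogative keywords. The real pattern used by the method is not given.
QUESTION_PATTERN = re.compile(r"[??]|什么|多少|哪里|怎么|\bwhat\b|\bhow\b|\bwhere\b")

def is_question(sentence: str) -> bool:
    """Return True if the sentence matches the question rule string (task-type intent)."""
    return QUESTION_PATTERN.search(sentence) is not None

print(is_question("Where is the nearest branch?"))  # True (matches "?")
print(is_question("Thanks, that was helpful."))     # False
```

A real system would maintain a much larger rule string and normalize case before matching; this sketch only shows the filtering logic.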
  • the dialogue intention is a chat type.
  • dialogue intentions can be subdivided.
  • the chat type can be subdivided into a positive type, covering emotions such as affirmation, praise and thanks, and a negative type, covering emotions such as complaints, grievances and accusations.
  • the subdivided dialogue intentions can be judged using preset keyword lists.
  • a keyword list is preset for each dialogue intention. When keywords extracted from the input language information match the keyword list corresponding to a certain dialogue intention, the input language information is considered to correspond to that dialogue intention.
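A minimal sketch of this keyword-list lookup; the intent names and keywords below are illustrative assumptions, not taken from the patent:

```python
# Preset keyword lists, one per subdivided dialogue intention (made-up contents).
KEYWORD_LISTS = {
    "positive": {"thanks", "great", "praise", "good"},
    "negative": {"complaint", "terrible", "blame", "bad"},
}

def classify_by_keywords(words):
    """Return the first intention whose keyword list overlaps the extracted words."""
    for intent, keywords in KEYWORD_LISTS.items():
        if keywords & set(words):
            return intent
    return "chat"  # no list matched: fall back to the generic chat intention

print(classify_by_keywords(["thanks", "for", "the", "help"]))  # positive
```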
  • the dialogue intention recognition is performed through the pre-trained LSTM-CNN neural network model.
  • first, Chinese word segmentation is performed using a basic word segmentation library, stop words and punctuation are removed in turn, word embedding vectors are obtained through a word vector model, and these are passed to the LSTM-CNN-based neural network model.
  • the word embedding vectors enter multi-layer LSTM units to obtain the state vector and output of each stage; then convolution and pooling operations (CNN) are performed on the state vectors of each stage to obtain an integrated vector index; finally, the integrated vector index is fed into a softmax function to obtain the probability of each intention.
  • the intention with the highest probability is the dialogue intention corresponding to the input language information.
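The final softmax step above, and the argmax over its output, can be sketched as follows; the scores and the intention ordering are invented for illustration:

```python
import math

def softmax(scores):
    """Convert the integrated vector of intention scores into probabilities."""
    # Subtract the max score before exponentiating, for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for (task, positive-chat, negative-chat) intentions.
probs = softmax([2.0, 0.5, 0.1])
best = probs.index(max(probs))  # index of the highest-probability intention
print(probs, best)
```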
  • see Figure 4 for the training process of the LSTM-CNN neural network model.
  • the dialogue intention of the input language information is obtained, and the dialogue intention is input into the response decision model to determine the response strategy for the input language information.
  • different response strategies can be preset for different dialogue intentions, for example, for task-based intentions, the response strategy is question answering, and for negative intentions, the response strategy is emotional resolution.
  • Different response strategies correspond to different response generation models.
  • the Q value is calculated to determine the response strategy to be adopted for the dialogue intention.
  • the Q value measures the value, to the entire chat process, of adopting a certain response strategy for a certain dialogue intention. For example, consider the degree of pleasure of the chat: it can be measured by the proportion of negative-intention sentences among the user's inputs over the entire dialogue process, and the Q value is then the value of adopting a certain response strategy in a certain round of dialogue to chat pleasure.
  • a Q-value matrix can be preset from empirical values, whose elements are q(s,a), s ∈ S, a ∈ A, where S is the dialogue intention space and A is the response strategy space.
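A toy version of such a preset Q-value matrix and the lookup over it; the intention names, strategy names, and q values are invented empirical stand-ins:

```python
# Toy Q-value matrix q(s, a): rows are dialogue intentions s in S, columns are
# candidate response strategies a in A. All numbers are made up for illustration.
Q_MATRIX = {
    "task":          {"question_answering": 0.9, "small_talk": 0.2, "emotional_resolution": 0.1},
    "positive_chat": {"question_answering": 0.1, "small_talk": 0.8, "emotional_resolution": 0.2},
    "negative_chat": {"question_answering": 0.1, "small_talk": 0.3, "emotional_resolution": 0.9},
}

def select_strategy(intention):
    """Query the matrix row for the intention and take the argmax over q values."""
    row = Q_MATRIX[intention]
    return max(row, key=row.get)

print(select_strategy("negative_chat"))  # emotional_resolution
```

This is exactly the two-step procedure the method describes later (query the matrix by intention, then pick the candidate strategy with the largest q value).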
  • the Q value is calculated by a Q value reinforcement learning network model.
  • the input of the Q-value reinforcement learning network model is s, the dialogue intention, and the output is Q(s,a), that is, the expected benefit of starting from state s and adopting strategy a.
  • the training of the Q-value reinforcement learning network model takes the convergence of the first loss function as the training objective; the first loss function is the squared error L(w) = (Q(s,a) − Q̂(s,a;w))², where:
  • s is the dialogue intention;
  • a is the response strategy;
  • w is the network parameter of the Q-value reinforcement learning network model;
  • Q(s,a) is the true value and Q̂(s,a;w) is the predicted value.
  • w is the network parameter obtained by training the Q-value reinforcement learning network model.
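As a rough illustration of training against a squared-error objective of this kind, the sketch below fits a one-parameter linear approximator Q̂(s,a;w) = w·φ(s,a) by gradient descent. The feature map φ and the target Q values are invented for the example; the patent's actual network is not specified here:

```python
def phi(s, a):
    """Toy deterministic feature of a (state, action) pair, in [0, 1)."""
    return ((s + 1) * (a + 2)) % 5 / 5.0

def train_q(samples, lr=0.1, epochs=200):
    """Fit w by gradient descent on L(w) = (q_true - w * phi(s, a))**2."""
    w = 0.0
    for _ in range(epochs):
        for s, a, q_true in samples:
            q_pred = w * phi(s, a)
            # dL/dw = -2 * (q_true - q_pred) * phi(s, a); step against the gradient.
            w += lr * 2 * (q_true - q_pred) * phi(s, a)
    return w

# Two made-up training triples (s, a, true Q value); both are fit exactly by w = 1.
samples = [(0, 1, 0.6), (1, 0, 0.8)]
w = train_q(samples)
print(w)
```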
  • the response decision model is the aforementioned Q-value matrix or Q-value reinforcement learning network model.
  • S104: Input the language information into a response generation model that has a mapping relationship with the response strategy, and obtain the response information output by the response generation model in response to the language information.
  • a corresponding response generation model is preset.
  • when the response strategy is the question-answering type, the corresponding response generation model includes a question-and-answer database, and the corresponding answer is matched by searching for keywords in the input language information.
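A minimal sketch of this keyword-based retrieval from a question-and-answer database; the database entries and fallback message are made up for illustration:

```python
# Tiny Q&A database: keyword sets mapped to canned answers (illustrative only).
QA_DATABASE = {
    frozenset({"opening", "hours"}): "We are open 9:00-18:00, Monday to Friday.",
    frozenset({"reset", "password"}): "You can reset your password from the login page.",
}

def answer(question_words):
    """Return the answer whose keyword set overlaps the question words the most."""
    best_keys = max(QA_DATABASE, key=lambda k: len(k & set(question_words)))
    if not best_keys & set(question_words):
        return "Sorry, I don't know the answer to that."
    return QA_DATABASE[best_keys]

print(answer(["what", "are", "your", "opening", "hours"]))
```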
  • the corresponding response generation model adopts the trained Seq2Seq model.
  • the specific training process is: prepare the training corpus, that is, input sequences and their corresponding output sequences; input each input sequence into the Seq2Seq model and calculate the probability of the output sequence; and adjust the parameters of the Seq2Seq model so that, over the entire sample set, the probability of producing the corresponding output sequence after Seq2Seq is maximized.
  • the training corpus prepared here requires the sentiment of the input sentences to be negative and the sentiment of the output sentences to be positive.
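The training objective above can be illustrated with the usual decomposition of a sequence's probability into per-token probabilities; the numbers below are stand-ins for decoder outputs, not values from any real model:

```python
import math

def sequence_log_prob(token_probs):
    """Log-probability of an output sequence: sum of per-token log-probabilities."""
    return sum(math.log(p) for p in token_probs)

# Hypothetical decoder probabilities for each target token of one training pair,
# before and after parameter tuning. Training raises the sequence probability.
before = [0.2, 0.1, 0.3]
after = [0.6, 0.5, 0.7]
print(sequence_log_prob(before) < sequence_log_prob(after))  # True
```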
  • step S103 further includes the following steps:
  • S112 Determine that the candidate response strategy corresponding to the largest q value in the Q value matrix is the response strategy of the dialogue intention.
  • the candidate response strategy with the largest q value is the response strategy corresponding to the dialogue intention.
  • step S103 further includes the following steps:
  • S121: Input the candidate response strategies and the dialogue intention into the Q-value reinforcement learning network model in turn, and obtain the Q value corresponding to each candidate response strategy output by the Q-value reinforcement learning network model;
  • each candidate response strategy, together with the dialogue intention, is input into the Q-value reinforcement learning network model to obtain the Q value of adopting that strategy for the dialogue intention.
  • S122 Determine that the candidate response strategy with the largest Q value is the response strategy of the dialogue intention.
  • the candidate response strategy with the largest Q value is the response strategy that the dialogue intention should adopt.
  • the training of the LSTM-CNN neural network model in the embodiment of the present invention includes the following steps:
  • the training samples are labeled with the category of dialogue intent.
  • the categories marked on the training samples are the task type and the chat type.
  • the task type corresponds to user needs for question answering;
  • the chat type corresponds to needs for small talk.
  • N is the number of training samples.
  • the corresponding label Yi is the final intent recognition result
  • the LSTM-CNN neural network model takes the convergence of the second loss function as the training target, that is, the weight of each node in the neural network model is adjusted so that the second loss function reaches its minimum value.
  • when the value of the loss function no longer decreases but instead increases, the training ends.
  • the second loss function is used to measure whether the dialogue intention of a training sample predicted by the LSTM-CNN neural network model is consistent with the dialogue intention category marked on that training sample. If the second loss function does not converge, the weight of each node in the neural network model is adjusted through the gradient descent method;
  • training ends when the reference category of dialogue intention predicted by the neural network is consistent with the dialogue intention category marked on the training sample, that is, when continued weight adjustment no longer decreases the value of the loss function but instead increases it.
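The stopping rule described above (train while the loss decreases, stop once it starts to rise) can be sketched with a toy loss curve standing in for the second loss function:

```python
def train_until_loss_rises(loss_at_epoch, max_epochs=100):
    """Return the epoch at which training stops (loss no longer decreasing)."""
    prev = loss_at_epoch(0)
    for epoch in range(1, max_epochs):
        cur = loss_at_epoch(epoch)
        if cur >= prev:   # loss stopped decreasing: end training
            return epoch
        prev = cur
    return max_epochs

# Toy loss curve: decreases until epoch 10, then rises (as with overfitting).
toy_loss = lambda e: (e - 10) ** 2
print(train_until_loss_rises(toy_loss))  # 11
```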
  • FIG. 5 is a block diagram of the basic structure of the machine dialogue device of this embodiment.
  • a machine dialogue device includes: an acquisition module 210, an identification module 220, a calculation module 230, and a generation module 240.
  • the obtaining module 210 is used to obtain the language information input by the current user;
  • the recognition module 220 is used to input the language information into a preset intention recognition model and obtain the dialogue intention output by the intention recognition model in response to the language information;
  • the calculation module 230 inputs the dialogue intention into a preset response decision model and obtains the response strategy output by the response decision model in response to the dialogue intention, wherein the response decision model is used to select, from a plurality of preset candidate response strategies, the response strategy corresponding to the dialogue intention;
  • the generation module 240 inputs the language information into the response generation model that has a mapping relationship with the response strategy, and obtains the response information output by the response generation model in response to the language information.
  • the embodiment of the present invention obtains the language information input by the current user; inputs the language information into a preset intention recognition model and obtains the dialogue intention output by the intention recognition model in response to the language information; inputs the dialogue intention into a preset response decision model to obtain the response strategy output by the response decision model in response to the dialogue intention, wherein the response decision model is used to select, from a plurality of preset candidate response strategies, the response strategy corresponding to the dialogue intention; and inputs the language information into a response generation model that has a mapping relationship with the response strategy, obtaining the response information output by the response generation model in response to the language information.
  • the response generation model is thus determined, and a reinforcement learning network model is introduced in the process of determining it. For different intentions, different response generation models are used to generate different types of responses, so that the dialogue is diversified and more interesting.
  • the response decision model in the machine dialogue device is based on a preset Q-value matrix, wherein the element q in the Q-value matrix is used to evaluate the value of each candidate response strategy for each dialogue intention.
  • the machine dialogue device further includes: a first query submodule and a first confirmation submodule, wherein the first query submodule is used for querying the Q-value matrix according to the dialogue intention; the first confirmation submodule is used for It is determined that the candidate response strategy corresponding to the largest q value in the Q value matrix is the response strategy of the dialogue intention.
  • the response decision model in the machine dialogue device is based on a pre-trained Q-value reinforcement learning network model, wherein the Q-value reinforcement learning network model is characterized by the first loss function L(w) = (Q(s,a) − Q̂(s,a;w))², where:
  • s is the dialogue intention;
  • a is the response strategy;
  • w is the network parameter of the Q-value reinforcement learning network model;
  • Q(s,a) is the true value and Q̂(s,a;w) is the predicted value. The value of the network parameter w is adjusted, and when the first loss function reaches its minimum value, the Q-value reinforcement learning network model defined by that value of w is determined to be the pre-trained Q-value reinforcement learning network model.
  • the machine dialogue device further includes: a first processing submodule and a second confirmation submodule.
  • the first processing sub-module is configured to sequentially input candidate response strategies and the dialogue intention into the Q-value reinforcement learning network model, and obtain Q corresponding to each candidate response strategy output by the Q-value reinforcement learning network model. Value; the second confirmation sub-module is used to determine that the candidate response strategy with the largest Q value is the response strategy of the dialogue intention.
  • the preset intention recognition model in the machine dialogue device uses a pre-trained LSTM-CNN neural network model
  • the machine dialogue device further includes: a first acquisition submodule, a second processing submodule, a first comparison submodule and a first execution submodule, wherein the first acquisition submodule is used to acquire training samples marked with dialogue intention categories, the training samples being language information marked with different dialogue intention categories; the second processing submodule is used to input the training samples into the LSTM-CNN neural network model to obtain the reference category of the dialogue intention of each training sample; and the first comparison submodule is used to compare, through the second loss function, whether the dialogue intention reference category of each training sample is consistent with its marked dialogue intention category, wherein the second loss function is the cross-entropy L = −(1/N) Σᵢ Yᵢ log(Ŷᵢ), where:
  • N is the number of training samples;
  • Yᵢ is the label of the i-th training sample, i.e. the final intention recognition result, and Ŷᵢ is the model's predicted probability for that category.
  • the preset intention recognition model in the machine dialogue device adopts a regular matching algorithm, wherein the rule string used by the regular matching algorithm includes at least a question feature string; the machine dialogue device also includes a first matching submodule, which is used to perform a regular matching operation between the language information and the rule string.
  • if the result is a match, the dialogue intention is determined to be task-type; otherwise, the dialogue intention is determined to be chat-type.
  • the response generation model in the machine dialogue device includes at least a pre-trained Seq2Seq model;
  • the machine dialogue device further includes a second acquisition submodule and a third processing submodule, wherein the second acquisition submodule is used to obtain a training corpus including input sequences and output sequences, and the third processing submodule is used to input the input sequences into the Seq2Seq model and adjust the parameters of the Seq2Seq model so that the probability that the Seq2Seq model outputs the corresponding output sequence in response to each input is maximized.
  • FIG. 6 is a block diagram of the basic structure of the computer device in this embodiment.
  • the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus.
  • the non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions.
  • the database may store control information sequences.
  • when the computer-readable instructions stored in the non-volatile storage medium are executed by the processor, the processor can implement the machine dialogue method described in any of the above embodiments.
  • the processor of the computer device is used to provide calculation and control capabilities and supports the operation of the entire computer device.
  • computer-readable instructions may be stored in the memory of the computer device; when these computer-readable instructions are executed by the processor, the processor executes the machine dialogue method described in any of the foregoing embodiments.
  • the network interface of the computer device is used to connect and communicate with the terminal.
  • the processor is used to execute the specific content of the acquisition module 210, the recognition module 220, the calculation module 230, and the generation module 240 in FIG. 5, and the memory stores the program codes and various data required to execute the above modules.
  • the network interface is used for data transmission between user terminals or servers.
  • the memory in this embodiment stores the program codes and data required to execute all the sub-modules in the machine dialogue method, and the server can call the program codes and data of the server to execute the functions of all the sub-modules.
  • the computer device obtains the language information input by the current user; inputs the language information into a preset intention recognition model and obtains the dialogue intention output by the intention recognition model in response to the language information; and inputs the dialogue intention into a preset response decision model, obtaining the response strategy output by the response decision model in response to the dialogue intention, wherein the response decision model is used to select, from a plurality of preset candidate response strategies, the response strategy corresponding to the dialogue intention; the language information is then input into a response generation model that has a mapping relationship with the response strategy, and the response information output by the response generation model in response to the language information is obtained.
  • the response generation model is determined, and the reinforcement learning network model is introduced in the process of determining the response generation model.
  • different response generation models are used to generate different types of responses, so that the dialogue is diversified and more interesting.
  • the present invention also provides a storage medium storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors execute the steps of the machine dialogue method described in any of the above embodiments.
  • the computer program can be stored in a computer-readable storage medium, and when executed it may include the processes of the above-mentioned method embodiments.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The embodiment of the present invention belongs to the technical field of artificial intelligence. The invention relates to a machine dialogue method and apparatus, a computer device and a storage medium. The method comprises the following steps: acquiring language information input by a current user; inputting the language information into a preset intention recognition model, and acquiring a dialogue intention output by the intention recognition model in response to the language information; inputting the dialogue intention into a preset response decision model, and acquiring a response strategy output by the response decision model in response to the dialogue intention; and inputting the language information into a response generation model having a mapping relationship with the response strategy, and acquiring response information output by the response generation model in response to the language information. Performing intention recognition, determining a response generation model and producing responses of different types make the dialogue diversified and more interesting.
PCT/CN2019/103612 2019-03-01 2019-08-30 Procédé et appareil de dialogue avec une machine, dispositif informatique et support de stockage WO2020177282A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910154323.9 2019-03-01
CN201910154323.9A CN110046221B (zh) 2019-03-01 2019-03-01 Machine dialogue method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020177282A1 true WO2020177282A1 (fr) 2020-09-10

Family

ID=67274468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103612 WO2020177282A1 (fr) 2019-03-01 2019-08-30 Method and apparatus for dialogue with a machine, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN110046221B (fr)
WO (1) WO2020177282A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085594A (zh) * 2020-09-14 2020-12-15 深圳前海微众银行股份有限公司 Identity verification method, device, and readable storage medium
CN112131362A (zh) * 2020-09-22 2020-12-25 腾讯科技(深圳)有限公司 Dialogue sentence generation method and apparatus, storage medium, and electronic device
CN112199927A (zh) * 2020-10-19 2021-01-08 古联(北京)数字传媒科技有限公司 Method and apparatus for filling in punctuation in ancient texts
CN112380875A (zh) * 2020-11-18 2021-02-19 杭州大搜车汽车服务有限公司 Dialogue label tracking method and apparatus, electronic apparatus, and storage medium
CN112528679A (zh) * 2020-12-17 2021-03-19 科大讯飞股份有限公司 Intention understanding model training method and apparatus, and intention understanding method and apparatus
CN112559714A (zh) * 2020-12-24 2021-03-26 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
CN113239167A (zh) * 2021-05-31 2021-08-10 百融云创科技股份有限公司 Task-oriented dialogue management method and system capable of automatically generating dialogue strategies
CN113641806A (zh) * 2021-07-28 2021-11-12 北京百度网讯科技有限公司 Dialogue method and system, electronic device, and storage medium
CN113705249A (zh) * 2021-08-25 2021-11-26 上海云从企业发展有限公司 Dialogue processing method, system, apparatus, and computer-readable storage medium
CN114490985A (zh) * 2022-01-25 2022-05-13 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
EP4020326A1 (fr) * 2020-12-25 2022-06-29 Beijing Baidu Netcom Science and Technology Co., Ltd Model training method and apparatus, device, storage medium, and program product
CN116501852A (zh) * 2023-06-29 2023-07-28 之江实验室 Controllable dialogue model training method and apparatus, storage medium, and electronic device
CN116737888A (zh) * 2023-01-11 2023-09-12 北京百度网讯科技有限公司 Training method for a dialogue generation model, and method and apparatus for determining reply text
CN117556022A (zh) * 2023-12-18 2024-02-13 北京中关村科金技术有限公司 Intelligent customer service intention recognition method, apparatus, and system
CN118396659A (zh) * 2024-06-26 2024-07-26 广州平云信息科技有限公司 AIGC-based method and system for analyzing user behavior for digital cultural products
CN118568241A (zh) * 2024-07-31 2024-08-30 浙江大学 Intention prediction method for user dialogue and user profiling based on a pre-trained model

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046221B (zh) * 2019-03-01 2023-12-22 平安科技(深圳)有限公司 Machine dialogue method and apparatus, computer device, and storage medium
CN110414005B (zh) * 2019-07-31 2023-10-10 达闼机器人股份有限公司 Intention recognition method, electronic device, and storage medium
CN112396481A (zh) * 2019-08-13 2021-02-23 北京京东尚科信息技术有限公司 Offline product information sending method and system, electronic device, and storage medium
CN110472035A (zh) * 2019-08-26 2019-11-19 杭州城市大数据运营有限公司 Intelligent response method and apparatus, computer device, and storage medium
CN110717022A (zh) * 2019-09-18 2020-01-21 平安科技(深圳)有限公司 Robot dialogue generation method and apparatus, readable storage medium, and robot
CN111739506B (zh) * 2019-11-21 2023-08-04 北京汇钧科技有限公司 Response method, terminal, and storage medium
CN110928997A (zh) * 2019-12-04 2020-03-27 北京文思海辉金信软件有限公司 Intention recognition method and apparatus, electronic device, and readable storage medium
CN111209380B (zh) * 2019-12-31 2023-07-28 深圳追一科技有限公司 Control method and apparatus for a dialogue robot, computer device, and storage medium
CN113132214B (zh) * 2019-12-31 2023-07-18 深圳市优必选科技股份有限公司 Dialogue method and apparatus, server, and storage medium
CN111341309A (zh) 2020-02-18 2020-06-26 百度在线网络技术(北京)有限公司 Voice interaction method, apparatus, device, and computer storage medium
CN111400450B (zh) * 2020-03-16 2023-02-03 腾讯科技(深圳)有限公司 Human-machine dialogue method, apparatus, device, and computer-readable storage medium
CN111538820A (zh) * 2020-04-10 2020-08-14 出门问问信息科技有限公司 Abnormal reply processing method and apparatus, and computer-readable storage medium
CN111681653A (zh) * 2020-04-28 2020-09-18 平安科技(深圳)有限公司 Call control method and apparatus, computer device, and storage medium
CN111611365A (zh) * 2020-05-19 2020-09-01 上海鸿翼软件技术股份有限公司 Flow control method, apparatus, device, and storage medium for a dialogue system
CN111611350B (zh) * 2020-05-26 2024-04-09 北京妙医佳健康科技集团有限公司 Health-knowledge-based response method and apparatus, and electronic device
CN111666396B (zh) * 2020-06-05 2023-10-31 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for evaluating satisfaction of user intention understanding
CN111881254B (zh) * 2020-06-10 2024-08-09 百度在线网络技术(北京)有限公司 Scripted-speech generation method and apparatus, electronic device, and storage medium
CN111651582B (zh) * 2020-06-24 2023-06-23 支付宝(杭州)信息技术有限公司 Method and system for simulating user utterances
CN111797215B (zh) * 2020-06-24 2024-08-13 北京小米松果电子有限公司 Dialogue method and apparatus, and storage medium
CN112347788A (zh) * 2020-11-06 2021-02-09 平安消费金融有限公司 Corpus processing method and apparatus, and storage medium
CN112559700A (zh) * 2020-11-09 2021-03-26 联想(北京)有限公司 Response processing method, intelligent device, and storage medium
CN112733649B (zh) * 2020-12-30 2023-06-20 平安科技(深圳)有限公司 Method for recognizing user intention based on video images, and related device
CN112765959B (zh) * 2020-12-31 2024-05-28 康佳集团股份有限公司 Intention recognition method, apparatus, device, and computer-readable storage medium
CN112328776A (zh) 2021-01-04 2021-02-05 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
CN112836028A (zh) * 2021-01-13 2021-05-25 国家电网有限公司客户服务中心 Machine-learning-based multi-turn dialogue method and system
CN112800204A (zh) * 2021-02-24 2021-05-14 浪潮云信息技术股份公司 Construction method for an intelligent dialogue system
CN113220856A (zh) * 2021-05-28 2021-08-06 天津大学 Multi-turn dialogue system based on a Chinese pre-trained model
CN113360618B (zh) * 2021-06-07 2022-03-11 暨南大学 Intelligent robot dialogue method and system based on offline reinforcement learning
CN113282755A (zh) * 2021-06-11 2021-08-20 上海寻梦信息技术有限公司 Conversational text classification method, system, device, and storage medium
CN113806503A (zh) * 2021-08-25 2021-12-17 北京库睿科技有限公司 Dialogue fusion method, apparatus, and device
CN116521850B (zh) * 2023-07-04 2023-12-01 北京红棉小冰科技有限公司 Reinforcement-learning-based interaction method and apparatus
CN117708305B (zh) * 2024-02-05 2024-04-30 天津英信科技有限公司 Dialogue processing method and system for a response robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106777081A (zh) * 2016-12-13 2017-05-31 竹间智能科技(上海)有限公司 Method and apparatus for determining a response strategy of a dialogue system
CN106934452A (zh) * 2017-01-19 2017-07-07 深圳前海勇艺达机器人有限公司 Robot dialogue method and system
CN107146610A (zh) * 2017-04-10 2017-09-08 北京猎户星空科技有限公司 Method and apparatus for determining user intention
CN107665708A (zh) * 2016-07-29 2018-02-06 科大讯飞股份有限公司 Intelligent voice interaction method and system
CN110046221A (zh) * 2019-03-01 2019-07-23 平安科技(深圳)有限公司 Machine dialogue method and apparatus, computer device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150179170A1 (en) * 2013-12-20 2015-06-25 Microsoft Corporation Discriminative Policy Training for Dialog Systems
CN108363690A (zh) * 2018-02-08 2018-08-03 北京十三科技有限公司 Neural-network-based dialogue semantic intention prediction method and learning/training method
CN108829797A (zh) * 2018-04-25 2018-11-16 苏州思必驰信息科技有限公司 Multi-agent dialogue strategy system construction method and adaptation method
CN109063164A (zh) * 2018-08-15 2018-12-21 百卓网络科技有限公司 Deep-learning-based intelligent question answering method


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085594A (zh) * 2020-09-14 2020-12-15 深圳前海微众银行股份有限公司 Identity verification method, device, and readable storage medium
CN112085594B (zh) * 2020-09-14 2024-05-28 深圳前海微众银行股份有限公司 Identity verification method, device, and readable storage medium
CN112131362A (zh) * 2020-09-22 2020-12-25 腾讯科技(深圳)有限公司 Dialogue sentence generation method and apparatus, storage medium, and electronic device
CN112131362B (zh) * 2020-09-22 2023-12-12 腾讯科技(深圳)有限公司 Dialogue sentence generation method and apparatus, storage medium, and electronic device
CN112199927A (zh) * 2020-10-19 2021-01-08 古联(北京)数字传媒科技有限公司 Method and apparatus for filling in punctuation in ancient texts
CN112380875A (zh) * 2020-11-18 2021-02-19 杭州大搜车汽车服务有限公司 Dialogue label tracking method and apparatus, electronic apparatus, and storage medium
CN112528679A (zh) * 2020-12-17 2021-03-19 科大讯飞股份有限公司 Intention understanding model training method and apparatus, and intention understanding method and apparatus
CN112528679B (zh) * 2020-12-17 2024-02-13 科大讯飞股份有限公司 Intention understanding model training method and apparatus, and intention understanding method and apparatus
CN112559714A (zh) * 2020-12-24 2021-03-26 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
CN112559714B (zh) * 2020-12-24 2024-04-12 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
EP4020326A1 (fr) * 2020-12-25 2022-06-29 Beijing Baidu Netcom Science and Technology Co., Ltd Model training method and apparatus, device, storage medium, and program product
CN113239167A (zh) * 2021-05-31 2021-08-10 百融云创科技股份有限公司 Task-oriented dialogue management method and system capable of automatically generating dialogue strategies
CN113641806B (zh) * 2021-07-28 2023-06-23 北京百度网讯科技有限公司 Dialogue method and system, electronic device, and storage medium
CN113641806A (zh) * 2021-07-28 2021-11-12 北京百度网讯科技有限公司 Dialogue method and system, electronic device, and storage medium
US12118319B2 (en) 2021-07-28 2024-10-15 Beijing Baidu Netcom Science Technology Co., Ltd. Dialogue state rewriting and reply generating method and system, electronic device and storage medium
CN113705249A (zh) * 2021-08-25 2021-11-26 上海云从企业发展有限公司 Dialogue processing method, system, apparatus, and computer-readable storage medium
CN114490985B (zh) * 2022-01-25 2023-01-31 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
CN114490985A (zh) * 2022-01-25 2022-05-13 北京百度网讯科技有限公司 Dialogue generation method and apparatus, electronic device, and storage medium
CN116737888A (zh) * 2023-01-11 2023-09-12 北京百度网讯科技有限公司 Training method for a dialogue generation model, and method and apparatus for determining reply text
CN116737888B (zh) * 2023-01-11 2024-05-17 北京百度网讯科技有限公司 Training method for a dialogue generation model, and method and apparatus for determining reply text
CN116501852A (zh) * 2023-06-29 2023-07-28 之江实验室 Controllable dialogue model training method and apparatus, storage medium, and electronic device
CN116501852B (zh) * 2023-06-29 2023-09-01 之江实验室 Controllable dialogue model training method and apparatus, storage medium, and electronic device
CN117556022A (zh) * 2023-12-18 2024-02-13 北京中关村科金技术有限公司 Intelligent customer service intention recognition method, apparatus, and system
CN118396659A (zh) * 2024-06-26 2024-07-26 广州平云信息科技有限公司 AIGC-based method and system for analyzing user behavior for digital cultural products
CN118568241A (zh) * 2024-07-31 2024-08-30 浙江大学 Intention prediction method for user dialogue and user profiling based on a pre-trained model

Also Published As

Publication number Publication date
CN110046221A (zh) 2019-07-23
CN110046221B (zh) 2023-12-22

Similar Documents

Publication Publication Date Title
WO2020177282A1 (fr) Method and apparatus for dialogue with a machine, computer device and storage medium
CN107846350B (zh) Context-aware network chat method, computer-readable medium, and system
WO2020147428A1 (fr) Interactive content generation method and apparatus, computer device and storage medium
WO2020155619A1 (fr) Sentiment-aware method and apparatus for online chat with a machine, computer device and storage medium
US11657371B2 (en) Machine-learning-based application for improving digital content delivery
CN111428010B (zh) Method and apparatus for human-machine intelligent question answering
US20210019599A1 (en) Adaptive neural architecture search
CN114600099A (zh) Improving speech recognition accuracy with a natural-language-understanding-based meta speech system using an assistant system
EP3547155A1 (fr) Apprentissage par représentation d'entités pour améliorer les recommandations de contenu numérique
WO2018033030A1 (fr) Natural language library generation method and device
WO2022252636A1 (fr) Artificial-intelligence-based response generation method and apparatus, device and storage medium
CN110781302B (zh) Method, apparatus, device, and storage medium for processing event roles in text
CN111144124B (zh) Machine learning model training method, intention recognition method, and related apparatus and device
CN109766418B (zh) Method and apparatus for outputting information
JP7488871B2 (ja) Dialogue recommendation method, apparatus, electronic device, storage medium, and computer program
US11270082B2 (en) Hybrid natural language understanding
US20190362025A1 (en) Personalized query formulation for improving searches
Windiatmoko et al. Developing facebook chatbot based on deep learning using rasa framework for university enquiries
CN112417158A (zh) Training method for a text data classification model, classification method, apparatus, and device
CN112101042A (zh) Text emotion recognition method, apparatus, terminal device, and storage medium
CN113392640B (zh) Title determination method, apparatus, device, and storage medium
CN110727871A (zh) Multimodal data collection and comprehensive analysis platform based on a convolutional factorization deep model
CN118378148A (zh) Training method for a multi-label classification model, multi-label classification method, and related apparatus
CN113420136A (zh) Dialogue method and system, electronic device, storage medium, and program product
CN117312641A (zh) Method, apparatus, device, and storage medium for intelligently acquiring information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19918097

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19918097

Country of ref document: EP

Kind code of ref document: A1