CN114036268A - Task type multi-turn dialogue method and system based on intention gate - Google Patents

Task type multi-turn dialogue method and system based on intention gate

Info

Publication number
CN114036268A
Authority
CN
China
Prior art keywords
intention
word
gate
model
word slot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111193760.5A
Other languages
Chinese (zh)
Inventor
朱亚杰
卢宏涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202111193760.5A priority Critical patent/CN114036268A/en
Publication of CN114036268A publication Critical patent/CN114036268A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G06F 16/35 Clustering; Classification
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a task-oriented multi-turn dialogue method and system based on an intention gate, relating to the technical field of natural language processing. The method comprises: step S1: collecting and preprocessing the corresponding user corpora according to the business requirements; step S2: defining the corresponding intentions, word slots and intention gate labels according to the user corpora; step S3: training and optimizing an intention recognition model, a word slot filling model and an intention gate model; step S4: performing inference with the intention recognition model, the word slot filling model and the intention gate model; step S5: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention; if the word slots are not all filled, continuing the multi-turn task until all the word slots are filled. The method and system ensure the correctness of task-oriented multi-turn dialogue and improve the user experience of man-machine dialogue.

Description

Task type multi-turn dialogue method and system based on intention gate
Technical Field
The invention relates to the technical field of natural language processing, and in particular to a task-oriented multi-turn dialogue method and system based on an intention gate.
Background
With the development of the Internet and artificial intelligence, intelligent voice interaction is becoming increasingly widespread, and semantic recognition and information processing are extremely important in voice interaction. Traditional intelligent voice interaction technology usually recognizes only a single user utterance and cannot transfer and fuse semantic information across turns.
The multi-turn dialogue task is one of the most practical techniques in natural language processing; it requires the system to attend to the context while generating fluent responses. In recent years, a large number of multi-turn dialogue models based on HRED (hierarchical recurrent encoder-decoder) have been developed, which encode context information with multi-level recurrent neural networks and achieve good results on English dialogue datasets such as Movie-DiC. However, such models require very large task-oriented multi-turn dialogue corpora, are computationally intensive, and their performance is poor.
The invention patent with publication number CN112905749A discloses a task-oriented multi-turn dialogue method based on an intention-slot-value rule tree. According to the business rules of a standard multi-turn dialogue corpus, it builds an intention-slot-value rule tree whose root and leaves carry joint intention-slot-value information and whose intermediate nodes carry slot-value information; during the dialogue, a neural network method extracts the intention and slot values from the user utterance, and the dialogue proceeds by a depth-first traversal of the rule tree, thereby effectively combining the business rules with the neural network method. Although this method can keep a task-oriented multi-turn dialogue going, it can only be applied in specific scenarios and is therefore limited.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a task-oriented multi-turn dialogue method and system based on an intention gate.
The task-oriented multi-turn dialogue method and system based on an intention gate provided by the invention adopt the following scheme:
In a first aspect, a task-oriented multi-turn dialogue method based on an intention gate is provided, the method comprising:
step S1: collecting and preprocessing the corresponding user corpora according to the business requirements;
step S2: defining the corresponding intentions, word slots and intention gate labels according to the user corpora;
step S3: training and optimizing an intention recognition model, a word slot filling model and an intention gate model;
step S4: performing inference with the intention recognition model, the word slot filling model and the intention gate model;
step S5: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, continuing the multi-turn task until all the word slots are filled.
Preferably, in step S1 the user corpora are collected from a production environment, and the preprocessing comprises removing user corpora that are vulgar, offensive or carry no actual semantics.
Preferably, step S2 comprises:
step S2.1: defining intentions, word slots and intention gate labels: determining the domains covered by the corpora from the user corpora, and defining the corresponding intentions, word slots and intention gate labels in combination with the business requirements;
step S2.2: annotating intentions, word slots and intention gates.
Preferably, step S3 comprises:
step S3.1: training the intention recognition, word slot filling and intention gate models: splitting the corpus of each domain at a ratio of 8:1:1, with 8 parts of each domain's corpus used as the training set, 1 part as the validation set and 1 part as the test set;
step S3.2: optimizing the intention recognition, word slot filling and intention gate models: analyzing the recognition performance of the models on the corpus of each domain according to the test-set results, and iteratively optimizing the models in terms of corpus, algorithm and parameters.
Preferably, step S5 comprises:
step S5.1: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, carrying out further dialogue turns until all the word slots are filled, and then calling the corresponding information source service to reply according to the current intention;
step S5.2: asking the user for the missing word slots with follow-up questions; when the user continues to fill the word slots, judging through the intention gate model whether the current intention is consistent with the original intention; if consistent, continuing to fill the word slots, and repeating until all the word slots are filled, then calling the corresponding information source service to reply according to the current task;
if not consistent, switching to a new task.
In a second aspect, a task-oriented multi-turn dialogue system based on an intention gate is provided, the system comprising:
module M1: collecting and preprocessing the corresponding user corpora according to the business requirements;
module M2: defining the corresponding intentions, word slots and intention gate labels according to the user corpora;
module M3: training and optimizing an intention recognition model, a word slot filling model and an intention gate model;
module M4: performing inference with the intention recognition model, the word slot filling model and the intention gate model;
module M5: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, continuing the multi-turn task until all the word slots are filled.
Preferably, in the module M1 the user corpora are collected from a production environment, and the preprocessing comprises removing user corpora that are vulgar, offensive or carry no actual semantics.
Preferably, the module M2 comprises:
module M2.1: defining intentions, word slots and intention gate labels: determining the domains covered by the corpora from the user corpora, and defining the corresponding intentions, word slots and intention gate labels in combination with the business requirements;
module M2.2: annotating intentions, word slots and intention gates.
Preferably, the module M3 comprises:
module M3.1: training the intention recognition, word slot filling and intention gate models: splitting the corpus of each domain at a ratio of 8:1:1, with 8 parts of each domain's corpus used as the training set, 1 part as the validation set and 1 part as the test set;
module M3.2: optimizing the intention recognition, word slot filling and intention gate models: analyzing the recognition performance of the models on the corpus of each domain according to the test-set results, and iteratively optimizing the models in terms of corpus, algorithm and parameters.
Preferably, the module M5 comprises:
module M5.1: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, carrying out further dialogue turns until all the word slots are filled, and then calling the corresponding information source service to reply according to the current intention;
module M5.2: asking the user for the missing word slots with follow-up questions; when the user continues to fill the word slots, judging through the intention gate model whether the current intention is consistent with the original intention; if consistent, continuing to fill the word slots, and repeating until all the word slots are filled, then calling the corresponding information source service to reply according to the current task;
if not consistent, switching to a new task.
Compared with the prior art, the invention has the following beneficial effects:
1. the intention gate model judges whether the intention of the first sentence of a task-oriented dialogue is consistent with that of the current sentence, which improves the correctness of task-oriented multi-turn dialogue;
2. compared with traditional multi-turn dialogue, the semantic recognition is more accurate, which improves the user experience of man-machine dialogue.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the task-oriented multi-turn dialogue method based on an intention gate.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit the invention in any way. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, and all of these fall within the scope of the present invention.
An embodiment of the invention provides a task-oriented multi-turn dialogue method based on an intention gate; referring to FIG. 1, the method comprises the following specific steps:
Step S1: collecting and preprocessing the corresponding user corpora according to the business requirements.
Specifically, in step S1, corpus collection:
the user corpora are collected from a production environment in which an in-vehicle voice dialogue system runs on actual vehicles, for example an in-vehicle voice dialogue robot deployed on new-energy vehicles.
Corpus preprocessing:
removing user corpora that are vulgar, offensive or carry no actual semantics, for example single characters and sentences without actual semantics.
Step S2: defining the corresponding intentions, word slots and intention gate labels according to the user corpora.
Specifically, in step S2, intentions, word slots and intention gate labels are defined:
the domains covered by the corpora can be determined from the user corpora, and the corresponding intentions, word slots and intention gate labels are defined in combination with the business requirements.
Annotating intentions, word slots and intention gates:
for example, the user corpora include: 1. "The weather tomorrow"; 2. "I want to navigate to the beach"; 3. "Play 'Qian Li Zhi Wai' by Zhou Jielun (Jay Chou)". The corresponding intentions are: 1. weather; 2. navigation; 3. music. The corresponding word slots are: 1. date and city name; 2. departure location and destination; 3. singer and song name.
Examples of intention gate inputs and output labels are as follows:
1. "What is the weather like tomorrow"; "Beijing" → label_yes
2. "What is the weather like tomorrow"; "Nobel" → label_no
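One possible way to organize these annotations is sketched below as plain Python dictionaries; the field names and values are assumptions made for illustration, not a format prescribed by the disclosure:

```python
# Hypothetical annotation format for the intention / word-slot labels above.
nlu_examples = [
    {"text": "What is the weather like tomorrow",
     "intent": "weather",
     "slots": {"date": "tomorrow", "city_name": None}},
    {"text": "I want to navigate to the beach",
     "intent": "navigation",
     "slots": {"departure": None, "destination": "the beach"}},
    {"text": "Play 'Qian Li Zhi Wai' by Zhou Jielun",
     "intent": "music",
     "slots": {"singer": "Zhou Jielun", "song_name": "Qian Li Zhi Wai"}},
]

# Hypothetical annotation format for the intention gate: a pair of utterances
# plus a label saying whether they belong to the same intention.
gate_examples = [
    {"text_a": "What is the weather like tomorrow", "text_b": "Beijing",
     "label": "label_yes"},   # the user is filling the city slot of the same task
    {"text_a": "What is the weather like tomorrow", "text_b": "Nobel",
     "label": "label_no"},    # unrelated utterance, a new task should be started
]
```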
Step S3: training and optimizing the intention recognition model, the word slot filling model and the intention gate model.
Specifically, in step S3, training of the intention recognition, word slot filling and intention gate models:
the corpus of each domain is first split at a ratio of 8:1:1, with 8 parts of each domain's corpus used as the training set, 1 part as the validation set and 1 part as the test set.
Optimization of the intention recognition, word slot filling and intention gate models:
the recognition performance of the models on the corpus of each domain is analyzed from the test-set results, and the models are iteratively optimized in terms of corpus, algorithm and parameters.
Step S4: performing inference with the intention recognition model, the word slot filling model and the intention gate model.
Specifically, in step S4, inference of the intention recognition, word slot filling and intention gate models:
for the intention recognition model, the input is a text and the output is the intention corresponding to the text. For example, given the input "What is the weather like tomorrow", the model infers that the corresponding intention is weather_intent.
For the word slot filling model, the input is a text and the output is the word slots. For example, given the input "How is the weather in Beijing tomorrow", the inference result is date slot: tomorrow, city-name slot: Beijing.
For the intention gate model, the input is two texts and the output is whether the intentions of the two texts are consistent. For example, given the input "What is the weather like tomorrow"; "Beijing", the model infers that the two texts share the same intention, i.e. label_yes.
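The disclosure does not name a concrete architecture for the intention gate. As one plausible realization, it can be trained as a two-class sentence-pair classifier; the sketch below uses the Hugging Face transformers library with a Chinese BERT checkpoint purely as an assumed example:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed realization: the intention gate as a two-class sentence-pair classifier.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
gate_model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)  # index 1 -> label_yes, index 0 -> label_no
gate_model.eval()

def intent_gate(first_text: str, current_text: str) -> str:
    """Return 'label_yes' if the two utterances share one intention, else 'label_no'."""
    inputs = tokenizer(first_text, current_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = gate_model(**inputs).logits
    return "label_yes" if int(logits.argmax(dim=-1)) == 1 else "label_no"

# Example (after fine-tuning on the annotated gate corpus):
# intent_gate("明天天气怎么样", "北京") is expected to return "label_yes".
```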
Step S5: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention; if the word slots are not all filled, continuing the multi-turn task until all the word slots are filled.
For example, the user first says "How is the weather today" and then says "Beijing?", meaning to ask what the weather in Beijing is today. A traditional multi-turn dialogue system may recognize the second utterance as a standalone Baidu search for "Beijing" and return an encyclopedia-style answer, so the multi-turn task cannot continue, which hurts the user experience.
For another example, the user first says "How is the weather today" and then says "The weather by the sea is very nice". The second utterance is chit-chat, but a traditional multi-turn dialogue system may recognize its semantics as asking about today's weather by the sea and fail to switch to the new task, which also hurts the user experience.
Specifically, in step S5, multi-turn task determination:
the intention of the user corpus is recognized and the corresponding word slots are extracted through the intention recognition model and the word slot filling model; if all the word slots are filled, the corresponding information source service is called to reply according to the current intention; if the word slots are not all filled, the multi-turn task is carried out until all the word slots are filled, and then the corresponding information source service is called to reply according to the current intention.
Multi-turn task:
the main reason for multiple turns is that the word slots are not all filled in a single turn. The missing word slots are requested with follow-up questions; when the user continues to fill the word slots, the intention gate model judges whether the current intention is consistent with the original intention. If consistent, slot filling continues, and this loop repeats until all the word slots are filled, after which the corresponding information source service is called according to the current task to reply; if not consistent, the system switches to the new task.
The embodiment of the invention thus provides a task-oriented multi-turn dialogue method and system based on an intention gate, in which an intention gate model judges whether the intention of the first sentence of a task-oriented dialogue is consistent with that of the current sentence, thereby ensuring the correctness of task-oriented multi-turn dialogue and improving the user experience of man-machine dialogue.
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and its devices, modules and units provided by the present invention can be implemented entirely by logically programming the method steps into logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its devices, modules and units can be regarded as hardware components, and the devices, modules and units included for realizing the various functions can also be regarded as structures within the hardware components; means, modules and units for performing the various functions can likewise be regarded both as software modules implementing the method and as structures within the hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A task-oriented multi-turn dialogue method based on an intention gate, comprising:
step S1: collecting and preprocessing the corresponding user corpora according to the business requirements;
step S2: defining the corresponding intentions, word slots and intention gate labels according to the user corpora;
step S3: training and optimizing an intention recognition model, a word slot filling model and an intention gate model;
step S4: performing inference with the intention recognition model, the word slot filling model and the intention gate model;
step S5: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, continuing the multi-turn task until all the word slots are filled.
2. The task-oriented multi-turn dialogue method based on an intention gate according to claim 1, wherein in step S1 the user corpora are collected from a production environment, and the preprocessing comprises removing user corpora that are vulgar, offensive or carry no actual semantics.
3. The task-oriented multi-turn dialogue method based on an intention gate according to claim 1, wherein step S2 comprises:
step S2.1: defining intentions, word slots and intention gate labels: determining the domains covered by the corpora from the user corpora, and defining the corresponding intentions, word slots and intention gate labels in combination with the business requirements;
step S2.2: annotating intentions, word slots and intention gates.
4. The task-oriented multi-turn dialogue method based on an intention gate according to claim 1, wherein step S3 comprises:
step S3.1: training the intention recognition, word slot filling and intention gate models: splitting the corpus of each domain at a ratio of 8:1:1, with 8 parts of each domain's corpus used as the training set, 1 part as the validation set and 1 part as the test set;
step S3.2: optimizing the intention recognition, word slot filling and intention gate models: analyzing the recognition performance of the models on the corpus of each domain according to the test-set results, and iteratively optimizing the models in terms of corpus, algorithm and parameters.
5. The task-oriented multi-turn dialogue method based on an intention gate according to claim 1, wherein step S5 comprises:
step S5.1: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, carrying out further dialogue turns until all the word slots are filled, and then calling the corresponding information source service to reply according to the current intention;
step S5.2: asking the user for the missing word slots with follow-up questions; when the user continues to fill the word slots, judging through the intention gate model whether the current intention is consistent with the original intention; if consistent, continuing to fill the word slots, and repeating until all the word slots are filled, then calling the corresponding information source service to reply according to the current task;
if not consistent, switching to a new task.
6. A task-oriented multi-turn dialogue system based on an intention gate, comprising:
module M1: collecting and preprocessing the corresponding user corpora according to the business requirements;
module M2: defining the corresponding intentions, word slots and intention gate labels according to the user corpora;
module M3: training and optimizing an intention recognition model, a word slot filling model and an intention gate model;
module M4: performing inference with the intention recognition model, the word slot filling model and the intention gate model;
module M5: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, continuing the multi-turn task until all the word slots are filled.
7. The task-oriented multi-turn dialogue system based on an intention gate according to claim 6, wherein in the module M1 the user corpora are collected from a production environment, and the preprocessing comprises removing user corpora that are vulgar, offensive or carry no actual semantics.
8. The task-oriented multi-turn dialogue system based on an intention gate according to claim 6, wherein the module M2 comprises:
module M2.1: defining intentions, word slots and intention gate labels: determining the domains covered by the corpora from the user corpora, and defining the corresponding intentions, word slots and intention gate labels in combination with the business requirements;
module M2.2: annotating intentions, word slots and intention gates.
9. The task-oriented multi-turn dialogue system based on an intention gate according to claim 6, wherein the module M3 comprises:
module M3.1: training the intention recognition, word slot filling and intention gate models: splitting the corpus of each domain at a ratio of 8:1:1, with 8 parts of each domain's corpus used as the training set, 1 part as the validation set and 1 part as the test set;
module M3.2: optimizing the intention recognition, word slot filling and intention gate models: analyzing the recognition performance of the models on the corpus of each domain according to the test-set results, and iteratively optimizing the models in terms of corpus, algorithm and parameters.
10. The task-oriented multi-turn dialogue system based on an intention gate according to claim 6, wherein the module M5 comprises:
module M5.1: recognizing the intention of the user corpus and extracting the corresponding word slots through the intention recognition model and the word slot filling model; if all the word slots are filled, calling the corresponding information source service to reply according to the current intention;
if the word slots are not all filled, carrying out further dialogue turns until all the word slots are filled, and then calling the corresponding information source service to reply according to the current intention;
module M5.2: asking the user for the missing word slots with follow-up questions; when the user continues to fill the word slots, judging through the intention gate model whether the current intention is consistent with the original intention; if consistent, continuing to fill the word slots, and repeating until all the word slots are filled, then calling the corresponding information source service to reply according to the current task;
if not consistent, switching to a new task.
CN202111193760.5A 2021-10-13 2021-10-13 Task type multi-turn dialogue method and system based on intention gate Pending CN114036268A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111193760.5A CN114036268A (en) 2021-10-13 2021-10-13 Task type multi-turn dialogue method and system based on intention gate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111193760.5A CN114036268A (en) 2021-10-13 2021-10-13 Task type multi-turn dialogue method and system based on intention gate

Publications (1)

Publication Number Publication Date
CN114036268A true CN114036268A (en) 2022-02-11

Family

ID=80141253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111193760.5A Pending CN114036268A (en) 2021-10-13 2021-10-13 Task type multi-turn dialogue method and system based on intention gate

Country Status (1)

Country Link
CN (1) CN114036268A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116401346A (en) * 2023-03-09 2023-07-07 北京海致星图科技有限公司 Task type multi-round dialogue construction method, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN112100349B (en) Multi-round dialogue method and device, electronic equipment and storage medium
CN108446286B (en) Method, device and server for generating natural language question answers
CN111026842B (en) Natural language processing method, natural language processing device and intelligent question-answering system
CN111708869B (en) Processing method and device for man-machine conversation
CN109325040B (en) FAQ question-answer library generalization method, device and equipment
CN113987147A (en) Sample processing method and device
CN110502227A (en) The method and device of code completion, storage medium, electronic equipment
CN115525753A (en) Task-oriented multi-turn dialogue method and system based on 1+ N
CN112579733B (en) Rule matching method, rule matching device, storage medium and electronic equipment
CN111914074A (en) Method and system for generating limited field conversation based on deep learning and knowledge graph
CN114281957A (en) Natural language data query method and device, electronic equipment and storage medium
CN114118417A (en) Multi-mode pre-training method, device, equipment and medium
CN111930912A (en) Dialogue management method, system, device and storage medium
CN116644168A (en) Interactive data construction method, device, equipment and storage medium
CN113988071A (en) Intelligent dialogue method and device based on financial knowledge graph and electronic equipment
CN113761868A (en) Text processing method and device, electronic equipment and readable storage medium
CN114911893A (en) Method and system for automatically constructing knowledge base based on knowledge graph
CN114036268A (en) Task type multi-turn dialogue method and system based on intention gate
CN110750632B (en) Improved Chinese ALICE intelligent question-answering method and system
CN112257432A (en) Self-adaptive intention identification method and device and electronic equipment
CN115617974B (en) Dialogue processing method, device, equipment and storage medium
CN117473054A (en) Knowledge graph-based general intelligent question-answering method and device
CN114860869A (en) Controllable universal dialogue model with generalized intentions
CN114116975A (en) Multi-intention identification method and system
CN115114453A (en) Intelligent customer service implementation method and device based on knowledge graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination