CN116910220A - Multi-round dialogue interaction processing method, device, equipment and storage medium - Google Patents

Multi-round dialogue interaction processing method, device, equipment and storage medium

Info

Publication number
CN116910220A
CN116910220A (application CN202310950620.0A)
Authority
CN
China
Prior art keywords
dialogue
data
vector
knowledge
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310950620.0A
Other languages
Chinese (zh)
Inventor
顾孙炎
章翔
陆韬宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Hangzhou Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202310950620.0A priority Critical patent/CN116910220A/en
Publication of CN116910220A publication Critical patent/CN116910220A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31Indexing; Data structures therefor; Storage structures
    • G06F16/316Indexing structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3334Selection or weighting of terms from queries, including natural language queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Animal Behavior & Ethology (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the technical field of artificial intelligence and provides a multi-round dialogue interaction processing method, device, equipment and storage medium. The method comprises the following steps: collecting historical dialogue data and current dialogue data, inputting them into a pre-trained joint model, and obtaining the historical dialogue vector and current dialogue vector output by the joint model; splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining the vectorized semantic information output by the joint model; determining, based on the historical dialogue vector and the current dialogue vector, that text retrieval is required, and acquiring the search text based on the vectorized semantic information; and generating response information based on the search text. The application can determine whether knowledge should be referenced and, from the user's semantic information, generate appropriate search keywords to query the knowledge graph; based on this, the efficiency and accuracy of intelligent dialogue are improved.

Description

Multi-round dialogue interaction processing method, device, equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a multi-round dialogue interaction processing method, device, equipment and storage medium.
Background
Intelligent dialogue is a sub-direction in the field of artificial intelligence that allows people to interact with computers through natural human language. Intelligent dialogues can be divided into single-round dialogues and multi-round dialogues, where multi-round dialogues typically interact in combination with historical information. Multi-round dialogues are further divided into task-oriented multi-round dialogues in vertical domains and open-domain multi-round dialogues; the dialogue mode of a home scenario is open, and communication between users and devices is closer to interaction between people.
At present, an intelligent dialogue system generally builds an entity dictionary, places common domain information into the entity dictionary, identifies key entities in user interactions through an entity recognition method, and finally retrieves a response result from a knowledge base. However, this solution has the following problems: the method of retrieving knowledge by constructing an entity dictionary requires that all of the user's dialogue keywords be present in the entity dictionary, yet users in home scenarios tend to speak colloquially, and in many cases the keywords cannot be guaranteed to be in the entity dictionary. Meanwhile, in a multi-round dialogue this method retrieves knowledge directly after identifying key entities, whereas in an actual home scenario the user may only want to chat and does not necessarily want to look up certain knowledge. As a result, the accuracy of the intelligent dialogue is low.
Disclosure of Invention
The embodiment of the application provides a multi-round dialogue interaction processing method, device, equipment and storage medium, which are used for solving the problem of low accuracy of intelligent dialogue.
In a first aspect, an embodiment of the present application provides a method for processing multi-round dialogue interaction, including:
collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining vectorized semantic information output by the joint model;
based on the historical dialogue vector and the current dialogue vector, determining that text retrieval is required, and acquiring retrieval text based on the vectorized semantic information;
generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
In one embodiment, the determining that text retrieval is required based on the historical dialog vector and the current dialog vector includes:
splicing the historical dialogue vector and the current dialogue vector to obtain a spliced vector;
inputting the spliced vector to a first decoder module, and obtaining a first decoding result output by the first decoder module;
if the first decoding result is the first set value, text retrieval is needed.
In one embodiment, after the splicing vector is input to the first decoder module and the first decoding result output by the first decoder module is obtained, the method further includes:
if the first decoding result is the second set value, text retrieval is not needed, and generic chitchat response information is generated.
In one embodiment, the obtaining the search text based on the vectorized semantic information includes:
inputting the vectorized semantic information to a second decoder module, and obtaining a second decoding result output by the second decoder module;
and acquiring the search text based on the second decoding result.
In one embodiment, the generating response information based on the search text includes:
Extracting target text from a knowledge base based on the search text;
and generating the response information based on the target text.
In one embodiment, the joint model is trained based on the following steps:
collecting sample dialogue data;
performing knowledge search content labeling and knowledge classification labeling on the sample dialogue data to generate the sample data;
and training a preset model by adopting the sample data to obtain the joint model.
In one embodiment, the performing knowledge search content annotation and knowledge classification annotation on the sample dialogue data includes:
labeling the sample dialogue data of the current turn as retrievable content of a knowledge base;
the sample dialogue data for each round is labeled as requiring reference knowledge or not requiring reference knowledge.
In a second aspect, an embodiment of the present application provides a multi-round dialogue interaction processing apparatus, including:
the collection module is used for collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
The semantic information determining module is used for splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model and obtaining vectorized semantic information output by the joint model;
the search text determining module is used for determining that text search is required based on the historical dialogue vector and the current dialogue vector, and acquiring search text based on the vectorized semantic information;
the response information generation module is used for generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory storing a computer program, where the processor implements the steps of the multi-round dialogue interaction processing method according to the first aspect when executing the program.
In a fourth aspect, embodiments of the present application provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the multi-round dialog interaction processing method of the first aspect.
The multi-round dialogue interaction processing method, device, equipment and storage medium provided by the embodiments of the application collect historical dialogue data and current dialogue data, input them into a pre-trained joint model, and obtain the historical dialogue vector and current dialogue vector output by the joint model; splice the historical dialogue data and the current dialogue data to obtain spliced data, input the spliced data into the joint model, and obtain the vectorized semantic information output by the joint model; determine, based on the historical dialogue vector and the current dialogue vector, that text retrieval is required, and obtain the search text based on the vectorized semantic information; and generate response information based on the search text. The application can determine whether knowledge should be referenced and, from the user's semantic information, generate appropriate search keywords to query the knowledge graph; based on this, the efficiency and accuracy of intelligent dialogue are improved.
Drawings
In order to more clearly illustrate the technical solutions of the application or of the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a multi-round dialogue interactive processing method according to an embodiment of the application;
FIG. 2 is a schematic diagram of a multi-round dialogue interactive processing system according to an embodiment of the present application;
FIG. 3 is a second flowchart of a multi-round dialogue interactive processing method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a semantic understanding module according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a search generation module according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a multi-round dialogue interactive processing device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic flow chart of a multi-round dialogue interaction processing method according to an embodiment of the present application. Referring to fig. 1, an embodiment of the present application provides a multi-round dialogue interaction processing method, which may include:
step 100, collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
it should be noted that the joint model is obtained by training a preset model by using sample data, and the sample data includes sample dialogue data, a knowledge search content tag and a knowledge classification tag thereof.
Historical dialogue data can be used to understand the context and background of a dialogue. By analyzing previous dialogues, information such as the user's intent, preferences, and habitual expressions can be captured, so that the current user utterance is better understood and a more accurate, consistent reply is generated. Meanwhile, the historical dialogue data can help the system track the flow of the dialogue and keep it coherent; by recording the interaction history, the system can determine the state, topic, or task of the previous dialogue rounds and provide appropriate context and guidance for the reply in the current round.
During the user's interaction with the device, historical dialogue data and current dialogue data are collected through the device, then input into the pre-trained joint model, and the historical dialogue vector and current dialogue vector output by the joint model are obtained.
Here, the historical dialogue vector refers to the conversion of previous dialogue content into a vector representing the entire dialogue history, capturing the semantic information and context contained in the dialogue; it may cover the user's previous utterances, the robot's replies, and the utterances of other dialogue participants. The current dialogue vector converts the current dialogue content into a vector representing the current dialogue state; it generally contains only the latest utterance of the current user, i.e., the portion of the dialogue the model needs to respond to. The current dialogue vector is mainly used for tasks such as reply generation, intent recognition, and entity recognition for the specific user utterance.
Step 200, splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining vectorized semantic information output by the joint model;
The historical dialogue data and the current dialogue data are spliced to obtain spliced data. For example, the historical dialogue data is organized as a list and its items are joined with separators; the joined historical dialogue data is then connected with the current dialogue data to obtain the spliced data, which is input into the joint model to obtain the vectorized semantic information output by the joint model. Here, vectorized semantic information refers to converting the meaning of text or language into a numerical vector representation. Through vectorization, words, sentences, or whole documents can be converted into a mathematically computable form for more convenient processing and analysis by a computer.
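As a minimal illustration of this splicing step (the [SEP] separator token and the list-based history format are assumptions, not part of the original disclosure):

```python
SEP = "[SEP]"

def splice_dialogue(history, current):
    """Join the historical utterances with separators, then append the current utterance."""
    joined = SEP.join(history)
    return f"{joined}{SEP}{current}" if joined else current

spliced = splice_dialogue(["How tall is Xiao Ming", "Xiao Ming is 226cm tall"],
                          "What about Xiao Tao's wife")
# -> "How tall is Xiao Ming[SEP]Xiao Ming is 226cm tall[SEP]What about Xiao Tao's wife"
```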
Step 300, based on the history dialogue vector and the current dialogue vector, determining that text retrieval is required, and acquiring retrieval text based on the vectorized semantic information;
if it is determined that the current dialogue needs to be subjected to text retrieval based on the historical dialogue vector and the current dialogue vector, that is, text retrieval from a knowledge base is required, at this time, retrieval text needs to be acquired based on vectorized semantic information.
And 400, generating response information based on the search text.
It should be noted that, in the embodiment of the present application, a knowledge base is created for retrieving text. The knowledge base mainly includes general-domain knowledge and intelligent-device knowledge, both stored in triple form. The general-domain knowledge is mainly encyclopedic knowledge, for example "Xiao Ming-height-226cm", while the intelligent-device knowledge needs to be combined with the current user's intelligent-device information, for example "refrigerator-location-kitchen" and "desk lamp-state-off". The encyclopedic knowledge base is a static knowledge base whose knowledge does not change with user interaction. The intelligent-device knowledge base is a dynamic knowledge base that changes as the user controls devices through voice interaction or manual operation; for example, when the user turns on the desk lamp, the knowledge entry "desk lamp-state-off" is changed to "desk lamp-state-on".
After the search text is determined, the target text is extracted from the knowledge base based on the search text, and response information is then generated based on the target text. For example, keyword matching, entity recognition, or similar approaches are used to extract from the knowledge base the target text corresponding to the search text, i.e., the core information the user cares about. Then, according to the structure and content of the target text, different methods are used to generate the response: if the target text is a complete sentence, it can be returned directly as the response; if the target text contains only partial information, the complete response information needs to be generated by combining it with a template answer or by using a natural language generation model.
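For illustration only, the following Python sketch combines the triple-form knowledge bases described above with simple keyword matching and a template answer. The dictionary layout, matching strategy, and function names are assumptions and are not part of the original disclosure.

```python
from typing import Dict, Optional, Tuple

Triple = Tuple[str, str, str]  # (entity, relation, value)

# Static encyclopedic knowledge base: its knowledge does not change with user interaction.
encyclopedia_kb: Dict[Tuple[str, str], str] = {
    ("Xiao Ming", "height"): "226cm",
}

# Dynamic smart-device knowledge base: updated by voice interaction or manual device control.
device_kb: Dict[Tuple[str, str], str] = {
    ("refrigerator", "location"): "kitchen",
    ("desk lamp", "state"): "off",
}

def update_device_state(device: str, attribute: str, value: str) -> None:
    """Reflect a device change in the dynamic knowledge base."""
    device_kb[(device, attribute)] = value

def retrieve_target(search_text: str) -> Optional[Triple]:
    """Keyword matching: return the first triple whose entity and relation both appear in the search text."""
    for kb in (encyclopedia_kb, device_kb):
        for (entity, relation), value in kb.items():
            if entity in search_text and relation in search_text:
                return (entity, relation, value)
    return None

def generate_response(search_text: str) -> str:
    """Template-based response generation from the retrieved target triple."""
    triple = retrieve_target(search_text)
    if triple is None:
        return "Sorry, I could not find that."
    entity, relation, value = triple
    return f"The {relation} of {entity} is {value}."

update_device_state("desk lamp", "state", "on")   # "desk lamp-state-off" becomes "desk lamp-state-on"
print(generate_response("Xiao Ming-height"))      # "The height of Xiao Ming is 226cm."
```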
According to the multi-round dialogue interaction processing method provided by the embodiment of the application, historical dialogue data and current dialogue data are collected and input into a pre-trained joint model to obtain the historical dialogue vector and current dialogue vector output by the joint model; the historical dialogue data and the current dialogue data are spliced to obtain spliced data, which is input into the joint model to obtain the vectorized semantic information output by the joint model; based on the historical dialogue vector and the current dialogue vector, it is determined that text retrieval is required, and the search text is obtained based on the vectorized semantic information; response information is generated based on the search text. The joint model is obtained by training a preset model with sample data, where the sample data comprises sample dialogue data together with its knowledge search content labels and knowledge classification labels. According to the embodiment of the application, the joint model is obtained by pre-training on open-domain scene corpora, so that while judging whether to reference knowledge, appropriate search keywords can also be generated from the user's semantic information to query knowledge; based on this, the efficiency and accuracy of intelligent dialogue are improved.
In one embodiment, the determining that text retrieval is required based on the historical dialog vector and the current dialog vector includes:
step 310, splicing the historical dialogue vector and the current dialogue vector to obtain a spliced vector;
step 320, inputting the spliced vector to a first decoder module, and obtaining a first decoding result output by the first decoder module;
in step 330, if the first decoding result is the first set value, text search is required.
It should be noted that the first decoder module refers to a softmax decoder module for a multi-class classification task, where each sample belongs to a class.
The historical dialogue vector and the current dialogue vector are spliced to obtain a spliced vector; for example, the two vectors are connected sequentially, or a specific concatenation function (e.g., torch.cat or numpy.concatenate) is used. Then, the spliced vector is input to the softmax decoder module, which outputs the first decoding result, i.e., the classification result. If the classification result is 1, text retrieval is needed; if the classification result is 0, text retrieval is not needed, and generic chitchat response information such as "yes" or "that's right" is generated.
Alternatively, the spliced vector Wc may be mapped by a linear transformation to a dimension equal to the number of classification categories. Assuming there are n categories, a weight matrix W of shape (4, n) may be defined, where each column is the weight of one category. Using a fully connected layer or matrix multiplication, the spliced vector Wc is transformed into a vector Ws of size n. Ws is then converted with a softmax function into a probability distribution vector Ps = [p1, p2, ..., pn], where each pi corresponds to the probability of one category. Finally, whether retrieval is needed is judged according to the predicted probability values. For example, the threshold is set to 0.5: if the probability of a certain category exceeds the threshold, it is judged that retrieval is needed; otherwise it is not. For example, assuming there are two categories, "retrieval required" and "retrieval not required", if the probability value of the "retrieval required" category is greater than 0.5, the current dialogue needs retrieval.
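A non-authoritative PyTorch sketch of this classification head follows; the 4-dimensional spliced vector, the two categories, and the 0.5 threshold are taken from the example above, while the module and variable names are assumptions.

```python
import torch
import torch.nn as nn

class RetrievalClassifier(nn.Module):
    """Illustrative first decoder: a fully connected layer plus softmax over the categories."""
    def __init__(self, spliced_dim: int = 4, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(spliced_dim, num_classes)   # maps the spliced vector to the category dimension

    def forward(self, wa: torch.Tensor, wb: torch.Tensor) -> torch.Tensor:
        wc = torch.cat([wa, wb], dim=-1)                # splice history vector Wa and current vector Wb
        ws = self.fc(wc)                                # vector Ws of size num_classes
        return torch.softmax(ws, dim=-1)                # probability distribution Ps = [p1, ..., pn]

classifier = RetrievalClassifier()
ps = classifier(torch.tensor([0.1, 0.2]), torch.tensor([0.3, 0.4]))
needs_retrieval = ps[1].item() > 0.5                    # category 1 treated as "retrieval required"
```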
According to the embodiment of the application, the historical dialogue vector and the current dialogue vector are spliced, and the softmax decoder module judges from the spliced vector whether retrieval is needed, thereby improving the accuracy of the intelligent dialogue.
In one embodiment, the obtaining the search text based on the vectorized semantic information includes:
step 340, inputting the vectorized semantic information to a second decoder module, and obtaining a second decoding result output by the second decoder module;
and step 350, acquiring the search text based on the second decoding result.
It should be noted that the second decoder module is a decoder module configured to generate the text to be retrieved word by word.
And inputting the vectorized semantic information into a second decoder module, acquiring a second decoding result output by the second decoder module, and acquiring the search text based on the second decoding result.
For example, the vectorized semantic information is used as the input of the decoder module. At each generation step, the decoder predicts the next character or word based on the current input vector and the previously generated sequence; text may be generated with different strategies, such as greedy search or sampling with a certain probability. When generating each character or word, the candidate with the highest probability can be selected for the next step. When the generated text reaches a certain end condition, for example a specified length or a specific termination symbol, the generation process stops and the generated text is output; the output text is then taken as the search text.
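The following is a minimal greedy-decoding sketch of the word-by-word generation described above; the decoder interface, token ids, and maximum length are assumptions.

```python
import torch

def greedy_generate(decoder, semantic_vector: torch.Tensor,
                    start_id: int, end_id: int, max_len: int = 32) -> list:
    """Generate token ids one by one, always taking the most probable next token (greedy search)."""
    generated = [start_id]
    for _ in range(max_len):
        # The decoder is assumed to return next-token logits given the vectorized
        # semantic information and the previously generated sequence.
        logits = decoder(semantic_vector, torch.tensor(generated))
        next_id = int(torch.argmax(logits, dim=-1))
        if next_id == end_id:          # stop at the termination symbol
            break
        generated.append(next_id)
    return generated[1:]               # drop the start token; ids are mapped back to words downstream
```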
According to the embodiment of the application, the vectorized semantic information is input to the second decoder module, and the text to be retrieved is generated word by word by the second decoder module; on this basis, the accuracy of the intelligent dialogue is improved.
In one embodiment, the joint model is trained based on the following steps:
step 510, collecting sample dialogue data;
step 610, performing knowledge search content labeling and knowledge classification labeling on the sample dialogue data to generate the sample data;
and step 710, training a preset model by using the sample data to obtain the joint model.
It should be noted that, referring to fig. 2, the joint model includes three parts, namely:
The first part is the encoder, i.e., the shared feature encoder S-Enc. It is mainly responsible for encoding the input sequence and capturing the context information of the input sentence. The encoder uses a self-attention mechanism and models the input sequence through multiple self-attention layers and feed-forward neural network layers. The shared feature S-Enc represents the semantic and contextual information of the input sequence and provides the basis for the subsequent decoders.
The second part is the understanding decoder, denoted U-Dec, which consists of two sub-modules: a self-attention layer and a pre-training masked language model (MLM). In the understanding decoder, the self-attention layer can generate attention weights associated with the input sequence based on its context information, thereby better understanding the semantics of the input sentence. The pre-training masked language model (MLM) can perform unsupervised pre-training by masking part of the input sequence, thereby improving the model's ability to understand context.
The third part is the generation decoder, denoted G-Dec. It consists of masked self-attention and a pre-trained DAE (Denoising Autoencoder). The generation decoder is used to generate the next word or phrase according to the previously generated text segment and the context information. The masked self-attention layer provides the model with a contextual understanding of the currently generated words, thereby producing more consistent and reasonable text. The pre-trained DAE learns, by way of self-encoding, a latent representation of the input sequence for use in generating the next word or phrase.
Specifically, the joint model is trained based on the following steps:
Sample dialogue data is collected, for example sample historical dialogue data, and then knowledge search content labeling and knowledge classification labeling are performed on the sample dialogue data to generate the sample data. Specifically, the sample dialogue data of the current turn is labeled as knowledge-base-retrievable content, and the sample dialogue data of each turn is labeled as requiring or not requiring reference knowledge. For example, knowledge search content labeling requires labeling the dialogue data of the current round as knowledge-base-retrievable content. Suppose the user dialogue content is: "1) How tall is Xiao Ming -> (the system answers Xiao Ming's height) -> 2) What about Xiao Tao's wife -> (the system answers the height of Xiao Tao's wife)". In the knowledge search content labeling process, "How tall is Xiao Ming" is labeled "Xiao Ming-height", while the data of the current round is carried into the next round, so the follow-up question, combined with "Xiao Ming-height", is converted in context and labeled "Xiao Tao's wife-height".
The knowledge classification labeling needs to label the dialogue data of each turn, mainly with two types of labels: "reference knowledge required" and "no reference knowledge required". In this example, the user's dialogue needs knowledge retrieval, so "How tall is Xiao Ming" and "What about Xiao Tao's wife" can be labeled "reference knowledge required". If the user dialogue is "Xiao Ming is so tall" or "Xiao Tao's wife is tall too", it can be directly labeled "no reference knowledge required", and corresponding generic chitchat replies such as "yes" and "really tall" are added.
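Purely as an illustration of the two annotation types, a labeled training sample could be organized as follows; the field names and values are hypothetical and not part of the original disclosure.

```python
# Hypothetical layout of labeled training samples; field names and values are illustrative only.
samples = [
    {
        "history": [],
        "current": "How tall is Xiao Ming",
        "knowledge_search_content": "Xiao Ming-height",        # knowledge search content label
        "knowledge_class": "reference knowledge required",     # knowledge classification label
    },
    {
        "history": ["How tall is Xiao Ming", "Xiao Ming is 226cm tall"],
        "current": "Xiao Ming is so tall",
        "knowledge_search_content": None,                      # nothing to retrieve for chitchat
        "knowledge_class": "no reference knowledge required",
        "reply": "Yes, really tall",                           # generic chitchat reply added to the data
    },
]
```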
Finally, the preset model is trained with the sample data to obtain the joint model. In the process of training the joint model, the NLU (natural language understanding) task and the NLG (natural language generation) task are considered simultaneously, so that while judging whether knowledge should be referenced, appropriate search keywords can also be generated from the user's semantic information.
According to the embodiment of the application, the sample dialogue data is labeled in two ways, knowledge search content labeling and knowledge classification labeling, so that the joint model can consider the NLU (natural language understanding) task and the NLG (natural language generation) task simultaneously; while judging whether to reference knowledge, it can also generate appropriate search keywords from the user's semantic information, thereby improving the accuracy of model recognition.
In order to further explain the multi-round dialogue interaction processing method provided by the application, refer to fig. 2-5 and the following embodiments.
Based on the characteristics of home-scenario dialogue, the embodiment of the application provides a model that can simultaneously satisfy the two tasks of text classification and text generation. Meanwhile, the embodiment of the application also establishes an intelligent-device knowledge base and a general-domain knowledge base, thereby supporting knowledge retrieval in knowledge dialogue scenarios. The architecture of the multi-round dialogue interaction processing system is shown in fig. 2 and mainly comprises a data processing module, an algorithm model, and a knowledge base module. The functions of each module are as follows:
(1) The data processing module is mainly used for labeling multi-round dialogue data of the home scenario according to the set requirements, where the labeling is divided into knowledge search content labeling and knowledge classification labeling.
Knowledge search content labeling entails labeling the dialogue data of the current round as knowledge-base-retrievable content. For example, suppose the user dialogue content is: "1) How tall is Xiao Ming -> (the system answers Xiao Ming's height) -> 2) What about Xiao Tao's wife -> (the system answers the height of Xiao Tao's wife)". In the knowledge search content labeling process, "How tall is Xiao Ming" is labeled "Xiao Ming-height", while the data of the current round is carried into the next round, so the follow-up question, combined with "Xiao Ming-height", is converted in context and labeled "Xiao Tao's wife-height".
The knowledge classification labeling needs to label the dialogue data of each turn, mainly with two types of labels: "reference knowledge required" and "no reference knowledge required". In this example, the user's dialogue needs knowledge retrieval, so "How tall is Xiao Ming" and "What about Xiao Tao's wife" can be labeled "reference knowledge required". If the user dialogue is "Xiao Ming is so tall" or "Xiao Tao's wife is tall too", it can be directly labeled "no reference knowledge required", and corresponding generic chitchat replies such as "yes" and "really tall" are added.
(2) The algorithm module mainly comprises a text pre-training module, an NLU (Natural Language Understanding) module, and an NLG (Natural Language Generation) module.
1) The text pre-training module needs to be pre-trained in advance on corpus data from the home domain to obtain a general model (i.e., the joint model) adapted to home-scenario dialogue, after which the NLU module and the NLG module are fine-tuned. The text pre-training module comprises two tasks, MLM (Masked Language Modeling) and DAE (denoising autoencoder):
Pre-training task: the MLM is a masked language model. For the input text, some characters are randomly replaced by [mask]; for example, "What is the name of Zhou Xiaoming's new album" becomes, after the MLM task, "What is the [mask] of Zhou [mask][mask]'s new [mask]". After replacement, the masked characters need to be predicted from the context during pre-training.
Unsupervised learning task: the DAE is a denoising autoencoder used to realize the text generation task. It mainly uses three types of noise to randomly perturb the input text: a) the word order is locally shuffled; b) words are randomly dropped with a probability of 0.1; c) words are replaced with a [P] tag with a probability of 0.1.
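An illustrative Python sketch of the three noise operations is given below; the local shuffle window size and the [P] placeholder handling are assumptions.

```python
import random

def local_shuffle(tokens, window=3):
    """a) Locally shuffle the word order within small windows."""
    out = []
    for i in range(0, len(tokens), window):
        chunk = tokens[i:i + window]
        random.shuffle(chunk)
        out.extend(chunk)
    return out

def random_drop(tokens, p=0.1):
    """b) Randomly discard words with probability 0.1."""
    return [t for t in tokens if random.random() >= p]

def random_replace(tokens, p=0.1, tag="[P]"):
    """c) Replace words with a [P] tag with probability 0.1."""
    return [tag if random.random() < p else t for t in tokens]
```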
The text pre-training module combines these two pre-training methods, so that the resulting general model (i.e., the joint model) is suitable for both the NLU task and the NLG task, which facilitates subsequent fine-tuning.
2) The semantic understanding module (NLU module) mainly performs a classification task that judges whether the current dialogue needs knowledge retrieval. Referring to fig. 4, the current dialogue includes two parts: the upper part is the historical dialogue data, represented as [CLS, H1, H2, H3, H4, SEP], and the lower part is the current dialogue data, represented as [CLS, X1, X2, X3, X4, SEP], where CLS is a start token and SEP is a text separator, neither carrying specific semantic information. Both parts are input to the joint model for fine-tuning to obtain an upper vector Wa (assumed to be [0.1, 0.2]) and a lower vector Wb (assumed to be [0.3, 0.4]); the two vectors are then spliced to obtain Wc = [0.1, 0.2, 0.3, 0.4], and finally Wc is input to the softmax decoder module to judge whether the data needs to be retrieved.
3) The natural language generation module (NLG module) is responsible for generating the search content. The context also needs to be vectorized by the joint model, but the difference from the semantic understanding module is that the texts are spliced first and then vectorized. Referring to fig. 5, where the historical dialogue data is represented as [H1, H2, H3, H4] and the current dialogue data as [X1, X2, X3, X4], the data input to the joint model is [CLS, H1, H2, H3, H4, SEP, X1, X2, X3, X4, SEP]; this differs from semantic understanding, which vectorizes the two parts separately and then splices the vectors. After the text is vectorized, the contextual semantic information is obtained and input into the decoder module for word-by-word generation to obtain the text to be retrieved.
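To make the difference between the two input constructions concrete, a small sketch follows; the token names come from FIG. 4 and FIG. 5, and the encode() interface mentioned in the comments is an assumption.

```python
# Token sequences follow FIG. 4 and FIG. 5; encode() is an assumed interface that maps
# a token sequence to a vector and is not defined here.
history = ["H1", "H2", "H3", "H4"]
current = ["X1", "X2", "X3", "X4"]

# Semantic understanding (NLU): vectorize the two parts separately, then splice the vectors.
nlu_above = ["CLS"] + history + ["SEP"]
nlu_below = ["CLS"] + current + ["SEP"]
# Wa, Wb = encode(nlu_above), encode(nlu_below); Wc = concat(Wa, Wb)

# Natural language generation (NLG): splice the texts first, then vectorize once.
nlg_input = ["CLS"] + history + ["SEP"] + current + ["SEP"]
# semantic_vector = encode(nlg_input)
```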
(3) The knowledge graph module mainly comprises general-domain knowledge and intelligent-device knowledge, both stored in triple form. The general-domain knowledge is mainly encyclopedic knowledge, such as "Xiao Ming-height-226cm", while the intelligent-device knowledge needs to be combined with the current user's intelligent-device information, such as "refrigerator-location-kitchen" and "desk lamp-state-off". The encyclopedic knowledge base is a static knowledge base whose knowledge does not change with user interaction. The intelligent-device knowledge base is a dynamic knowledge base that changes as the user controls devices through voice interaction or manual operation; for example, when the user turns on the desk lamp, the knowledge entry "desk lamp-state-off" is changed to "desk lamp-state-on".
Referring to fig. 3, in the embodiment of the present application, when a user interacts with the device, it is first judged whether there is a historical dialogue. If there is none, the current dialogue text is directly vectorized by the algorithm module and passed to the NLU classification module to judge whether knowledge should be referenced; if knowledge is not to be referenced, a generic chitchat reply is generated directly, and if knowledge is to be referenced, the search content is generated by the NLG text generation module and submitted to the knowledge base for knowledge retrieval. If no end word is received, the next dialogue round is entered, and the historical dialogue text is spliced with the current dialogue to repeat the above process.
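The interaction flow of FIG. 3 can be sketched as the following loop; all module callables (nlu, nlg, kb, chitchat) and the end word are hypothetical placeholders for the components described above.

```python
def handle_turn(history, utterance, nlu, nlg, kb, chitchat):
    """One dialogue round: classify, optionally retrieve knowledge, then reply."""
    if nlu(history, utterance):                 # NLU classification: does this turn reference knowledge?
        search_text = nlg(history, utterance)   # NLG generation: produce retrievable search content
        return kb(search_text)                  # knowledge base retrieval and reply construction
    return chitchat(utterance)                  # generic chitchat reply, no retrieval

def dialogue_loop(nlu, nlg, kb, chitchat, end_word="goodbye"):
    history = []
    while True:
        utterance = input("> ")
        if utterance.strip() == end_word:       # stop when the end word is received
            break
        reply = handle_turn(history, utterance, nlu, nlg, kb, chitchat)
        print(reply)
        history.extend([utterance, reply])      # splice this round into the next round's history
```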
Multi-round dialogue in a home scenario belongs to open-domain dialogue; user interaction texts are more random than in fixed-domain multi-round dialogue, and the central words of the text are very likely to fall outside the domain dictionary, so that the dialogue topic cannot be identified. To solve this problem, the embodiment of the application uses the generative model, in combination with the user's historical dialogue, to directly generate a search statement that the knowledge base can recognize, thereby avoiding retrieval failures.
In addition, multi-round dialogue in a home scenario is random, and situations may occur where the dialogue is only chitchat and no knowledge needs to be introduced; forcibly introducing knowledge easily annoys the user. To solve this problem, the embodiment of the application builds a joint model on the basis of text generation and introduces a text classification task to judge whether the current dialogue needs to reference knowledge. When knowledge is not needed, the current dialogue only needs to be combined with the historical information, and knowledge retrieval is not performed for every sentence. On this basis, the user experience is improved and the question-answering efficiency is also improved.
In a home scenario, the user rarely engages in pure chitchat with the device; the conversation usually includes some purpose, for example asking about the brand of the air conditioner, TV dramas a star has appeared in, or songs sung by a singer. What these questions have in common is that a certain amount of knowledge is needed to answer them correctly. For such questions, the embodiment of the application builds two knowledge bases: one is a whole-house intelligence knowledge base covering the information of the current household's intelligent devices; the other is a general-scenario knowledge base containing knowledge from various major domains. Meanwhile, knowledge information is introduced into the multi-round dialogue, so that multi-round knowledge dialogue is realized and the question-answering efficiency of multi-round dialogue is improved.
According to the embodiment of the application, the open-domain scene corpus is pre-trained with the pre-training model, and the NLU task and the NLG task are considered simultaneously, so that while judging whether to reference knowledge, appropriate search keywords can also be generated from the user's semantic information to query the knowledge graph; based on this, the efficiency and accuracy of intelligent dialogue are improved.
The following describes the multi-round dialogue interaction processing device provided by the embodiment of the application, and the multi-round dialogue interaction processing device described below and the multi-round dialogue interaction processing method described above can be referred to correspondingly.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a multi-round dialogue interaction processing device provided by an embodiment of the present application. The multi-round dialogue interaction processing device provided by the embodiment of the present application includes a collection module 601, a semantic information determination module 602, a search text determination module 603, and a response information generation module 604.
The collection module 601 is configured to collect historical session data and current session data, input the historical session data and the current session data into a pre-trained joint model, and obtain a historical session vector and a current session vector output by the joint model;
the semantic information determining module 602 is configured to splice the historical dialogue data and the current dialogue data to obtain spliced data, input the spliced data to the joint model, and obtain vectorized semantic information output by the joint model;
a search text determining module 603, configured to determine that text search is required based on the historical dialogue vector and the current dialogue vector, and obtain a search text based on the vectorized semantic information;
a response information generating module 604, configured to generate response information based on the search text;
The joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
The multi-round dialogue interaction processing device provided by the embodiment of the application collects historical dialogue data and current dialogue data, inputs them into a pre-trained joint model, and obtains the historical dialogue vector and current dialogue vector output by the joint model; splices the historical dialogue data and the current dialogue data to obtain spliced data, inputs the spliced data into the joint model, and obtains the vectorized semantic information output by the joint model; determines, based on the historical dialogue vector and the current dialogue vector, that text retrieval is required, and obtains the search text based on the vectorized semantic information; and generates response information based on the search text. The joint model is obtained by training a preset model with sample data, where the sample data comprises sample dialogue data together with its knowledge search content labels and knowledge classification labels. According to the embodiment of the application, the joint model is obtained by pre-training on open-domain scene corpora, so that while judging whether to reference knowledge, appropriate search keywords can also be generated from the user's semantic information to query knowledge; based on this, the efficiency and accuracy of intelligent dialogue are improved.
In one embodiment, the retrieve text determination module 603 is specifically configured to:
splicing the historical dialogue vector and the current dialogue vector to obtain a spliced vector;
inputting the spliced vector to a first decoder module, and obtaining a first decoding result output by the first decoder module;
if the first decoding result is the first set value, text retrieval is needed.
In one embodiment, the retrieve text determination module 603 is further configured to:
if the first decoding result is the second set value, text retrieval is not needed, and generic chitchat response information is generated.
In one embodiment, the retrieve text determination module 603 is specifically configured to:
inputting the vectorized semantic information to a second decoder module, and obtaining a second decoding result output by the second decoder module;
and acquiring the search text based on the second decoding result.
In one embodiment, the answer information generation module 604 is specifically configured to:
extracting target text from a knowledge base based on the search text;
and generating the response information based on the target text.
In one embodiment, the model training module is specifically configured to:
Collecting sample dialogue data;
performing knowledge search content labeling and knowledge classification labeling on the sample dialogue data to generate the sample data;
and training a preset model by adopting the sample data to obtain the joint model.
In one embodiment, the model training module is specifically configured to:
labeling the sample dialogue data of the current turn as retrievable content of a knowledge base;
the sample dialogue data for each round is labeled as requiring reference knowledge or not requiring reference knowledge.
Fig. 7 illustrates a physical schematic diagram of an electronic device, as shown in fig. 7, which may include: processor 710, communication interface (Communication Interface) 720, memory 730, and communication bus 770, wherein processor 710, communication interface 720, memory 730 communicate with each other via communication bus 770. Processor 710 may call a computer program in memory 730 to perform the steps of the multi-round dialog interaction handling method, including, for example:
collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
Splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining vectorized semantic information output by the joint model;
based on the historical dialogue vector and the current dialogue vector, determining that text retrieval is required, and acquiring retrieval text based on the vectorized semantic information;
generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
Further, the logic instructions in the memory 730 described above may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, an embodiment of the present application further provides a non-transitory computer readable storage medium, on which a computer program is stored, where the computer program is implemented when executed by a processor to perform the steps of the multi-round dialogue interaction processing method provided in the foregoing embodiments, for example, includes:
collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining vectorized semantic information output by the joint model;
based on the historical dialogue vector and the current dialogue vector, determining that text retrieval is required, and acquiring retrieval text based on the vectorized semantic information;
generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
In another aspect, embodiments of the present application further provide a computer program product, where the computer program product includes a computer program, where the computer program may be stored on a non-transitory computer readable storage medium, where the computer program when executed by a processor is capable of executing the steps of the multi-round dialogue interaction processing method provided in the foregoing embodiments, where the steps include:
collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining vectorized semantic information output by the joint model;
based on the historical dialogue vector and the current dialogue vector, determining that text retrieval is required, and acquiring retrieval text based on the vectorized semantic information;
generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A multi-round dialogue interaction processing method, comprising:
collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model, and obtaining vectorized semantic information output by the joint model;
based on the historical dialogue vector and the current dialogue vector, determining that text retrieval is required, and acquiring search text based on the vectorized semantic information;
generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
2. The multi-round dialogue interaction processing method of claim 1, wherein the determining that text retrieval is required based on the historical dialogue vector and the current dialogue vector comprises:
splicing the historical dialogue vector and the current dialogue vector to obtain a spliced vector;
inputting the spliced vector to a first decoder module, and obtaining a first decoding result output by the first decoder module;
if the first decoding result is a first set value, determining that text retrieval is required.
3. The multi-turn dialogue interactive processing method according to claim 2, wherein after inputting the spliced vector to a first decoder module and obtaining a first decoding result output by the first decoder module, the method further comprises:
if the first decoding result is the second set value, text retrieval is not needed, and universal boring response information is generated.
4. The multi-round dialogue interaction processing method according to claim 1, wherein the acquiring the search text based on the vectorized semantic information includes:
inputting the vectorized semantic information to a second decoder module, and obtaining a second decoding result output by the second decoder module;
and acquiring the search text based on the second decoding result.
5. The multi-round dialogue interaction processing method according to claim 1, wherein the generating response information based on the search text includes:
extracting target text from a knowledge base based on the search text;
and generating the response information based on the target text.
6. The multi-round dialogue interaction processing method according to claim 1, wherein the joint model is trained based on the following steps:
collecting sample dialogue data;
performing knowledge search content labeling and knowledge classification labeling on the sample dialogue data to generate the sample data;
and training a preset model by adopting the sample data to obtain the joint model.
7. The method of claim 6, wherein the performing knowledge search content annotation and knowledge classification annotation on the sample dialogue data comprises:
labeling the sample dialogue data of the current round as retrievable content of a knowledge base; and
labeling the sample dialogue data of each round as requiring reference knowledge or not requiring reference knowledge.
8. A multi-round dialogue interaction processing device, comprising:
the collection module is used for collecting historical dialogue data and current dialogue data, inputting the historical dialogue data and the current dialogue data into a pre-trained joint model, and obtaining a historical dialogue vector and a current dialogue vector output by the joint model;
the semantic information determining module is used for splicing the historical dialogue data and the current dialogue data to obtain spliced data, inputting the spliced data into the joint model and obtaining vectorized semantic information output by the joint model;
the search text determining module is used for determining that text retrieval is required based on the historical dialogue vector and the current dialogue vector, and acquiring search text based on the vectorized semantic information;
the response information generation module is used for generating response information based on the search text;
the joint model is obtained by training a preset model by adopting sample data, wherein the sample data comprises sample dialogue data, knowledge searching content labels and knowledge classification labels.
9. An electronic device comprising a processor and a memory storing a computer program, characterized in that the processor implements the steps of the multi-round dialogue interaction processing method of any one of claims 1 to 7 when executing the computer program.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the multi-round dialogue interaction processing method of any one of claims 1 to 7.
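As a purely illustrative complement to claims 6 and 7 above, the sketch below shows one way the labelled sample data might be represented. The field names and the 0/1 encoding of the knowledge classification label are assumptions made for the example, not terms defined by the claims.

```python
# Hypothetical representation of the labelled sample data; names and encodings
# are illustrative assumptions, not definitions from the claims.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class LabelledTurn:
    dialogue_text: str                # sample dialogue data of this round
    needs_knowledge: int              # knowledge classification label: 1 = requires
                                      # reference knowledge, 0 = does not
    kb_content: Optional[str] = None  # knowledge search content label: retrievable
                                      # knowledge-base content for the current round


@dataclass
class Sample:
    turns: List[LabelledTurn] = field(default_factory=list)


# Example: a two-round sample in which only the second round needs the knowledge base.
sample = Sample(turns=[
    LabelledTurn("Hi, what can you do?", needs_knowledge=0),
    LabelledTurn("How do I change my data plan?", needs_knowledge=1,
                 kb_content="Data plans can be changed on the account settings page."),
])
```

A preset model trained on samples of this kind can learn, in a single forward pass, both whether a round requires reference knowledge and what content should be retrieved for it, which corresponds to the joint model used in claim 1.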
CN202310950620.0A 2023-07-31 2023-07-31 Multi-round dialogue interaction processing method, device, equipment and storage medium Pending CN116910220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310950620.0A CN116910220A (en) 2023-07-31 2023-07-31 Multi-round dialogue interaction processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310950620.0A CN116910220A (en) 2023-07-31 2023-07-31 Multi-round dialogue interaction processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116910220A true CN116910220A (en) 2023-10-20

Family

ID=88366639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310950620.0A Pending CN116910220A (en) 2023-07-31 2023-07-31 Multi-round dialogue interaction processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116910220A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556061A (en) * 2023-11-20 2024-02-13 曾昭涵 Text output method and device, electronic equipment and storage medium
CN117556061B (en) * 2023-11-20 2024-05-24 曾昭涵 Text output method and device, electronic equipment and storage medium
CN117557674A (en) * 2024-01-11 2024-02-13 宁波特斯联信息科技有限公司 Picture processing method, device, equipment and storage medium based on man-machine interaction
CN117557674B (en) * 2024-01-11 2024-04-26 宁波特斯联信息科技有限公司 Picture processing method, device, equipment and storage medium based on man-machine interaction
CN117972075A (en) * 2024-03-28 2024-05-03 华南理工大学 Mental and language agent cooperative emotion dialogue generation method

Similar Documents

Publication Publication Date Title
CN108446286B (en) Method, device and server for generating natural language question answers
CN108711420B (en) Multilingual hybrid model establishing method, multilingual hybrid model establishing device, multilingual hybrid model data obtaining device and electronic equipment
CN111339283B (en) Method and device for providing customer service answers aiming at user questions
CN106328147B (en) Speech recognition method and device
CN116910220A (en) Multi-round dialogue interaction processing method, device, equipment and storage medium
CN111177359A (en) Multi-turn dialogue method and device
CN112487139A (en) Text-based automatic question setting method and device and computer equipment
CN115292461B (en) Man-machine interaction learning method and system based on voice recognition
CN113672708A (en) Language model training method, question and answer pair generation method, device and equipment
CN110597968A (en) Reply selection method and device
CN113392265A (en) Multimedia processing method, device and equipment
CN111353026A (en) Intelligent law attorney assistant customer service system
CN115640530A (en) Combined analysis method for dialogue sarcasm and emotion based on multi-task learning
CN117235213A (en) Interactive customer service method and system
CN114003700A (en) Method and system for processing session information, electronic device and storage medium
CN116522905B (en) Text error correction method, apparatus, device, readable storage medium, and program product
CN111046674B (en) Semantic understanding method and device, electronic equipment and storage medium
CN112257432A (en) Self-adaptive intention identification method and device and electronic equipment
CN116186244A (en) Method for generating text abstract, method and device for training abstract generation model
CN114238595A (en) Metallurgical knowledge question-answering method and system based on knowledge graph
CN114154517A (en) Deep learning-based dialogue quality assessment method and system
CN113744737B (en) Training of speech recognition model, man-machine interaction method, equipment and storage medium
CN116955579B (en) Chat reply generation method and device based on keyword knowledge retrieval
CN117453895B (en) Intelligent customer service response method, device, equipment and readable storage medium
CN116010583B (en) Cascade coupling knowledge enhancement dialogue generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination