CN111858888A - Multi-round dialogue system of check-in scene - Google Patents

Multi-round dialogue system of check-in scene

Info

Publication number
CN111858888A
CN111858888A CN202010666709.0A
Authority
CN
China
Prior art keywords
conversation
strategy
value
intention
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010666709.0A
Other languages
Chinese (zh)
Other versions
CN111858888B (en)
Inventor
张日崇
王苏羽晨
张延钊
田源
陈俊帆
胡志元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010666709.0A priority Critical patent/CN111858888B/en
Publication of CN111858888A publication Critical patent/CN111858888A/en
Application granted granted Critical
Publication of CN111858888B publication Critical patent/CN111858888B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/237Lexical tools
    • G06F40/247Thesauruses; Synonyms
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The multi-round dialogue system for the check-in scene provided by the invention forms the overall framework of the system from a user input module, a data processing module, a multi-round dialogue operation module for the check-in scene, a natural language generation module and a dialogue output module, which are logically connected in sequence. Within the multi-round dialogue operation module for the check-in scene, the main algorithmic framework combines a machine learning model with an artificial rule model. Under the data scale and data characteristics of the check-in scene, this improves the performance of the model on labels with little data, while the artificial rule model further improves the robustness of the machine learning model. As a result, the state management module can complete a relatively large-scale multi-classification task on a small data set, adapts to the characteristics of the data set in this field to the greatest extent, and provides accurate output to the user.

Description

Multi-round dialogue system of check-in scene
Technical Field
The invention relates to the field of natural language processing, in particular to a multi-turn dialogue method and a multi-turn dialogue system for the check-in scene.
Background
The intelligent human-machine dialogue system is one of the important problems in natural language processing. With the rapid development of deep learning technologies, the emergence of more completely labeled data sets driven by big data, and the growing richness of application scenarios, research on human-machine dialogue systems has in recent years become a hot area for both industry and academia.
Currently, human-machine dialogue systems are divided into two main categories according to the type of task they perform: open-domain dialogue systems and task-oriented dialogue systems. The former carries out chit-chat and aims to maintain continuous communication with the user in the open domain for as long as possible; the latter aims to fulfill the user's task needs in a specific domain.
As intelligent customer service, a task-oriented dialogue system has advantages over human customer service such as short response time, reliable performance on mechanical work, and avoidance of privacy disclosure. As a new generation of human-computer interface, the text interface used by a task-oriented dialogue system, compared with a traditional graphical user interface, lets the user conveniently find the required functions and complete multiple tasks simply by typing text; it is highly flexible and reduces the user's cost of learning to use the APP.
For human-machine dialogue systems in the aviation field, operations in some simple scenarios, such as asking for the check-in time, asking why check-in is not possible, or requesting a seat change, often lead the user to seek customer-service assistance, and such mechanical operations are well suited to being completed by a human-machine dialogue system. Moreover, when a user makes a customer-service enquiry through a software system in the aviation field, a fast reply is usually expected, which also suits a human-machine dialogue system.
At present, dialogue system models in the research community are developed on large-scale, completely labeled multi-turn dialogue data sets, in which the number of dialogue intent types the system must handle is small and the information provided by the user is mostly plain text. The development of a dialogue system for the aviation field therefore has its own particularities, and it is not sufficient to build the dialogue state management module using only the existing machine learning models from the research field.
The invention aims to design and realize an intelligent customer-service dialogue method applicable to an aviation-field system, and a system applying the method, so that it adapts to the characteristics of the data set in this field to the greatest extent, makes up for the shortcomings of existing models in engineering applications, and provides output that is as accurate as possible to the user.
Disclosure of Invention
The invention realizes a multi-round dialogue system for the check-in scene, which comprises a user input module, a data processing module, a multi-round dialogue operation module for the check-in scene, a natural language generation module and a dialogue output module, which are logically connected in sequence;
The user input module acquires a standard data set of related information in a user input mode;
the data processing module comprises a data cleaning step, a data sorting step and a final preparation step for the machine learning model input; the data cleaning step comprises: sorting the data set obtained by the user input module and correcting errors in the data set, the correction specifically comprising three parts: merging repeated intentions, identifying slot-value pairs that clearly need to be marked, and ensuring information consistency across the rounds of a conversation, wherein the identified slot-value information comprises flight number, ticket number, mobile phone number, certificate number and travel time; the data sorting step comprises: replacing numerical information and identifying the new state generated by each round; the step of replacing numerical information manually replaces the slot-value pairs; specifically, since the state management module in the multi-round dialogue operation of the check-in scene needs to extract the dialogue state generated by the current round, while the intentions and slot-value pairs marked in the data set are the states of the whole conversation up to the current round, the new state generated by each round of dialogue needs to be re-marked; in the step of identifying the new state generated by each round, the intention and slot-value pairs of each round are compared with those of the previous round, and if the intention has changed, a new slot has been added or the value of a slot has changed, they are extracted as the new state generated by that round of dialogue; the final preparation step for the machine learning model input is vocabulary and ontology extraction, specifically, word segmentation and vocabulary construction are carried out first, then word indexes are associated with word vectors, and then ontology extraction is performed;
The multi-round dialogue operation module for the check-in scene is realized by using a machine learning method and an artificial rule model as the basic algorithm, the data set is then amplified and balanced by a data enhancement means, and finally strategy learning is carried out by a dialogue strategy learning module; specifically, the machine learning method adopts an improved GLAD model, which follows an "embedding, encoding, attention, prediction" process; its inputs are the word index list of the user's current-round input and the word index list of the dialogue strategy adopted by the system in the previous round, and its output is the value filled into each slot, where filling each slot is modeled as a series of classification problems; the improved GLAD model obtains the input values through a user input encoder and a strategy encoder, generating a user input context and a previous-round dialogue strategy context; further, the dialogue strategy context is processed and the user input information is added into it: an attention mechanism calculation is performed on the input context using the previous-round dialogue strategy context to obtain a weighted dialogue strategy context vector, which is then concatenated with the user input encoder result to obtain the current-round dialogue intermediate representation carrying strategy information; because the historical dialogue state must also be considered, a dialogue history tracker for maintaining the dialogue history is constructed, whose input is the current-round dialogue intermediate representation q_k and whose output is the state information of slot k up to the current round of dialogue, the dialogue history tracker adopting a unidirectional GRU; the artificial model is an intention recognition supplementary model used to compensate for the errors produced by the machine learning model on a small-scale data set; intention recognition adopts three strategies, namely hand-written rule matching, keyword extraction with intention comparison, and data set text similarity calculation, and the finally recognized dialogue intention is decided by a vote of the three recognition strategies; the data enhancement means is a modified EDA algorithm; the dialogue strategy learning module uses a strategy generation and text reply model based on a finite state machine, which queries the user for related information by judging whether the extra information required to fulfill the intention is sufficient, so that the required slot value is filled in the next round of dialogue, and the reply strategy selection is obtained by analyzing the correspondence between dialogue states and strategies in the existing data set;
the natural language generation module adopts a matching model; since each strategy is in fact in a one-to-one relationship with a customer service reply, the reply text is generated directly from the dialogue strategy;
And the dialog output module outputs the reply text generated by the natural language generation module through a screen.
The method for word segmentation and vocabulary construction in the final preparation step of the machine learning model input comprises: segmenting the five types of information, namely user input, customer service reply, intention, slot text and value text; establishing a vocabulary based on the segmented words and performing word-vector indexing; and manually adding a start identifier, an end identifier, an unknown identifier and five types of placeholders when constructing the word index, so that these also obtain word indexes in the same way. The association of word indexes with word vectors is realized with the skip-gram + negative sampling method, the word vectors being trained on corpora such as question-and-answer websites, news and literary works at both word and character granularity; the ontology extraction method considers the training set, the validation set and the test set simultaneously when extracting the ontology.
The user input encoder obtains the output value of the final preparation step of the machine learning model input, represented as the user input I, and the strategy encoder obtains the output value of the previous-round dialogue strategy, represented as the previous-round dialogue strategy A. The user input I and the previous-round dialogue strategy A are first padded to accommodate word sequences of different lengths, then looked up in a word embedding LUT which converts each word index into the corresponding Chinese word vector. The encoder generates an intermediate representation of the text using a bidirectional GRU, and this intermediate representation is passed through an attention mechanism dedicated to a specific slot k to obtain the slot-k-specific context vector output. In the slot-k-specific attention mechanism, the attention query is set to a trainable vector unique to slot k in the user input encoder. The previous-round dialogue strategy A passes through a dialogue strategy encoder whose structure is consistent with the user input encoder, but whose bidirectional GRU parameters and slot-specific attention parameters are not shared with the user input encoder, i.e., another trainable query vector must be set for slot k in the strategy encoder.
The hand-written rule matching strategy of the artificial model is as follows: since these intentions contain obvious marker words, matching of the intention text is carried out directly on those text features, with one or more marker words being identified; the implemented system uses consecutive if-else judgments, in which the intention categories that occur rarely in the data set and have obvious text features are judged first, and for easily confused categories the recognition marker words are selected more carefully, so that the intention is recognized accurately. The keyword extraction and intention comparison strategy first uses the TextRank algorithm to extract several keywords from the text input by the user; after extraction, the keywords are concatenated into a text, and similarity is calculated between this text and all the intention categories; finally, the intention with the minimum distance from the user's keyword-sequence text is taken as the dialogue intention obtained by this strategy. The data set text similarity calculation strategy compares the user's input text with the existing user input texts in the data set, finds the item with the maximum similarity, and takes the intention marked for it in the data set as the dialogue intention obtained by this strategy.
The improved EDA algorithm is as follows: when the EDA algorithm is applied, in order to avoid semantic changes caused by excessive modification, only one of the four strategies is chosen at random each time and only one operation is performed; in order to avoid overfitting caused by data amplification, the EDA operations are applied only to the pre-divided training set.
The technical effects are as follows:
The multi-turn dialogue method and system for the check-in scene enable the state management module to complete a relatively large-scale multi-classification task on a small data set, adapt to the characteristics of the data set in this field to the greatest extent, and provide accurate output to the user.
Drawings
FIG. 1: algorithm model overall structure
FIG. 2: machine learning model structure
FIG. 3: encoder structure
FIG. 4: system global logic architecture
Detailed Description
Referring to fig. 1-4, a multi-turn dialogue system for the check-in scene includes a user input module, a data processing module, a multi-round dialogue operation module for the check-in scene, a natural language generation module, and a dialogue output module, which are logically connected in sequence. The user input module collects a data set of related information that meets the format standard through existing interactive input modes such as keyboard and voice. Considering the scale of the data set, the multi-round dialogue operation module for the check-in scene is then formed, on the model side, by combining a machine learning model with an artificial rule model, which improves the performance of the model on labels with little data, while the artificial rule model further improves the robustness of the machine learning model. On the data set side, in order to solve the problems of small data volume and large differences in the amount of data per label, a data enhancement module is used to complete and expand the data. The overall structure of the model of the multi-round dialogue operation module is shown in fig. 1.
Data processing module
The first step after the data processing module obtains the data set is data sorting, whose purpose is to correct the errors in the data set. The obtained data set suffers from problems such as too many categories, repeated categories, and failure to extract some of the necessary information, so data cleaning must be performed manually before the subsequent steps; it mainly comprises three parts: merging repeated intentions, identifying slot-value pairs that clearly need to be marked, and ensuring information consistency across the rounds of a conversation. First, because the data set is small while the number of dialogue intent categories is large, repeated intent types must be recognized and merged. Second, in dialogue state management the dialogue state consists of two parts, the intent and the slot-value pairs: the former represents what the user wants to accomplish with the conversation, and the latter captures the extra information the user needs to supply to accomplish it. In the check-in scenario, some of the numerical information mentioned by the user is likely to be used in the dialogue; such numerical information includes flight number, ticket number, mobile phone number, certificate number and travel time, so the slot-value pairs that clearly need to be marked must be identified. Finally, it must be ensured as far as possible that the same intent is recognized when user inputs are similar; in the data set there are cases where the user inputs are identical but the labeled intents differ, and these need manual correction. For slot-value pairs, once a slot-value pair of numerical information appears in a conversation, if the information filled into the slot is not modified in the subsequent dialogue, that slot-value pair should be included in the annotation of the round in which it appears and of all subsequent rounds.
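A minimal sketch of how the numeric slot values listed above could be identified is given below; the regular expressions are assumptions about the formats, which the patent does not specify.

```python
import re

# Hypothetical patterns for the numeric slot values listed above; the exact
# formats used in the data set are not specified in the patent.
SLOT_PATTERNS = {
    "flight_number": re.compile(r"[A-Z]{2}\d{3,4}"),                # e.g. CA1234
    "ticket_number": re.compile(r"\d{13}"),                         # 13-digit e-ticket number (assumed)
    "phone_number": re.compile(r"1\d{10}"),                         # 11-digit mobile number
    "certificate_number": re.compile(r"\d{17}[\dXx]"),              # 18-character ID number (assumed)
    "travel_time": re.compile(r"\d{1,2}月\d{1,2}日|\d{1,2}[点时]"),  # rough date/time pattern
}

def extract_numeric_slots(utterance: str) -> dict:
    """Return the slot-value pairs found in one user utterance."""
    slots = {}
    for slot, pattern in SLOT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            slots[slot] = match.group(0)
    return slots

print(extract_numeric_slots("我的航班是CA1234，手机号13812345678"))
# -> {'flight_number': 'CA1234', 'phone_number': '13812345678'}
```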
Further sorting is needed after the data has been cleaned. The data sorting mainly comprises two parts: replacing numerical information and identifying the new state generated by each round. First, because the machine learning model uses pre-trained Chinese word vectors, it is hard to obtain word vectors for numerical information such as flight numbers, ticket numbers and certificate numbers, and processing such numbers directly would likely lose the information, so it is replaced manually. Second, since the state management module needs to extract the dialogue state generated by the current round, while the intent and slot-value pairs marked in the data set are the state of the whole conversation up to the current round, the new state generated by each round must be re-marked. This is implemented by comparing the intent and slot-value pairs of each round with the previous round and extracting them as the new state if the intent has changed, a new slot has been added, or the value of a slot has changed.
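The round-by-round comparison can be pictured with the following sketch; the dictionary layout and label names are illustrative assumptions, not the annotation schema of the data set.

```python
def extract_new_state(prev_state: dict, curr_state: dict) -> dict:
    """Extract the dialogue state newly produced by the current round.

    prev_state / curr_state are the cumulative annotations up to the previous
    and the current round, e.g.
        {"intent": "query_checkin_time", "slots": {"flight_number": "CA1234"}}
    """
    new_state = {"intent": None, "slots": {}}
    # the intent is re-recorded only when it changes between rounds
    if curr_state.get("intent") != prev_state.get("intent"):
        new_state["intent"] = curr_state.get("intent")
    # a slot is re-recorded when it is newly added or its value has changed
    for slot, value in curr_state.get("slots", {}).items():
        if prev_state.get("slots", {}).get(slot) != value:
            new_state["slots"][slot] = value
    return new_state
```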
The processed data set then requires the final preparation for machine learning model input, namely vocabulary and ontology extraction. First, word segmentation and vocabulary construction are required: the five kinds of information (user input, customer service reply, intent, slot text and value text) are segmented, a vocabulary is built from the segmented words and word-vector indexing is performed, and a start identifier, an end identifier, an unknown identifier and five kinds of placeholders are added manually when constructing the word index so that they also receive word indexes in the same way. The word indexes are then associated with word vectors; the implementation uses the skip-gram + negative sampling method, training the word vectors on corpora such as question-and-answer websites, news and literary works, at both word and character granularity. After that, ontology extraction is required. In a pipeline-style multi-turn dialogue system, the ontology is the set of values that can fill each slot in the dialogue state. For a non-generative dialogue system there are only a limited number of fill-value choices per slot, and since slot filling is modeled as a classification problem, ontology extraction must be performed for each slot before training to determine the classification categories. In this implementation, because the data volume is small, no ontology values may be lost through the splitting of the data set, so in this step the ontology is extracted from the training set, validation set and test set together.
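A minimal sketch of the vocabulary construction and ontology extraction follows; jieba is used only as a stand-in Chinese segmenter (the patent does not name the tool), and the placeholder names are illustrative.

```python
import itertools
import jieba  # stand-in Chinese segmenter; the patent does not name the tool used

# special tokens added by hand so that they also receive word indexes;
# the five placeholder names are illustrative
SPECIAL_TOKENS = ["<sos>", "<eos>", "<unk>",
                  "<flight>", "<ticket>", "<phone>", "<certificate>", "<time>"]

def build_vocab(texts):
    """Segment the texts (user input, customer service reply, intent, slot
    text, value text) and build a word-to-index table."""
    words = set(itertools.chain.from_iterable(jieba.lcut(t) for t in texts))
    return {token: idx for idx, token in enumerate(SPECIAL_TOKENS + sorted(words))}

def extract_ontology(turns):
    """Collect, for every slot, the set of values seen in the training,
    validation and test splits combined (`turns` covers all three)."""
    ontology = {}
    for turn in turns:  # each turn carries its slot-value pairs
        for slot, value in turn["slots"].items():
            ontology.setdefault(slot, set()).add(value)
    return ontology
```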
Machine learning model
The machine learning model of the method is based on the GLAD model proposed in 2018, and partial modification and simplification are performed on the basis of the model. The modified model is schematically shown in fig. 2.
The model adopts the "embedding, encoding, attention, prediction" process commonly used in natural language processing. The input of the model is the word index list I of the user's current-round input and the word index list A of the dialogue strategy adopted by the system in the previous round, and the output is the value filled into each slot. Since the values each slot can take have already been determined in its ontology, filling each slot with a value is in fact modeled as a series of classification problems.
It is worth noting that filling each slot is an independent problem of selecting a value from its ontology, so the filling of each slot is in fact an independent classification problem, and each slot introduces a slot-specific module into the model; at the same time, since the filling problems of all slots are based on the same dialogue data, the filling models of the slots also share common modules. In the following explanation we analyze the filling problem of a particular slot k in detail; only the parts carrying the subscript k are dedicated to the filling of slot k, otherwise the part is shared by the whole system.
For the encoder part, the user input I and the previous-round dialogue strategy A first need to be padded to accommodate word sequences of different lengths, and each word index is then converted into the corresponding Chinese word vector by looking it up in the word embedding LUT. To extract the context information of the text efficiently, the encoder first generates an intermediate representation of the text using a bidirectional GRU. The intermediate representation obtained by the bidirectional GRU is passed through the attention mechanism dedicated to slot k to obtain the slot-specific context vector output. In the slot-k-specific attention mechanism, the attention query is set to a trainable vector unique to slot k in the user input encoder. The structure of the encoder is shown in fig. 3. The previous-round dialogue strategy A likewise passes through a dialogue strategy encoder whose structure matches the user input encoder but which shares neither the bidirectional GRU parameters nor the slot-specific attention parameters with it, i.e. another trainable query vector has to be set for slot k in the strategy encoder.
After the user input context and the previous-round dialogue strategy context have been generated by the encoders, the dialogue strategy context needs further processing, and the user input information is added into it. An attention mechanism calculation is performed on the input context using the previous-round dialogue strategy context, yielding a weighted dialogue strategy context vector. The weighted dialogue strategy context is concatenated with the result of the user input encoder to obtain the intermediate representation of the current round of dialogue carrying strategy information. Because the dialogue state up to the current round in a multi-round dialogue must take the historical dialogue states into account, a tracker for maintaining the dialogue history is constructed; the tracker's input is the intermediate representation q_k of the current round of dialogue, and its output is the state information of slot k up to the current round. In the implementation, the dialogue history tracker uses a unidirectional GRU.
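The encoder of fig. 3 and the dialogue history tracker can be pictured with the following PyTorch sketch. It is a minimal illustration under assumed hyperparameters (embedding size, hidden size, number of slots) and omits padding masks and the strategy-context attention and concatenation step; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotSpecificEncoder(nn.Module):
    """Bidirectional GRU over the word vectors, followed by a slot-specific
    attention whose query is a trainable vector per slot (cf. fig. 3)."""

    def __init__(self, emb_dim=300, hidden=128, num_slots=5):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        # one trainable attention query per slot k
        self.slot_queries = nn.Parameter(torch.randn(num_slots, 2 * hidden))

    def forward(self, emb, slot_k):
        # emb: (batch, seq_len, emb_dim) padded word-vector sequence
        states, _ = self.gru(emb)                    # (batch, seq_len, 2*hidden)
        scores = states @ self.slot_queries[slot_k]  # (batch, seq_len)
        weights = F.softmax(scores, dim=-1).unsqueeze(-1)
        context = (weights * states).sum(dim=1)      # (batch, 2*hidden)
        return states, context

class DialogueHistoryTracker(nn.Module):
    """Unidirectional GRU that folds the per-round representation q_k into
    the state of slot k up to the current round."""

    def __init__(self, input_dim, hidden=128):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden)

    def forward(self, q_k, prev_hidden):
        return self.cell(q_k, prev_hidden)  # new hidden = state up to this round
```

The user input encoder and the dialogue strategy encoder would be two separate instances of SlotSpecificEncoder, since the patent states that their GRU and attention parameters are not shared.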
Artificial model
In order to compensate for the errors produced by the machine learning model on the relatively small data set, the machine learning model is supplemented with an artificial rule model.
In the implementation, since the samples of a large number of intentions are still insufficient to train the machine learning model even after data cleaning, the rare intentions are merged into a single "other" category during training and passed to the machine learning model for training and recognition. The artificial model adopts three strategies for intention recognition, and the finally recognized dialogue intention is decided by a vote of the three recognition strategies.
The first strategy is hand-written rule matching. This strategy is based on the following observation: most of the intentions contained in the "other" category contain obvious marker words, so the intention text can be matched directly on these text features; for some intentions, several marker words may need to be recognized. The implemented system realizes the strategy with consecutive if-else judgments: the intention categories that occur rarely in the data set and have obvious text features are judged first, and for easily confused categories, such as the intentions "already reserved" and "how to view the seat", whose marker words overlap, the recognition markers are chosen more carefully so that the intention is recognized accurately.
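The if-else structure can be pictured with a minimal sketch such as the following; the marker words and intent labels are invented for illustration and are not the categories used in the data set.

```python
def match_intent_by_rules(text: str):
    """Consecutive if-else marker-word matching; rarer, clearly worded
    intents are checked first.  Marker words and labels are invented."""
    if "退票" in text:                       # "refund the ticket"
        return "refund_ticket"
    if "为什么" in text and "值机" in text:  # "why ... check in"
        return "ask_why_cannot_check_in"
    # easily confused categories need more specific marker words
    if "已经" in text and "值机" in text:
        return "already_checked_in"
    if "怎么" in text and "座位" in text:
        return "how_to_view_seat"
    return None                              # fall through to the other strategies
```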
The second strategy is keyword extraction and intention comparison. It is based on the association between intention texts and text keywords: when certain keywords appear in the user's input text, the text is more likely to belong to an intention category that contains the same keywords. First, several keywords are extracted from the user input text with the TextRank algorithm. After extraction, the keywords are concatenated into one text, and similarity is computed between this text and all the intention categories contained in the "other" category. Finally, the intention with the smallest distance from the user's keyword-sequence text is taken as the dialogue intention obtained by this strategy.
The third strategy is data set text similarity calculation. It rests on the assumption that similar texts imply similar dialogue intentions. The user's current input text is compared with the existing user input texts in the data set, the item with the greatest similarity is found, and the intention annotated for it in the data set is taken as the dialogue intention obtained by this strategy.
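Taken together, the three strategies can be pictured as in the following sketch. SequenceMatcher stands in for the unspecified similarity/distance measure, keyword_extractor stands in for the TextRank step, and the intent and data set text dictionaries are illustrative; it reuses match_intent_by_rules from the rule-matching sketch above.

```python
from collections import Counter
from difflib import SequenceMatcher

def most_similar(text: str, candidates: dict) -> str:
    """Label of the candidate text closest to `text`; SequenceMatcher is only
    a stand-in for the unspecified distance measure."""
    return max(candidates,
               key=lambda label: SequenceMatcher(None, text, candidates[label]).ratio())

def recognize_intent(user_text, keyword_extractor, intent_texts, dataset_texts):
    """Majority vote over the three strategies.  `intent_texts` maps intent
    label -> intent text, and `dataset_texts` maps annotated intent label ->
    an existing user utterance from the data set."""
    votes = []
    rule_hit = match_intent_by_rules(user_text)              # strategy 1 (sketch above)
    if rule_hit:
        votes.append(rule_hit)
    keyword_text = "".join(keyword_extractor(user_text))
    votes.append(most_similar(keyword_text, intent_texts))   # strategy 2
    votes.append(most_similar(user_text, dataset_texts))     # strategy 3
    return Counter(votes).most_common(1)[0][0]
```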
Data enhancement
Because of the complexity of annotating dialogue data, the data set for the check-in scene can hardly reach the scale of other natural language processing data sets; in addition, the sample sizes of the intention labels in the multi-turn check-in dialogue data set differ greatly, so the system faces extreme label imbalance during development. The data set therefore needs to be amplified and balanced. Considering that the texts in the data set are short and simply composed, the EDA (Easy Data Augmentation) algorithm proposed in 2019 was chosen for data enhancement. The algorithm has been shown to significantly improve the performance of natural language processing models on small data sets and to reduce overfitting.
The purpose of the EDA algorithm is to generate new text whose semantics are similar to existing text, using four random strategies for data enhancement: the first is synonym replacement, which randomly selects several non-stop words from the text and replaces them with synonyms; the second is random insertion, which randomly picks a non-stop word from the text, obtains one of its synonyms, inserts it at a random position in the sentence, and repeats this several times; the third is random swap, which randomly selects two words in the text and exchanges their positions, repeated several times; the fourth is random deletion, which removes words from the sentence with a fixed probability. In this system some improvements are made to the existing EDA algorithm. Considering that the texts in the data set contain clauses separated by punctuation, and that swapping words across clauses usually changes the semantics greatly, the random swap first splits the text into a clause list at all Chinese and English punctuation marks, then randomly selects a clause containing more than one word and performs the random swap inside it, repeated several times. When the EDA algorithm is applied, only one of the four strategies is chosen at random each time and only one operation is performed, in order to avoid semantic changes caused by excessive modification. To avoid overfitting to the amplified data, the EDA operations are applied only to the pre-divided training set.
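The clause-aware random swap can be pictured with the following sketch; the punctuation set and the assumption that the text has already been segmented into whitespace-separated tokens are illustrative choices, not details given in the patent.

```python
import random
import re

PUNCTUATION = r"[，。！？；,.!?;]"  # Chinese and English clause separators (assumed set)

def split_into_clauses(segmented_text: str):
    """Split a whitespace-segmented sentence into clause token lists."""
    return [c.split() for c in re.split(PUNCTUATION, segmented_text) if c.strip()]

def clause_aware_random_swap(tokens_per_clause, n_swaps=2):
    """Swap word positions only inside one randomly chosen clause, so that
    no swap crosses a clause boundary."""
    candidates = [clause for clause in tokens_per_clause if len(clause) > 1]
    if not candidates:
        return tokens_per_clause
    for _ in range(n_swaps):
        clause = random.choice(candidates)          # clause with more than one word
        a, b = random.sample(range(len(clause)), 2)
        clause[a], clause[b] = clause[b], clause[a]
    return tokens_per_clause

clauses = split_into_clauses("请 帮 我 ， 换 靠窗 座位")
print(clause_aware_random_swap(clauses))
```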
Policy learning and natural language generation
The dialogue strategy learning module uses a strategy generation and text reply model based on a finite state machine; by judging whether the extra information required to fulfill the intention is sufficient, the model queries the user for the related information so that the required slot value is filled in the next round of dialogue, and the reply strategy is selected by analyzing the correspondence between dialogue states and strategies in the existing data set. The natural language generation module uses a matching model: each strategy is in fact in a one-to-one relationship with a customer service reply, so the reply text is generated directly from the dialogue strategy. The overall flow of the dialogue system is shown in fig. 4.
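The finite-state strategy selection and the one-to-one reply matching can be pictured with the following sketch; the intents, required slots and reply templates are illustrative placeholders rather than the ones used in the system.

```python
# Illustrative placeholders; the real intents, required slots and reply
# templates of the system are not disclosed in the text.
REQUIRED_SLOTS = {
    "check_in":    ["flight_number", "certificate_number"],
    "change_seat": ["flight_number", "seat_preference"],
}

REPLY_TEMPLATES = {  # each strategy maps to exactly one reply text
    "request_flight_number":      "请提供您的航班号。",
    "request_certificate_number": "请提供您的证件号。",
    "request_seat_preference":    "请问您想换到什么座位？",
    "confirm_check_in":           "已为您办理值机。",
    "confirm_change_seat":        "已为您更换座位。",
}

def select_strategy(state: dict) -> str:
    """Ask for the first missing slot of the current intent; once every
    required slot is filled, move to the confirmation state."""
    intent, slots = state["intent"], state["slots"]
    for slot in REQUIRED_SLOTS.get(intent, []):
        if slot not in slots:
            return f"request_{slot}"
    return f"confirm_{intent}"

def generate_reply(strategy: str) -> str:
    """Natural language generation by direct matching (one-to-one)."""
    return REPLY_TEMPLATES[strategy]

# example: one required slot is still missing, so the policy asks for it
print(generate_reply(select_strategy(
    {"intent": "check_in", "slots": {"flight_number": "CA1234"}})))
# -> 请提供您的证件号。
```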

Claims (5)

1. A multi-turn dialogue system for a check-in scene, comprising: a user input module, a data processing module, a multi-round dialogue operation module for the check-in scene, a natural language generation module and a dialogue output module, which are logically connected in sequence;
the user input module acquires a standard data set of related information in a user input mode;
the data processing module comprises a data cleaning step, a data sorting step and a final preparation step for the machine learning model input; the data cleaning step comprises: sorting the data set obtained by the user input module and correcting errors in the data set, the correction specifically comprising three parts: merging repeated intentions, identifying slot-value pairs that clearly need to be marked, and ensuring information consistency across the rounds of a conversation, wherein the identified slot-value information comprises flight number, ticket number, mobile phone number, certificate number and travel time; the data sorting step comprises: replacing numerical information and identifying the new state generated by each round; the step of replacing numerical information manually replaces the slot-value pairs; specifically, since the state management module in the multi-round dialogue operation of the check-in scene needs to extract the dialogue state generated by the current round, while the intentions and slot-value pairs marked in the data set are the states of the whole conversation up to the current round, the new state generated by each round of dialogue needs to be re-marked; in the step of identifying the new state generated by each round, the intention and slot-value pairs of each round are compared with those of the previous round, and if the intention has changed, a new slot has been added or the value of a slot has changed, they are extracted as the new state generated by that round of dialogue; the final preparation step for the machine learning model input is vocabulary and ontology extraction, specifically, word segmentation and vocabulary construction are carried out first, then word indexes are associated with word vectors, and then ontology extraction is performed;
The multi-round dialogue operation module for the check-in scene is realized by using a machine learning method and an artificial rule model as the basic algorithm, the data set is then amplified and balanced by a data enhancement means, and finally strategy learning is carried out by a dialogue strategy learning module; specifically, the machine learning method adopts an improved GLAD model, which follows an "embedding, encoding, attention, prediction" process; its inputs are the word index list of the user's current-round input and the word index list of the dialogue strategy adopted by the system in the previous round, and its output is the value filled into each slot, where filling each slot is modeled as a series of classification problems; the improved GLAD model obtains the input values through a user input encoder and a strategy encoder, generating a user input context and a previous-round dialogue strategy context; further, the dialogue strategy context is processed and the user input information is added into it: an attention mechanism calculation is performed on the input context using the previous-round dialogue strategy context to obtain a weighted dialogue strategy context vector, which is then concatenated with the user input encoder result to obtain the current-round dialogue intermediate representation carrying strategy information; because the historical dialogue state must also be considered, a dialogue history tracker for maintaining the dialogue history is constructed, whose input is the current-round dialogue intermediate representation q_k and whose output is the state information of slot k up to the current round of dialogue, the dialogue history tracker adopting a unidirectional GRU; the artificial model is an intention recognition supplementary model used to compensate for the errors produced by the machine learning model on a small-scale data set; intention recognition adopts three strategies, namely hand-written rule matching, keyword extraction with intention comparison, and data set text similarity calculation, and the finally recognized dialogue intention is decided by a vote of the three recognition strategies; the data enhancement means is a modified EDA algorithm; the dialogue strategy learning module uses a strategy generation and text reply model based on a finite state machine, which queries the user for related information by judging whether the extra information required to fulfill the intention is sufficient, so that the required slot value is filled in the next round of dialogue, and the reply strategy selection is obtained by analyzing the correspondence between dialogue states and strategies in the existing data set;
the natural language generation module adopts a matching model; since each strategy is in fact in a one-to-one relationship with a customer service reply, the reply text is generated directly from the dialogue strategy;
and the dialog output module outputs the reply text generated by the natural language generation module through a screen.
2. A multi-turn dialogue system for a check-in scene according to claim 1, characterized in that: the method for word segmentation and vocabulary construction in the final preparation step of the machine learning model input comprises: segmenting the five types of information, namely user input, customer service reply, intention, slot text and value text; establishing a vocabulary based on the segmented words and performing word-vector indexing; and manually adding a start identifier, an end identifier, an unknown identifier and five types of placeholders when constructing the word index, so that these also obtain word indexes in the same way; the association of word indexes with word vectors is realized with the skip-gram + negative sampling method, the word vectors being trained on corpora such as question-and-answer websites, news and literary works at both word and character granularity; the ontology extraction method considers the training set, the validation set and the test set simultaneously when extracting the ontology.
3. A multi-turn dialogue system for a check-in scene according to claim 2, characterized in that: the user input encoder obtains the output value of the final preparation step of the machine learning model input, represented as the user input I, and the strategy encoder obtains the output value of the previous-round dialogue strategy, represented as the previous-round dialogue strategy A; the user input I and the previous-round dialogue strategy A are first padded to accommodate word sequences of different lengths, then looked up in a word embedding LUT which converts each word index into the corresponding Chinese word vector; the encoder generates an intermediate representation of the text using a bidirectional GRU, and this intermediate representation is passed through an attention mechanism dedicated to a specific slot k to obtain the slot-k-specific context vector output; in the slot-k-specific attention mechanism, the attention query is set to a trainable vector unique to slot k in the user input encoder; the previous-round dialogue strategy A passes through a dialogue strategy encoder whose structure is consistent with the user input encoder, but whose bidirectional GRU parameters and slot-specific attention parameters are not shared with the user input encoder, i.e., another trainable query vector must be set for slot k.
4. A multi-turn dialogue system for a check-in scene according to claim 3, characterized in that: the hand-written rule matching strategy of the artificial model is: since these intentions contain obvious marker words, matching of the intention text is carried out directly on those text features, with one or more marker words being identified; the implemented system uses consecutive if-else judgments, in which the intention categories that occur rarely in the data set and have obvious text features are judged first, and for easily confused categories the recognition marker words are selected more carefully, so that the intention is recognized accurately; the keyword extraction and intention comparison strategy first uses the TextRank algorithm to extract several keywords from the text input by the user; after extraction, the keywords are concatenated into a text, and similarity is calculated between this text and all the intention categories; finally, the intention with the minimum distance from the user's keyword-sequence text is taken as the dialogue intention obtained by this strategy; the data set text similarity calculation strategy compares the user's input text with the existing user input texts in the data set, finds the item with the maximum similarity, and takes the intention marked for it in the data set as the dialogue intention obtained by this strategy.
5. A multi-turn dialogue system for a check-in scene according to claim 4, characterized in that: the improved EDA algorithm is: when the EDA algorithm is applied, in order to avoid semantic changes caused by excessive modification, only one of the four strategies is chosen at random each time and only one operation is performed; in order to avoid overfitting caused by data amplification, the EDA operations are applied only to the pre-divided training set.
CN202010666709.0A 2020-07-13 2020-07-13 Multi-round dialogue system of check-in scene Active CN111858888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010666709.0A CN111858888B (en) 2020-07-13 2020-07-13 Multi-round dialogue system of check-in scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010666709.0A CN111858888B (en) 2020-07-13 2020-07-13 Multi-round dialogue system of check-in scene

Publications (2)

Publication Number Publication Date
CN111858888A true CN111858888A (en) 2020-10-30
CN111858888B CN111858888B (en) 2023-05-30

Family

ID=72984398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010666709.0A Active CN111858888B (en) 2020-07-13 2020-07-13 Multi-round dialogue system of check-in scene

Country Status (1)

Country Link
CN (1) CN111858888B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112632259A (en) * 2020-12-30 2021-04-09 中通天鸿(北京)通信科技股份有限公司 Automatic dialog intention recognition system based on linguistic rule generation
CN113326702A (en) * 2021-06-11 2021-08-31 北京猎户星空科技有限公司 Semantic recognition method and device, electronic equipment and storage medium
CN114416971A (en) * 2021-11-10 2022-04-29 北京邮电大学 Equipment intention analysis method and device based on artificial intelligence and electronic equipment
CN114970559A (en) * 2022-05-18 2022-08-30 马上消费金融股份有限公司 Intelligent response method and device
CN117093698A (en) * 2023-10-19 2023-11-21 四川蜀天信息技术有限公司 Knowledge base-based dialogue generation method and device, electronic equipment and storage medium
CN117271290A (en) * 2023-11-20 2023-12-22 北京智谱华章科技有限公司 Fair and efficient multi-dialogue system evaluation system and method
CN117909487A (en) * 2024-03-20 2024-04-19 北方健康医疗大数据科技有限公司 Medical question-answering service method, system, device and medium for old people

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493166A (en) * 2018-10-23 2019-03-19 深圳智能思创科技有限公司 A kind of construction method for e-commerce shopping guide's scene Task conversational system
US10303978B1 (en) * 2018-03-26 2019-05-28 Clinc, Inc. Systems and methods for intelligently curating machine learning training data and improving machine learning model performance
CN110175228A (en) * 2019-05-27 2019-08-27 苏州课得乐教育科技有限公司 Based on basic module and the loop embedding of machine learning dialogue training method and system
CN110209791A (en) * 2019-06-12 2019-09-06 百融云创科技股份有限公司 It is a kind of to take turns dialogue intelligent speech interactive system and device more

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303978B1 (en) * 2018-03-26 2019-05-28 Clinc, Inc. Systems and methods for intelligently curating machine learning training data and improving machine learning model performance
CN109493166A (en) * 2018-10-23 2019-03-19 深圳智能思创科技有限公司 A kind of construction method for e-commerce shopping guide's scene Task conversational system
CN110175228A (en) * 2019-05-27 2019-08-27 苏州课得乐教育科技有限公司 Based on basic module and the loop embedding of machine learning dialogue training method and system
CN110209791A (en) * 2019-06-12 2019-09-06 百融云创科技股份有限公司 It is a kind of to take turns dialogue intelligent speech interactive system and device more

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘继明; 孟亚磊; 万晓榆: "Cross-task dialogue system based on small-sample machine learning", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112632259A (en) * 2020-12-30 2021-04-09 中通天鸿(北京)通信科技股份有限公司 Automatic dialog intention recognition system based on linguistic rule generation
CN113326702A (en) * 2021-06-11 2021-08-31 北京猎户星空科技有限公司 Semantic recognition method and device, electronic equipment and storage medium
CN113326702B (en) * 2021-06-11 2024-02-20 北京猎户星空科技有限公司 Semantic recognition method, semantic recognition device, electronic equipment and storage medium
CN114416971A (en) * 2021-11-10 2022-04-29 北京邮电大学 Equipment intention analysis method and device based on artificial intelligence and electronic equipment
CN114416971B (en) * 2021-11-10 2024-09-06 北京邮电大学 Equipment intention analysis method and device based on artificial intelligence and electronic equipment
CN114970559A (en) * 2022-05-18 2022-08-30 马上消费金融股份有限公司 Intelligent response method and device
CN114970559B (en) * 2022-05-18 2024-02-02 马上消费金融股份有限公司 Intelligent response method and device
CN117093698A (en) * 2023-10-19 2023-11-21 四川蜀天信息技术有限公司 Knowledge base-based dialogue generation method and device, electronic equipment and storage medium
CN117093698B (en) * 2023-10-19 2024-01-23 四川蜀天信息技术有限公司 Knowledge base-based dialogue generation method and device, electronic equipment and storage medium
CN117271290A (en) * 2023-11-20 2023-12-22 北京智谱华章科技有限公司 Fair and efficient multi-dialogue system evaluation system and method
CN117271290B (en) * 2023-11-20 2024-02-20 北京智谱华章科技有限公司 Fair and efficient multi-dialogue system evaluation system and method
CN117909487A (en) * 2024-03-20 2024-04-19 北方健康医疗大数据科技有限公司 Medical question-answering service method, system, device and medium for old people

Also Published As

Publication number Publication date
CN111858888B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN111858888B (en) Multi-round dialogue system of check-in scene
CN108304468B (en) Text classification method and text classification device
CN108959242B (en) Target entity identification method and device based on part-of-speech characteristics of Chinese characters
CN108304372B (en) Entity extraction method and device, computer equipment and storage medium
CN110020424B (en) Contract information extraction method and device and text information extraction method
CN104503998B (en) For the kind identification method and device of user query sentence
CN110413972B (en) Intelligent table name field name complementing method based on NLP technology
CN115292463B (en) Information extraction-based method for joint multi-intention detection and overlapping slot filling
CN114757176A (en) Method for obtaining target intention recognition model and intention recognition method
CN112417823B (en) Chinese text word order adjustment and word completion method and system
CN114153971A (en) Error-containing Chinese text error correction, identification and classification equipment
CN111737990A (en) Word slot filling method, device, equipment and storage medium
TW202034207A (en) Dialogue system using intention detection ensemble learning and method thereof
CN113868422A (en) Multi-label inspection work order problem traceability identification method and device
CN117332789A (en) Semantic analysis method and system for dialogue scene
CN115878778A (en) Natural language understanding method facing business field
CN113609267B (en) Speech relation recognition method and system based on GCNDT-MacBERT neural network framework
CN117371534B (en) Knowledge graph construction method and system based on BERT
CN111178080A (en) Named entity identification method and system based on structured information
CN113378024B (en) Deep learning-oriented public inspection method field-based related event identification method
CN114065749A (en) Text-oriented Guangdong language recognition model and training and recognition method of system
CN113065352B (en) Method for identifying operation content of power grid dispatching work text
CN113486143A (en) User portrait generation method based on multi-level text representation and model fusion
CN117454898A (en) Method and device for realizing legal entity standardized output according to input text
CN112765977A (en) Word segmentation method and device based on cross-language data enhancement

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant