CN111414466A - Multi-round dialogue modeling method based on depth model fusion


Info

Publication number
CN111414466A
Authority
CN
China
Prior art keywords
model
layer
knowledge
dialog
conversation
Prior art date
Legal status
Pending
Application number
CN202010186401.6A
Other languages
Chinese (zh)
Inventor
周奕
周波
王天宇
张堃
李文俊
Current Assignee
Hangzhou Borazhe Technology Co ltd
Original Assignee
Hangzhou Borazhe Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Borazhe Technology Co ltd filed Critical Hangzhou Borazhe Technology Co ltd
Priority to CN202010186401.6A
Publication of CN111414466A

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention provides a multi-round dialogue modeling method based on depth model fusion, comprising a knowledge embedding model and a language embedding model that jointly predict the next dialogue state. In the prediction phase, when a sentence of user text input is received, the current dialogue state together with the n_history − 1 previous states is vectorized as model input, and the model calculates and predicts an action as the system response. In the training phase, the natural process of the conversation is organized into conversation stories for further training. The method can effectively utilize the context information of the conversation, realizes control of the whole conversation process with an end-to-end model, and effectively reduces the architectural complexity of the conversation management system.

Description

Multi-round dialogue modeling method based on depth model fusion
Technical Field
The invention relates to the technical field of learning models, in particular to a multi-round dialogue modeling method based on depth model fusion.
Background
In recent years, with the rapid development of artificial intelligence, many voice conversation robots have appeared, used in scenes such as mobile phone assistants, intelligent customer service, voice navigation and smart speakers. The core modules of these voice interaction systems generally include speech recognition, language understanding and dialog management. The dialog management model is responsible for tracking state changes over the whole conversation and controlling the direction of the dialog, and is key to whether multiple rounds of conversation can proceed correctly and smoothly. For dialogue modeling, the main approaches are the following:
(1) dialog system based on artificial template
The manual-template technique manually sets dialog scenes and writes targeted dialog templates for each scene; the templates describe the user's possible questions and the corresponding answer templates. Chat is restricted to a particular scene or topic, and a set of template rules is used to generate a response.
(2) Search-based dialog system
A retrieval-based chat robot uses a method similar to a search engine: a conversation library is stored in advance and indexed, and fuzzy matching against the library is performed on the user's question to find the most appropriate response.
(3) Deep learning-based dialog generation model
The application of deep learning to dialog generation is mainly oriented to open-domain chat robots, because large-scale general corpora are easy to obtain. Most approaches borrow the Sequence-to-Sequence model commonly used in machine translation, treating the whole question-to-reply process of dialog generation as a translation from a source language to a target language.
The above methods can achieve good results for a single round of dialog, but for multiple rounds a dialog process control module must be written using, for example, a finite state machine. When the dialog process is complex, the complexity of system state transitions grows rapidly and the dialog system becomes difficult to maintain.
Disclosure of Invention
In order to solve the defects of the prior art, the invention provides a multi-round dialogue modeling method based on deep model fusion, which adopts a deep learning model to model the whole multi-round dialogue system, realizes end-to-end training and effectively reduces the complexity of the system architecture.
A multi-round dialogue modeling method based on depth model fusion comprises a knowledge embedding model and a language embedding model, wherein the two models jointly predict the next dialogue state;
in the prediction phase, when a sentence of user text input is received, the current dialogue state together with the n_history − 1 previous states is vectorized as model input, and the model calculates and predicts an action as the system response; in the training phase, the natural process of the conversation is organized into conversation stories for further training.
Further, for a dialog scenario, domain knowledge is organized; the underlying domain knowledge is represented as:
K = (I, E, S, A)
wherein I is the possible intention of the user in the scene; e is an entity involved in the scene; s is a semantic slot which needs to be filled in the conversation process and is used for storing key information provided by a user in the conversation; a is the machine-side executable action.
Furthermore, the knowledge embedding model adopts a transformer-like model: the domain knowledge is processed by a featurizer layer to obtain a feature matrix, the feature matrix is processed by a coding layer to obtain the knowledge embedding matrix, and the knowledge embedding matrix then enters the model fusion layer.
Further, for the featurizer layer: each dialog story is divided into dialog states at several time steps, and a single time step's dialog state is the feature vector formed by concatenating the one-hot codes of the dialog state elements:
V_state = Concat(V_I, V_E, V_S, V_A)
wherein V_I, V_E, V_S, V_A are the one-hot vectors corresponding to I, E, S and A in the domain knowledge model;
the dialog story is the feature matrix obtained by stacking the dialog states of each time step:
M_story = Stack(V_state(1), V_state(2), …, V_state(T))
the coding layer is composed of n identical multi-head attention layers, each multi-head attention layer is composed of two sub-layers which are respectively composed of a multi-head attention mechanism and a fully-connected feedforward neural network, the input and the output of each sub-layer have a short-cut connection, and then layer _ norm is executed, so that the output of each sub-layer can be expressed as:
sub_layer_output=LayerNorm(x+(SubLayer(x)))
the multi-head attention mechanism layer calculates the attention of the input layer through h different linear transformations, where each self-attention calculation is as follows:
Figure BDA0002414351520000023
wherein X is input, Q ═ XWQ,K=XWK,V=XWV,dkIs the dimension of K;
the multi-head attention calculation formula is as follows:
MultiHead(X, Q, K, V) = Concat(head_1, head_2, …, head_h) W_O
the output of the multi-head attention layer is provided with a nonlinear transformation by using a fully-connected neural network:
Figure BDA0002414351520000024
wherein gelu is an activation function;
the output of the coding layer is a knowledge embedding matrix EKnowledge
Further, pre-training: a domain text corpus is obtained and a language model is pre-trained on it, yielding the domain language model M_D.
An input layer: the text T_i of each dialog story at each time step is one-hot encoded using a dictionary to generate a corresponding sentence vector:
V_Ti = oneHot(T_i);
the feature matrix of the entire dialog story is then:
M_Tstory = Stack(V_T1, V_T2, …, V_TT);
and (3) coding layer: inputting the characteristic matrix into 1) the language model to generate a corresponding story embedding matrix:
Etext=MD(MIstory)。
Further, after the knowledge embedding matrix and the text embedding matrix pass through a fusion layer, the probability of each action is calculated with softmax, and the action with the highest probability is taken as the next action:
next_action = argmax(softmax(projection(E_text, E_Knowledge))).
compared with the prior art, the invention has the advantages that:
the context information of the conversation can be effectively utilized, the control of the whole conversation process is realized by utilizing an end-to-end model, and the complexity of the framework of the conversation management system is effectively reduced.
Drawings
Fig. 1 is an application scenario diagram provided in an embodiment of the present invention;
FIG. 2 is a block diagram of a voice interaction system based on an end-to-end model according to an embodiment of the present invention;
FIG. 3 is an example of a domain knowledge organization provided by an embodiment of the invention;
FIG. 4 is an example of a conversation story provided by an embodiment of the invention;
FIG. 5 is an example of language understanding data provided by an embodiment of the present invention;
FIG. 6 is an example of a domain language model provided by an embodiment of the present invention;
FIG. 7 is an example of a voice annotation provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1, as shown in fig. 1 to 7:
the speech model comprises a knowledge embedding model and a language embedding model, which jointly predict the next dialog state, and in the prediction phase, when a sentence of user text input is received, the current dialog state and nhistory-1 previous state is vectorized as model input, the model calculates and predicts an action as system response, and in the training phase, the natural process of the dialog is organized as dialog story for further training.
First: representation of domain knowledge
For a dialog scenario, domain knowledge is organized, with the underlying domain knowledge represented as:
K = (I, E, S, A)
wherein I is the possible intention of the user in the scene; e is an entity involved in the scene; s is a semantic slot which needs to be filled in the conversation process and is used for storing key information provided by a user in the conversation; a is the machine-side executable action.
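The domain-knowledge tuple K = (I, E, S, A) can be sketched as a plain data structure. The weather-query scene and every name below are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of domain knowledge K = (I, E, S, A) for a hypothetical
# weather-query scene; all names are illustrative, not from the patent.
domain_knowledge = {
    "intents":  ["greet", "ask_weather", "goodbye"],                      # I: user intentions
    "entities": ["city", "date"],                                         # E: entities in the scene
    "slots":    ["city", "date"],                                         # S: semantic slots to fill
    "actions":  ["utter_greet", "action_query_weather", "utter_goodbye"], # A: machine-side actions
}

# The one-hot dimensions used by the featurizer follow from these sizes.
sizes = {k: len(v) for k, v in domain_knowledge.items()}
print(sizes)  # {'intents': 3, 'entities': 2, 'slots': 2, 'actions': 3}
```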
Writing a dialogue story: simulating a natural dialogue process, a man-machine dialogue template is written for model training; an interactive training program is then used to carry out man-machine dialogue in real time and record the dialogue process, forming dialogue stories and quickly completing training-data preparation.
The dialogue stories form two types of samples: knowledge samples and natural language samples.
II, secondly: knowledge embedding model
The knowledge embedding model adopts a transformer-like model, as follows.
Featurizer layer
Each dialog story is divided into dialog states at several time steps; a single time step's dialog state is the feature vector formed by concatenating the one-hot codes of the dialog state elements:
V_state = Concat(V_I, V_E, V_S, V_A)
wherein V_I, V_E, V_S, V_A are the one-hot vectors corresponding to I, E, S and A in the domain knowledge model.
The dialog story is the feature matrix obtained by stacking the dialog states of each time step:
M_story = Stack(V_state(1), V_state(2), …, V_state(T))
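The featurizer layer above can be sketched as follows: per time step, the one-hot codes of the state elements are concatenated into V_state, and the per-step vectors are stacked into the story feature matrix. The element sizes and state indices below are illustrative assumptions:

```python
import numpy as np

def one_hot(index, size):
    """One-hot encode a single categorical value."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

def featurize_state(intent, entity, slot, action, sizes):
    """V_state = Concat(V_I, V_E, V_S, V_A) for one time step."""
    return np.concatenate([
        one_hot(intent, sizes["I"]),
        one_hot(entity, sizes["E"]),
        one_hot(slot,   sizes["S"]),
        one_hot(action, sizes["A"]),
    ])

# Stack the per-step state vectors into the story feature matrix M_story.
sizes = {"I": 3, "E": 2, "S": 2, "A": 3}                # illustrative element counts
states = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 0, 0, 2)]     # illustrative (I, E, S, A) indices
M_story = np.stack([featurize_state(*s, sizes) for s in states])
print(M_story.shape)  # (time steps, |I|+|E|+|S|+|A|) -> (3, 10)
```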
and (3) coding layer:
The coding layer is composed of n identical multi-head attention layers. Each multi-head attention layer consists of two sub-layers: a multi-head attention mechanism and a fully-connected feedforward neural network. The input and output of each sub-layer have a shortcut connection, followed by layer normalization, so the output of each sub-layer can be expressed as:
sub_layer_output = LayerNorm(x + SubLayer(x))
Multi-head attention mechanism layer: this layer computes attention over the input through h different linear transformations, where each self-attention is calculated as:
Attention(Q, K, V) = softmax(QK^T / √d_k) V
wherein X is the input, Q = XW_Q, K = XW_K, V = XW_V, and d_k is the dimension of K.
The multi-head attention calculation formula is as follows:
MultiHead(X, Q, K, V) = Concat(head_1, head_2, …, head_h) W_O
fully-connected feedforward neural network
A fully-connected neural network provides a nonlinear transformation on the output of the multi-head attention layer:
FFN(x) = gelu(xW_1 + b_1) W_2 + b_2
Wherein gelu is an activation function;
the output of the coding layer is a knowledge embedding matrix EKnowledge
Thirdly, the method comprises the following steps: text embedding model
Based on models such as BERT, GPT and ALBERT, a pre-trained language model is obtained from massive domain texts. This pre-trained model vectorizes the natural language samples of the dialogue stories, and the resulting features jointly predict the next action together with the knowledge embedding model.
Pre-training
A domain text corpus, as large as possible, is obtained and the language model is pre-trained on it to obtain the domain language model M_D.
Input layer
As in the featurizer layer above, the text T_i at each time step of each dialogue story is one-hot encoded using a dictionary to generate a corresponding sentence vector:
V_Ti = oneHot(T_i)
The feature matrix of the entire conversational story is:
M_Tstory = Stack(V_T1, V_T2, …, V_TT)
Coding layer
The feature matrix is input into the pre-trained language model to generate the corresponding story embedding matrix:
E_text = M_D(M_Tstory)
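The input layer of the text embedding model can be sketched as follows. The dictionary and texts are illustrative stand-ins; a pre-trained domain model M_D would then map M_Tstory to E_text (not shown here):

```python
import numpy as np

# Illustrative dictionary; a real system would build this from the domain corpus.
vocab = {"hello": 0, "weather": 1, "today": 2, "bye": 3}

def one_hot_sentence(text, vocab):
    """V_Ti = oneHot(T_i): one-hot (bag-of-words) encoding over the dictionary."""
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            v[vocab[tok]] = 1.0
    return v

# Stack the per-step sentence vectors into the story text feature matrix M_Tstory.
story_texts = ["hello", "weather today", "bye"]   # one user text per time step
M_Tstory = np.stack([one_hot_sentence(t, vocab) for t in story_texts])
print(M_Tstory.shape)  # (3, 4)
```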
Thirdly, the method comprises the following steps: fusion layer
After the knowledge embedding matrix and the text embedding matrix pass through a fusion layer, the probability of each action is calculated with softmax, and the action with the highest probability is taken as the next action:
next_action = argmax(softmax(projection(E_text, E_Knowledge))).
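A minimal sketch of this fusion step. The patent does not spell out the projection's internals, so a single linear layer over the concatenated last-step embeddings is assumed here; all shapes and weights are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_action(E_text, E_knowledge, W_proj):
    """next_action = argmax(softmax(projection(E_text, E_Knowledge))).

    Assumption: the projection is one linear layer over the concatenated
    last-time-step rows of the two embedding matrices.
    """
    fused = np.concatenate([E_text[-1], E_knowledge[-1]])  # last time step
    probs = softmax(fused @ W_proj)                        # one score per action
    return int(np.argmax(probs)), probs

rng = np.random.default_rng(1)
d, n_actions = 8, 3                         # illustrative embedding width / action count
E_text = rng.normal(size=(3, d))            # stand-in story embedding matrix
E_knowledge = rng.normal(size=(3, d))       # stand-in knowledge embedding matrix
W_proj = rng.normal(size=(2 * d, n_actions))
action_id, probs = next_action(E_text, E_knowledge, W_proj)
print(action_id, probs.sum())               # chosen action index; probabilities sum to 1
```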
in the present embodiment, all the components are general standard components or components known to those skilled in the art, and the structure and principle thereof can be known to those skilled in the art through technical manuals or through routine experiments.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the first and second features are in direct contact, or that they are not in direct contact but contact each other via another feature. Moreover, a first feature "above," "over," or "on" a second feature includes the first feature being directly above or obliquely above the second feature, or merely means that the first feature is at a higher level than the second feature; a first feature "below," "under," or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely means that the first feature is at a lower level than the second feature.
In the description of the present specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention, and schematic representations of the terms in this specification do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that those skilled in the art can make changes, modifications, substitutions and alterations to the above embodiments without departing from the principles and spirit of the present invention.

Claims (6)

1. A multi-round dialogue modeling method based on depth model fusion is characterized in that: the method comprises a knowledge embedding model and a language embedding model, wherein the knowledge embedding model and the language embedding model jointly predict a next dialogue state;
in the prediction phase, when a sentence of user text input is received, the current dialogue state together with the n_history − 1 previous states is vectorized as model input, and the model calculates and predicts an action as the system response; in the training phase, the natural process of the conversation is organized into conversation stories for further training.
2. The multi-turn dialogue modeling method based on depth model fusion of claim 1, wherein: for a dialog scenario, domain knowledge is organized, and the underlying domain knowledge is represented as:
K = (I, E, S, A)
wherein I is the possible intention of the user in the scene; e is an entity involved in the scene; s is a semantic slot which needs to be filled in the conversation process and is used for storing key information provided by a user in the conversation; a is the machine-side executable action.
3. The multi-turn dialogue modeling method based on depth model fusion of claim 1, wherein:
the knowledge embedding model adopts a transformer-like model; the domain knowledge passes through a featurizer layer to obtain a feature matrix, the feature matrix passes through a coding layer to obtain the knowledge embedding matrix, and the knowledge embedding matrix then enters the model fusion layer.
4. The knowledge embedding model of claim 2, wherein: for the featurizer layer: each dialog story is divided into dialog states at several time steps, and a single time step's dialog state is the feature vector formed by concatenating the one-hot codes of the dialog state elements:
V_state = Concat(V_I, V_E, V_S, V_A)
wherein V_I, V_E, V_S, V_A are the one-hot vectors corresponding to I, E, S and A in the domain knowledge model;
the dialog story is the feature matrix obtained by stacking the dialog states of each time step:
M_story = Stack(V_state(1), V_state(2), …, V_state(T))
the coding layer is composed of n identical multi-head attention layers; each multi-head attention layer consists of two sub-layers, a multi-head attention mechanism and a fully-connected feedforward neural network; the input and output of each sub-layer have a shortcut connection, followed by layer normalization, so the output of each sub-layer can be expressed as:
sub_layer_output = LayerNorm(x + SubLayer(x))
the multi-head attention mechanism layer computes attention over the input through h different linear transformations, where each self-attention is calculated as:
Attention(Q, K, V) = softmax(QK^T / √d_k) V
wherein X is the input, Q = XW_Q, K = XW_K, V = XW_V, and d_k is the dimension of K;
the multi-head attention calculation formula is as follows:
MultiHead(X, Q, K, V) = Concat(head_1, head_2, …, head_h) W_O
the output of the multi-head attention layer is given a nonlinear transformation by a fully-connected neural network:
FFN(x) = gelu(xW_1 + b_1) W_2 + b_2
wherein gelu is an activation function;
the output of the coding layer is the knowledge embedding matrix E_Knowledge.
5. The multi-turn dialogue modeling method based on depth model fusion of claim 2, wherein:
pre-training: obtaining a domain text, pre-training a language model, and obtaining a domain language model MD
an input layer: the text T_i of each dialog story at each time step is one-hot encoded using a dictionary to generate a corresponding sentence vector:
V_Ti = oneHot(T_i);
the feature matrix of the entire dialog story is then:
M_Tstory = Stack(V_T1, V_T2, …, V_TT);
a coding layer: the feature matrix is input into the pre-trained language model to generate the corresponding story embedding matrix:
E_text = M_D(M_Tstory).
6. the multi-turn dialogue modeling method based on depth model fusion of claim 2, wherein: after the knowledge embedding matrix and the text embedding matrix pass through a fusion layer, calculating the probability of action by utilizing softmax, wherein the highest probability is the next action:
next_action=argmax(softmax(projection(Etext,EKnowledge)))。
CN202010186401.6A 2020-03-17 2020-03-17 Multi-round dialogue modeling method based on depth model fusion Pending CN111414466A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010186401.6A CN111414466A (en) 2020-03-17 2020-03-17 Multi-round dialogue modeling method based on depth model fusion


Publications (1)

Publication Number Publication Date
CN111414466A true CN111414466A (en) 2020-07-14

Family

ID=71491252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010186401.6A Pending CN111414466A (en) 2020-03-17 2020-03-17 Multi-round dialogue modeling method based on depth model fusion

Country Status (1)

Country Link
CN (1) CN111414466A (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140257793A1 (en) * 2013-03-11 2014-09-11 Nuance Communications, Inc. Communicating Context Across Different Components of Multi-Modal Dialog Applications
US20180137854A1 (en) * 2016-11-14 2018-05-17 Xerox Corporation Machine reading method for dialog state tracking
US20180260384A1 (en) * 2012-04-20 2018-09-13 Maluuba Inc. Conversational agent
WO2019132135A1 (en) * 2017-12-26 2019-07-04 주식회사 머니브레인 Interactive ai agent system and method for actively monitoring and intervening in dialogue session between users, and computer readable recording medium
CN110188167A (en) * 2019-05-17 2019-08-30 北京邮电大学 A kind of end-to-end session method and system incorporating external knowledge
CN110399460A (en) * 2019-07-19 2019-11-01 腾讯科技(深圳)有限公司 Dialog process method, apparatus, equipment and storage medium
CN110503550A (en) * 2019-07-23 2019-11-26 周奕 A kind of stock certificate data analysis system
CN110704588A (en) * 2019-09-04 2020-01-17 平安科技(深圳)有限公司 Multi-round dialogue semantic analysis method and system based on long-term and short-term memory network
CN110704641A (en) * 2019-10-11 2020-01-17 零犀(北京)科技有限公司 Ten-thousand-level intention classification method and device, storage medium and electronic equipment
US20200042642A1 (en) * 2018-08-02 2020-02-06 International Business Machines Corporation Implicit dialog approach for creating conversational access to web content
CN110795549A (en) * 2019-10-31 2020-02-14 腾讯科技(深圳)有限公司 Short text conversation method, device, equipment and storage medium
CN110838288A (en) * 2019-11-26 2020-02-25 杭州博拉哲科技有限公司 Voice interaction method and system and dialogue equipment
WO2020051192A1 (en) * 2018-09-06 2020-03-12 Google Llc Dialogue systems


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨成彪 et al., "An intent recognition method for multi-turn dialogue based on memory networks", pages 194 - 195 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515617A (en) * 2021-07-30 2021-10-19 中央财经大学 Method, device and equipment for generating model by conversation
CN115310429A (en) * 2022-08-05 2022-11-08 厦门靠谱云股份有限公司 Data compression and high-performance calculation method in multi-turn listening dialogue model
CN115310429B (en) * 2022-08-05 2023-04-28 厦门靠谱云股份有限公司 Data compression and high-performance calculation method in multi-round listening dialogue model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination