CN113672714A - Multi-turn dialogue device and method - Google Patents


Info

Publication number
CN113672714A
CN113672714A (application CN202110958910.0A)
Authority
CN
China
Prior art keywords
module, turn, data, conversation, dialogue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110958910.0A
Other languages
Chinese (zh)
Inventor
曾祥云
朱姬渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tianchen Health Technology Co ltd
Original Assignee
Shanghai Dashanlin Medical Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dashanlin Medical Health Technology Co ltd filed Critical Shanghai Dashanlin Medical Health Technology Co ltd
Priority to CN202110958910.0A priority Critical patent/CN113672714A/en
Publication of CN113672714A publication Critical patent/CN113672714A/en
Pending legal-status Critical Current

Classifications

    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3343 Query execution using phonetics
    • G06F16/3344 Query execution using natural language analysis
    • G06F16/3346 Query execution using probabilistic model
    • G06F40/194 Calculation of difference between files
    • G06F40/30 Semantic analysis
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a multi-turn dialogue device and method. The device comprises a data processing module, a representation module, a feature extraction module, a question-answer feature similarity module and an objective function module, wherein: the data processing module analyses historical multi-turn dialogue data to obtain input data; the representation module maps the input data to obtain a sentence vector set; the feature extraction module analyses the sentence vector set; the question-answer feature similarity module processes the sentence vector set to obtain a scoring matrix; and the objective function module sets an objective function suited to the multi-turn dialogue device according to the scoring matrix. Even when the sample size is small, the multi-turn dialogue device can learn good contextual features, so it predicts the user's questions and provides answers more accurately; its network structure is simple, realizing a lightweight, memory- and energy-efficient model.

Description

Multi-turn dialogue device and method
Technical Field
The invention relates to the field of natural language processing, and in particular to a multi-turn dialogue device and method.
Background
A general-purpose pre-trained language model, such as the BERT model, is a multi-layer bidirectional Transformer network trained by self-supervised learning on massive corpora. The feature representations obtained from BERT greatly improve the accuracy of natural language processing tasks, but every layer of BERT applies self-attention, so the model's overall complexity is O(n²) and it requires a large amount of machine resources.
In a multi-turn chat system with contextual associations, BERT performs poorly. Beyond its heavy computation, low speed and high training cost, its biggest defect is that it is trained on general corpora: the semantic features learned from general corpora lack strongly related contextual dialogue information, and because those corpora are mostly document-based and short of dialogue data, using BERT in multi-turn dialogue does not improve the robot's natural language understanding or the accuracy of its intent judgement. Especially for spoken language, specific scenarios or professional domain knowledge, a multi-turn chat system relying on semantic association between adjacent sentences has limited expressive power and low accuracy.
Disclosure of Invention
The invention provides a multi-turn dialogue device and a multi-turn dialogue method for solving the technical problems in the prior art.
To achieve the above object, the present invention provides a multi-turn dialogue device comprising a data processing module, a representation module, a feature extraction module, a question-answer feature similarity module and an objective function module, wherein:
the data processing module is used for analyzing the multi-round conversation data of the historical chat to obtain input data: the above dialogue text data, question data, and answer data;
the representation module is used for mapping input data to obtain a sentence vector set;
the feature extraction module is used for analyzing the sentence vector set to obtain the above feature vector, the question feature vector and the answer feature vector;
the question-answer feature similarity module is used for processing the above feature vectors, the question feature vectors and the answer feature vectors to obtain a scoring matrix;
and the objective function module is used for setting an objective function suited to the multi-turn dialogue device according to the scoring matrix.
Further, the mapping the input data by the characterization module comprises:
dividing each sentence of the above dialogue text data, the question data and the answer data into words;
the position of each word is represented by ID;
representing each ID by a random vector with N dimensions;
and obtaining a sentence vector set.
Further, the question-answer feature similarity module obtains the scoring matrix by:
splicing and summing the above feature vectors and the question feature vectors;
and matrix-multiplying the answer feature vectors with the spliced features to obtain the scoring matrix.
Further, the objective function module obtains the objective function by taking softmax as the activation function and cross entropy as the loss function.
Furthermore, the feature extraction module is formed by stacking a plurality of dual-encoder modules, each dual-encoder module being structured as a self-attention layer, a normalization layer, a feed-forward neural network layer and a normalization layer connected in sequence.
Further, each normalization layer normalizes the output vector after it is added to the input vector through a residual connection.
The invention also discloses a multi-turn dialogue method, which is applied to a multi-turn dialogue device and comprises the following steps:
converting the current input sound of the user into a natural language text;
inputting a multi-turn dialogue device by combining the historical dialogue state and the current natural language text;
predicting the current conversation state by the multi-turn conversation device according to the historical conversation state and the current natural language text;
outputting corresponding system behaviors according to the current conversation state;
converting the system behavior into natural language text or voice to form a round of conversation;
waiting for the next round of voice input by the user to carry out the next round of conversation;
the multi-turn dialog device is any one of the above multi-turn dialog devices.
The invention also discloses a multi-turn dialogue method, which is applied to a multi-turn dialogue device and comprises the following steps:
receiving a natural language text input by a current user;
combining the historical dialogue state and the current natural language text information to input a multi-turn dialogue device;
predicting the current conversation state by the multi-turn conversation device according to the historical conversation state and the current natural language text;
outputting corresponding system behaviors according to the current conversation state;
converting the system behavior into natural language text or voice to form a round of conversation;
waiting for the natural language text input by the user in the next round to carry out the next round of conversation;
the multi-turn dialog device is any one of the above multi-turn dialog devices.
The present invention also discloses an electronic device comprising a processor, a storage medium and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the multi-turn dialogue method.
The invention also discloses a storage medium, wherein a computer program is stored on the storage medium, and the computer program is executed by a processor to execute the multi-turn dialogue method.
In practical applications, the modules in the method and system disclosed by the invention can be deployed on one target server, or each module can be deployed independently on a different target server; in particular, to provide stronger computing power, the modules can be deployed on a cluster of target servers as needed.
Therefore, even when the sample size is not large, the multi-turn dialogue device can learn good contextual features, so it predicts the user's questions and provides answers more accurately. The network structure is simple, realizing a lightweight, memory- and energy-efficient model that can be trained on a small amount of dialogue corpus, further improving the robot's natural language understanding.
In order that the invention may be more clearly and fully understood, specific embodiments thereof are described in detail below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic structural diagram of a multi-turn dialog apparatus according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an embodiment of a multi-turn dialog apparatus according to the present application.
Wherein: 1 is the data processing module, 2 the representation module, 3 the feature extraction module, 4 the question-answer feature similarity module, and 5 the objective function module.
Detailed Description
Referring to fig. 1, which shows a schematic structural diagram of the multi-turn dialogue device:
the device puts the historical context, the current question and the corresponding reply content of the historical multi-turn dialogue into a shared Transformer encoder to obtain representations of the context, the question and the answer. It then fuses the context and current-question representations into a fused feature, interacts the fused feature with the answer representation to obtain a similarity matrix, and finally predicts the turn number of the next dialogue turn from the similarity-matrix representation to construct the objective function.
The multi-turn dialogue device constructed as described above can learn good contextual features even when the sample size is not large, and can therefore predict the user's questions and provide answers more accurately.
As one implementation, the multi-turn dialogue device in the embodiment of the present application comprises a data processing module, a representation module, a feature extraction module, a question-answer feature similarity module and an objective function module, where:
the data processing module splits and analyzes the multi-round conversation data of the historical chat, divides the multi-round conversation data into the text (namely historical conversation content), the questions and the answers, and constructs the text data, the question data and the answer data of the conversation, the human questions and the robot replies, namely the text data, the question data and the answer data of the conversation, and the text data, the question data and the answer data are used as the input data of the model or used for training the model.
A human question is a question posed by a user to the chat robot or intelligent customer-service question-answering system; the robot's answer data is the reply given by the chat robot or intelligent customer-service question-answering system according to the user's question.
The representation module maps or converts the input data, namely the above dialogue text data, the question data and the answer data, to obtain a sentence vector set.
As a preferred embodiment, this is realised as follows:
each sentence of the above dialogue text data, question data and answer data is first segmented into words; the position or address of each word is then represented by an ID; and each ID is represented by an N-dimensional (for example, 512-dimensional) random vector. This constructs a sentence vector set of the above dialogue text data, question data and answer data, which can then be input into the feature extraction module for feature extraction.
In this embodiment, the position or address of each word is represented by an ID so that, during pre-training, the multi-turn dialogue device can select words to mask for predictive training. In a concrete implementation, a probability scheme can be designed so that each ID is selected with a certain probability, or at random, and replaced by a mask; the words before and after the masked word are then used to guess what the masked word is.
In addition, the turn number of the current conversation and the masked words are used as labels at the sentence level and the word level, respectively.
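The masking selection described above can be sketched as follows. The 15% probability and the `MASK_ID` value are assumptions of this sketch, not values taken from the application; unmasked positions receive the label -1 to mark that they are not predicted.

```python
import random

MASK_ID = 0        # assumed ID reserved for the mask token
MASK_PROB = 0.15   # assumed masking probability

def mask_ids(ids, prob=MASK_PROB, seed=42):
    """Replace each ID with MASK_ID with probability `prob`;
    keep the original ID as the prediction label where masked."""
    rnd = random.Random(seed)
    masked, labels = [], []
    for i in ids:
        if rnd.random() < prob:
            masked.append(MASK_ID)   # occlude the word
            labels.append(i)         # original ID becomes the word label
        else:
            masked.append(i)
            labels.append(-1)        # -1 = position not predicted
    return masked, labels

masked, labels = mask_ids([5, 9, 12, 3, 7, 21])
print(masked, labels)
```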
The feature extraction module is used for analyzing the sentence vector set to obtain the above feature vector, the question feature vector and the answer feature vector.
As a preferred implementation, the feature extraction module is formed by stacking a plurality of dual-encoder modules: a dual encoder of four shared layers is adopted, each layer using a self-attention mechanism, and each dual encoder consists of a self-attention layer, a normalization layer, a feed-forward neural network layer and a normalization layer connected in sequence.
In this embodiment, the feature extraction module uses a self-attention layer as part of its technical solution. The self-attention mechanism fully considers the semantic and grammatical relations between different words in a sentence, and the word vectors computed in this way further capture relations across the context. For example, in the sentence "the bird can fly in the blue sky because it has wings", the machine can link "it" with "bird"; in a multi-turn dialogue system, the semantics of the context can thus be better understood.
A normalization layer is provided after both the self-attention layer and the feed-forward neural network layer. Normalization keeps the features distributed in a small, controllable value range, reducing the search space; this not only speeds up training but also improves its stability, letting the multi-turn dialogue device of the present application converge faster with higher accuracy. The application thereby realises a lightweight system whose multi-turn dialogue device can understand and predict context well even when the sample size is not large.
In a more preferred embodiment, the normalization layer normalizes the output vector after it has been added to the input vector through a residual connection. The technical effect of the residual connection is to prevent information loss when the number of network layers is large, further improving the accuracy of the present application.
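The dual-encoder layer described above (self-attention, residual addition and normalization, feed-forward network, residual addition and normalization) can be sketched numerically as follows. The dimensions, weight scales, single attention head and ReLU feed-forward are illustrative assumptions, not specifics from the application.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each token vector to zero mean, unit variance."""
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def encoder_block(x, params):
    Wq, Wk, Wv, W1, W2 = params
    # self-attention -> residual add -> layer norm
    x = layer_norm(x + self_attention(x, Wq, Wk, Wv))
    # feed-forward (ReLU) -> residual add -> layer norm
    ff = np.maximum(0, x @ W1) @ W2
    return layer_norm(x + ff)

rng = np.random.default_rng(1)
d = 16
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)] + \
         [rng.standard_normal((d, 4 * d)) * 0.1,
          rng.standard_normal((4 * d, d)) * 0.1]
x = rng.standard_normal((5, d))   # 5 tokens, d-dim each
out = x
for _ in range(4):                # 4 stacked shared layers, as described
    out = encoder_block(out, params)
print(out.shape)
```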
The question-answer feature similarity module is used for processing the above feature vectors, the question feature vectors and the answer feature vectors to obtain a scoring matrix.
Referring to fig. 2, as a preferred implementation, the question-answer feature similarity module of this embodiment fuses the above feature vectors and the question feature vectors, that is, splices and sums them, obtaining richer semantic features: the global features of the above dialogue together with the local features of the current question.
The answer feature vector is then matrix-multiplied with the spliced context-and-question features to obtain the scoring matrix. The purpose of the scoring matrix is to draw the combined feature, formed from the spliced context and question features, closer in distribution to the answer feature, improving the representation capability of the multi-turn dialogue device; the combined feature also serves as the input feature of the cross-entropy loss function with dynamic turn-number labels.
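As a non-authoritative sketch, assuming that "splicing and summing" amounts to element-wise addition of the context and question feature vectors (an interpretation, not a statement from the application), the scoring step can be illustrated as:

```python
import numpy as np

rng = np.random.default_rng(2)
d, batch = 16, 4   # illustrative feature size and batch of candidate answers

context_feat  = rng.standard_normal((batch, d))  # global "above" features
question_feat = rng.standard_normal((batch, d))  # local current-question features
answer_feat   = rng.standard_normal((batch, d))

fused = context_feat + question_feat   # fuse global and local features
scores = fused @ answer_feat.T         # (batch, batch) scoring matrix
print(scores.shape)
```

Entry (i, j) of `scores` measures how well fused context-question i matches candidate answer j.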
The objective function module sets an objective function suited to the multi-turn dialogue device according to the scoring matrix. In a preferred embodiment, the objective function of the present application is derived using softmax as the activation function and cross entropy as the loss function.
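A minimal sketch of such an objective follows, assuming an in-batch setup in which the true answer for each row lies on the diagonal of the scoring matrix; this diagonal convention is an assumption of the sketch, not stated in the application.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def cross_entropy_loss(scores):
    """Softmax over each row, cross entropy against the diagonal answer."""
    probs = softmax(scores)
    n = scores.shape[0]
    return -np.mean(np.log(probs[np.arange(n), np.arange(n)] + 1e-12))

good = cross_entropy_loss(np.eye(4) * 10.0)   # near-perfect diagonal scores
flat = cross_entropy_loss(np.zeros((4, 4)))   # uninformative uniform scores
print(good, flat)
```

A well-trained scoring matrix drives the loss toward zero, while uniform scores give the uniform-entropy baseline log(batch).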
Based on the multi-turn dialogue device of the above embodiment, the present application also discloses a multi-turn dialogue method, which includes the steps of:
converting the current input sound of the user into a natural language text;
inputting a multi-turn dialogue device by combining the historical dialogue state and the current natural language text;
predicting the current conversation state by the multi-turn conversation device according to the historical conversation state and the current natural language text;
outputting corresponding system behaviors according to the current conversation state;
converting the system behavior into natural language text or voice to form a round of conversation;
waiting for the next round of voice input by the user to carry out the next round of conversation;
the multi-turn dialog device used is the multi-turn dialog device of the above-described embodiment.
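The control flow of the method steps above can be sketched as follows. The helper callables (`speech_to_text`, `predict_state`, `state_to_action`, `render`) are hypothetical placeholders standing in for the ASR/TTS components and the multi-turn dialogue device, not APIs defined by this application.

```python
def dialogue_turn(history, audio, speech_to_text, predict_state,
                  state_to_action, render):
    text = speech_to_text(audio)          # step 1: sound -> natural language text
    state = predict_state(history, text)  # steps 2-3: predict current dialogue state
    action = state_to_action(state)       # step 4: choose a system behaviour
    reply = render(action)                # step 5: behaviour -> text or voice
    history.append((text, state, reply))  # one completed round of conversation
    return reply

# Toy stand-ins so the sketch runs end to end:
history = []
reply = dialogue_turn(
    history, b"...",
    speech_to_text=lambda a: "hello",
    predict_state=lambda h, t: {"intent": "greet"},
    state_to_action=lambda s: "greet_back",
    render=lambda a: "Hi, how can I help?",
)
print(reply)
```

The loop then waits for the next user input and repeats with the updated history.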
In addition, based on the above embodiment, a variation of the multi-turn dialog method includes:
receiving a natural language text input by a current user;
combining the historical dialogue state and the current natural language text information to input a multi-turn dialogue device;
predicting the current conversation state by the multi-turn conversation device according to the historical conversation state and the current natural language text;
outputting corresponding system behaviors according to the current conversation state;
converting the system behavior into natural language text or voice to form a round of conversation;
waiting for the natural language text input by the user in the next round to carry out the next round of conversation;
the multi-turn dialog device used is the multi-turn dialog device of the above-described embodiment.
The present application further provides an electronic device comprising a processor, a storage medium and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the storage medium through the bus and executes the machine-readable instructions to perform the method of the above embodiments.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the method as described in the above embodiments.
It should be noted that all or part of the steps in the methods of the above embodiments may be implemented by a computer program instructing the related hardware; the program may be stored in a computer-readable storage medium, which may include, but is not limited to, read-only memory (ROM), random access memory (RAM), magnetic disks or optical disks.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A multi-turn dialogue device, characterized by comprising a data processing module, a representation module, a feature extraction module, a question-answer feature similarity module and an objective function module, wherein:
the data processing module is used for analyzing the multi-round conversation data of the historical chat to obtain input data: the above dialogue text data, question data, and answer data;
the representation module is used for mapping input data to obtain a sentence vector set;
the feature extraction module is used for analyzing the sentence vector set to obtain the above feature vector, the question feature vector and the answer feature vector;
the question-answer feature similarity module is used for processing the above feature vectors, the question feature vectors and the answer feature vectors to obtain a scoring matrix;
and the objective function module is used for setting an objective function suited to the multi-turn dialogue device according to the scoring matrix.
2. The multi-turn dialog device of claim 1, wherein the characterization module mapping the input data comprises:
dividing each sentence of the above dialogue text data, the question data and the answer data into words;
the position of each word is represented by ID;
representing each ID by a random vector with N dimensions;
and obtaining a sentence vector set.
3. The multi-turn dialogue device of claim 1, wherein the question-answer feature similarity module obtains the scoring matrix by:
splicing and summing the above feature vectors and the question feature vectors;
and matrix-multiplying the answer feature vectors with the spliced features to obtain the scoring matrix.
4. The multi-turn dialogue device of claim 1, wherein the objective function module derives the objective function using softmax as the activation function and cross entropy as the loss function.
5. The multi-turn dialogue device of claim 1, wherein the feature extraction module is formed by stacking a plurality of dual-encoder modules, each dual-encoder module being structured as a self-attention layer, a normalization layer, a feed-forward neural network layer and a normalization layer connected in sequence.
6. The multi-turn dialogue device of claim 5, wherein the normalization layer normalizes the output vector after it has been added to the input vector through a residual connection.
7. A multi-turn dialogue method is applied to a multi-turn dialogue device and comprises the following steps:
converting the current input sound of the user into a natural language text;
inputting a multi-turn dialogue device by combining the historical dialogue state and the current natural language text;
predicting the current conversation state by the multi-turn conversation device according to the historical conversation state and the current natural language text;
outputting corresponding system behaviors according to the current conversation state;
converting the system behavior into natural language text or voice to form a round of conversation;
waiting for the next round of voice input by the user to carry out the next round of conversation;
characterized in that the multi-turn dialog device is the multi-turn dialog device of any of claims 1-6.
8. A multi-turn dialogue method is applied to a multi-turn dialogue device and comprises the following steps:
receiving a natural language text input by a current user;
combining the historical dialogue state and the current natural language text information to input a multi-turn dialogue device;
predicting the current conversation state by the multi-turn conversation device according to the historical conversation state and the current natural language text;
outputting corresponding system behaviors according to the current conversation state;
converting the system behavior into natural language text or voice to form a round of conversation;
waiting for the natural language text input by the user in the next round to carry out the next round of conversation;
characterized in that the multi-turn dialog device is the multi-turn dialog device of any of claims 1-6.
9. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the method of claim 7 or 8.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method according to claim 7 or 8.
CN202110958910.0A 2021-08-20 2021-08-20 Multi-turn dialogue device and method Pending CN113672714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110958910.0A CN113672714A (en) 2021-08-20 2021-08-20 Multi-turn dialogue device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110958910.0A CN113672714A (en) 2021-08-20 2021-08-20 Multi-turn dialogue device and method

Publications (1)

Publication Number Publication Date
CN113672714A true CN113672714A (en) 2021-11-19

Family

ID=78544173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110958910.0A Pending CN113672714A (en) 2021-08-20 2021-08-20 Multi-turn dialogue device and method

Country Status (1)

Country Link
CN (1) CN113672714A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115422950A (en) * 2022-09-01 2022-12-02 美的集团(上海)有限公司 Method and device for evaluating dialog system, electronic equipment and storage medium
CN115952272A (en) * 2023-03-10 2023-04-11 杭州心识宇宙科技有限公司 Method, device and equipment for generating dialogue information and readable storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN106503805A (en) * 2016-11-14 2017-03-15 合肥工业大学 A kind of bimodal based on machine learning everybody talk with sentiment analysis system and method
CN106997342A (en) * 2017-03-27 2017-08-01 上海奔影网络科技有限公司 Intension recognizing method and device based on many wheel interactions
CN108170764A (en) * 2017-12-25 2018-06-15 上海大学 A kind of man-machine more wheel dialog model construction methods based on scene context
CN109101545A (en) * 2018-06-29 2018-12-28 北京百度网讯科技有限公司 Natural language processing method, apparatus, equipment and medium based on human-computer interaction
CN110008322A (en) * 2019-03-25 2019-07-12 阿里巴巴集团控股有限公司 Art recommended method and device under more wheel session operational scenarios
CN110309283A (en) * 2019-06-28 2019-10-08 阿里巴巴集团控股有限公司 A kind of answer of intelligent answer determines method and device
CN112527986A (en) * 2020-12-10 2021-03-19 平安科技(深圳)有限公司 Multi-round dialog text generation method, device, equipment and storage medium
CN112818105A (en) * 2021-02-05 2021-05-18 江苏实达迪美数据处理有限公司 Multi-turn dialogue method and system fusing context information



Similar Documents

Publication Publication Date Title
CN110427461B (en) Intelligent question and answer information processing method, electronic equipment and computer readable storage medium
CN117521675A (en) Information processing method, device, equipment and storage medium based on large language model
CN113127624B (en) Question-answer model training method and device
JP2023535709A (en) Language expression model system, pre-training method, device, device and medium
CN113987179A (en) Knowledge enhancement and backtracking loss-based conversational emotion recognition network model, construction method, electronic device and storage medium
WO2023137911A1 (en) Intention classification method and apparatus based on small-sample corpus, and computer device
CN108416032A (en) A kind of file classification method, device and storage medium
CN111460132A (en) Generation type conference abstract method based on graph convolution neural network
CN113672714A (en) Multi-turn dialogue device and method
CN112860871B (en) Natural language understanding model training method, natural language understanding method and device
CN112925904A (en) Lightweight text classification method based on Tucker decomposition
CN111858898A (en) Text processing method and device based on artificial intelligence and electronic equipment
CN112905772A (en) Semantic correlation analysis method and device and related products
CN117725163A (en) Intelligent question-answering method, device, equipment and storage medium
CN118378148A (en) Training method of multi-label classification model, multi-label classification method and related device
CN112989843B (en) Intention recognition method, device, computing equipment and storage medium
CN113420111A (en) Intelligent question-answering method and device for multi-hop inference problem
Sawant et al. Analytical and Sentiment based text generative chatbot
CN116108856B (en) Emotion recognition method and system based on long and short loop cognition and latent emotion display interaction
CN109002498B (en) Man-machine conversation method, device, equipment and storage medium
WO2023279921A1 (en) Neural network model training method, data processing method, and apparatuses
CN114970666A (en) Spoken language processing method and device, electronic equipment and storage medium
CN111091011B (en) Domain prediction method, domain prediction device and electronic equipment
CN115600635A (en) Training method of neural network model, and data processing method and device
CN114330701A (en) Model training method, device, computer equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 10, No. 860, Xinyang Road, Lingang New District, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant after: Shanghai Yikangyuan Medical Health Technology Co.,Ltd.

Address before: Building 10, No. 860, Xinyang Road, Lingang New District, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant before: Shanghai dashanlin Medical Health Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20221222

Address after: Room 2703, No. 277, Xingang East Road, Haizhu District, Guangzhou, Guangdong 510220

Applicant after: Guangzhou Tianchen Health Technology Co.,Ltd.

Address before: Building 10, No. 860, Xinyang Road, Lingang New District, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant before: Shanghai Yikangyuan Medical Health Technology Co.,Ltd.