CN113468312A - Reply generation method and device based on multi-turn dialogue knowledge transfer - Google Patents
Reply generation method and device based on multi-turn dialogue knowledge transfer
- Publication number
- CN113468312A (application number CN202110825018.5A)
- Authority
- CN
- China
- Prior art keywords
- knowledge
- reply
- transfer
- text
- conversation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3346—Query execution using probabilistic model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/12—Use of codes for handling textual entities
- G06F40/126—Character encoding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/194—Calculation of difference between files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Abstract
The invention discloses a reply generation method and device based on knowledge transfer in multi-turn dialogue. The method comprises four steps: session encoding, knowledge transfer state prediction, knowledge retrieval and reply generation. It simplifies knowledge into knowledge tags and uses a conditional random field to compute the probability of knowledge-tag transitions in each turn of the conversation, thereby capturing the knowledge transfer process across multiple turns; the knowledge then guides reply generation, enriching the reply content and improving reply coherence. The device comprises a session encoding module, a knowledge transfer state prediction module, a knowledge retrieval module and a reply generation module; given a conversation history, it can use knowledge transfer information to guide the knowledge selection process and generate replies that are richer in content and more coherent with the preceding context.
Description
Technical Field
The invention relates to the technical field of natural language processing, and in particular to a reply generation method and device based on multi-turn dialogue knowledge transfer.
Background
In real conversation, a speaker replies by drawing on their own stored knowledge. Knowledge-grounded dialogue introduces external knowledge into a dialogue system so that the system generates more informative replies and produces fewer generic or meaningless ones. With the continuous development of deep learning and the steady emergence of high-quality datasets, knowledge-grounded dialogue has attracted the attention of many researchers, and an increasing number of strong deep-learning models for it have been proposed; this research also shows that introducing knowledge into a dialogue system can significantly improve reply generation.
At present, the main processing flow of deep-learning models for knowledge-grounded dialogue can be summarised in two steps: 1. knowledge selection, i.e., extracting knowledge related to the conversation history; 2. generating a reply based on the selected knowledge. In the knowledge-selection step, mainstream approaches select knowledge from the interaction between the current user input and the dialogue history, which raises two problems. 1. Existing methods such as keyword matching and attention mechanisms can capture knowledge in the conversation history that is related to the user input, but they do not consider the knowledge transitions that arise as the conversation proceeds, even though such transitions are ubiquitous in daily conversation (for example, topic shifts in multi-turn dialogue). 2. Attention mechanisms often extract repeated, redundant content: when several utterances in the conversation history are related to the user input and contain the same keywords, duplicate knowledge is extracted, and using it for reply generation inevitably degrades reply quality.
Disclosure of Invention
The present invention aims to solve the above problems by providing a reply generation method and apparatus based on multi-turn dialogue knowledge transfer, which simplify knowledge into knowledge tags and use a conditional random field to compute the probability of knowledge-tag transitions in each turn, thereby capturing the knowledge transfer process in multi-turn dialogue; the knowledge then guides reply generation, enriching the reply content and improving reply coherence.
The invention realizes the purpose through the following technical scheme:
a reply generation method based on multi-turn dialogue knowledge transfer comprises the following steps:
A. session coding: segmenting the conversation history text and the user input text into words and converting the words into word vectors, and simultaneously respectively encoding each round of conversation and user input by using an encoder to obtain semantic codes of each round of conversation history and semantic codes of user input;
B. predicting the state of the knowledge transfer: based on session history and semantic coding input by a user, predicting the current knowledge transfer probability distribution by using a historical knowledge transfer path so as to obtain a current knowledge tag;
C. knowledge retrieval: retrieving a knowledge text most relevant to the current input of the user based on the current knowledge tag in a knowledge base;
D. reply generation: generating a reply containing knowledge according to the retrieved knowledge text, the session history semantic codes and the user-input semantic code.
Further, the word vectors of the session history text and the user input text in step A include: randomly initialized word vectors, or word vectors trained with a deep-learning-based language model.
Further, the encoder in step a includes: a recurrent neural network based encoder.
Further, the method for predicting the knowledge transfer probability distribution in step B includes: an LSTM combined with a conditional random field.
Further, the knowledge retrieval method in step C includes: a retrieval method based on text similarity.
Further, the reply generation method in step D includes: interactive alignment of the session history, knowledge and user input; a recurrent neural network based decoder; and attention in the decoder.
The application also provides a reply generation device based on multi-turn dialogue knowledge transfer, including: a session encoding module, used for encoding the conversation history text and the user input text to obtain semantic codes for each turn of the historical conversation and a semantic code for the user input;
the knowledge transfer state prediction module is used for predicting the current knowledge transfer probability distribution according to the historical session semantic code, the user input semantic code and the historical knowledge transfer path so as to obtain a current knowledge tag;
the knowledge retrieval module is used for retrieving a knowledge text most relevant to the current input of the user based on the current knowledge label in a knowledge base;
and the reply generation module is used for generating a reply containing knowledge according to the retrieved knowledge text, the session history semantic code and the user input semantic code.
The invention has the beneficial effects that:
according to the reply generation method based on multi-turn dialogue knowledge transfer, knowledge is simplified into knowledge labels, a conditional random field is utilized to simulate the knowledge transfer process under a real scene in multi-turn dialogue, and then the knowledge is utilized to guide reply generation, so that the effect of reply generation can be optimized by fully utilizing knowledge transfer information.
According to the reply generation device based on multi-round conversation knowledge transfer, conversation history is given, knowledge transfer information can be used for guiding the knowledge selection process, and replies which are richer in content and more coherent with the above are generated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a reply generation method based on multi-turn dialogue knowledge transfer according to the present invention;
FIG. 2 is a block diagram of a reply generation apparatus based on multi-turn dialog knowledge transfer according to the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
The first embodiment is as follows:
Fig. 1 shows a flowchart of the reply generation method based on multi-turn dialogue knowledge transfer, which comprises the following steps:
A. Session encoding: segment the conversation history text and the user input text into words and convert the words into word vectors; meanwhile, use an encoder to encode each turn of the conversation and the user input separately, obtaining semantic codes for each turn of the conversation history and a semantic code for the user input.
In any particular embodiment, the text word vectors include: randomly initialized word vectors, which are updated iteratively together with the model parameters during training, or word vectors obtained by training a deep-learning-based language model, such as GloVe, Word2Vec or BERT.
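As an illustration of the randomly initialised option, the following sketch (not the patented implementation; the vocabulary, dimension and initialisation range are assumptions made for the example) builds a word-vector table that a training loop would then update together with the model parameters:

```python
import random

def build_embeddings(vocab, dim, seed=0):
    """Randomly initialised word vectors; during training these values
    would be updated iteratively along with the model parameters."""
    rng = random.Random(seed)
    return {w: [rng.uniform(-0.1, 0.1) for _ in range(dim)] for w in vocab}

# hypothetical two-word vocabulary with 4-dimensional vectors
emb = build_embeddings(["hello", "world"], dim=4)
print(len(emb["hello"]))  # → 4
```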
In any particular embodiment, the encoder includes: a recurrent neural network based encoder, such as an LSTM based encoder, or a GRU based encoder.
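A minimal sketch of such a recurrent encoder follows. It uses a plain Elman-style cell standing in for the LSTM/GRU cells mentioned above, with toy fixed weights rather than learned parameters; it only illustrates how a turn's word vectors are folded into a single semantic code:

```python
import math

def rnn_encode(word_vectors, w_in, w_rec):
    """Fold one turn's word vectors into a single hidden state, which
    serves as the semantic code of that turn (or of the user input)."""
    dim = len(w_rec)
    h = [0.0] * dim
    for x in word_vectors:
        # h_t = tanh(W_in * x_t + W_rec * h_{t-1})
        h = [math.tanh(sum(w_in[d][i] * x[i] for i in range(len(x)))
                       + sum(w_rec[d][j] * h[j] for j in range(dim)))
             for d in range(dim)]
    return h

# toy one-dimensional example with fixed weights
code = rnn_encode([[1.0], [0.5]], w_in=[[1.0]], w_rec=[[0.5]])
print(code)
```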
B. Predicting the state of the knowledge transfer: based on session history and semantic coding input by a user, predicting the current knowledge transfer probability distribution by using a historical knowledge transfer path so as to obtain a current knowledge tag.
The method for predicting the knowledge transfer probability distribution includes: an LSTM combined with a conditional random field.
In any embodiment, the concrete expression form of knowledge is a knowledge tag, and the knowledge content of the whole knowledge dialogue is specified through the predefined knowledge tag.
In any embodiment, the method combining an LSTM with a conditional random field has two steps: first, iteratively compute the probability distribution D of the knowledge tag at each moment of the session history using a bidirectional LSTM; second, use the conditional random field, combined with D, to compute the state transition matrix M of the knowledge tags, and then compute the most likely knowledge tag at the current moment from M.
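The second step above, decoding the most likely tag sequence from per-turn tag scores and a CRF transition matrix, can be sketched with Viterbi decoding. This is a minimal illustration rather than the patented implementation: `emissions` stands in for the BiLSTM output D, `transitions` for the matrix M, and the values below are made-up toy scores. The last element of the returned path is the current knowledge tag:

```python
def viterbi_decode(emissions, transitions):
    """Find the most likely knowledge-tag sequence given per-turn tag
    scores (emissions[t][j]) and a CRF-style transition matrix
    (transitions[i][j] = score of moving from tag i to tag j)."""
    n_tags = len(emissions[0])
    scores = list(emissions[0])  # best score of any path ending in tag j
    backpointers = []
    for emission in emissions[1:]:
        step_back, new_scores = [], []
        for j in range(n_tags):
            best_i = max(range(n_tags),
                         key=lambda i: scores[i] + transitions[i][j])
            step_back.append(best_i)
            new_scores.append(scores[best_i] + transitions[best_i][j] + emission[j])
        scores = new_scores
        backpointers.append(step_back)
    # trace the best path backwards from the highest-scoring final tag
    path = [max(range(n_tags), key=lambda j: scores[j])]
    for step_back in reversed(backpointers):
        path.append(step_back[path[-1]])
    path.reverse()
    return path

# toy example: 3 knowledge tags, 2 turns of dialogue
emissions = [[0.1, 0.7, 0.2],
             [0.6, 0.3, 0.1]]
transitions = [[0.0, 0.0, 0.0],
               [0.5, -0.5, 0.0],  # tag 1 prefers moving to tag 0
               [0.0, 0.0, 0.0]]
print(viterbi_decode(emissions, transitions))  # → [1, 0]; current tag is 0
```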
In any particular embodiment, the loss function for state-tag prediction employs a log-likelihood loss.
C. Knowledge retrieval: in the knowledge base, based on the current knowledge tags, the knowledge text most relevant to the user's current input is retrieved.
A method of retrieving knowledge, comprising: a retrieval method based on text similarity.
In any embodiment, the text-similarity-based retrieval methods include: BM25, TF-IDF, etc.
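As a sketch of the TF-IDF variant (assumed details: the knowledge base below is hypothetical toy data already filtered by the current knowledge tag, and cosine similarity is used to rank candidates, which the source does not mandate):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF weight dictionaries for a list of tokenised documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_tokens, knowledge_texts):
    """Return the index of the knowledge text most similar to the query."""
    vecs = tfidf_vectors(knowledge_texts + [query_tokens])
    query_vec = vecs[-1]
    return max(range(len(knowledge_texts)),
               key=lambda i: cosine(vecs[i], query_vec))

# hypothetical knowledge base, already narrowed by the current knowledge tag
kb = [["the", "moon", "orbits", "earth"],
      ["python", "is", "a", "programming", "language"]]
print(retrieve(["tell", "me", "about", "python", "language"], kb))  # → 1
```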
In any particular embodiment, the loss function of knowledge retrieval employs cross-entropy loss.
D. Reply generation: generate a reply containing knowledge according to the retrieved knowledge text, the session history semantic codes and the user-input semantic code.
The reply generation method comprises: interactive alignment of the session history, knowledge and user input; a recurrent neural network based decoder; and attention in the decoder.
In any embodiment, the interactive alignment of the session history, knowledge and user input may adopt a Deep Alignment Network (DAN).
In any particular embodiment, the interactive alignment may be repeated multiple times.
In any embodiment, the recurrent neural network based decoder may employ LSTM or GRU units.
In any embodiment, the decoder's attention mechanism mainly attends over the conversation history, the user input, the knowledge text and the previously decoded output, so that decoding focuses on important word information.
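A minimal sketch of such a decoder-side attention step follows (dot-product attention over toy vectors; this only illustrates the weighting, not the patented decoder, and the query/key/value values are assumptions for the example):

```python
import math

def attention(query, keys, values):
    """Dot-product attention: weight each value by how well its key
    matches the decoder query, so decoding focuses on important words."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values))
               for d in range(dim)]
    return weights, context

# toy example: the second source position matches the query best
keys = [[1.0, 0.0], [0.0, 1.0]]
weights, context = attention([0.0, 2.0], keys, values=keys)
print(round(weights[1], 2))  # → 0.88
```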
In any particular embodiment, the loss function of the decoder employs cross-entropy loss.
The reply generation method based on multi-turn dialogue knowledge transfer provided by this embodiment simplifies knowledge into knowledge tags, uses a conditional random field to model the real-world knowledge transfer process within multi-turn dialogue, and then uses the knowledge to guide reply generation, so that knowledge transfer information can be fully exploited to improve the reply generation effect.
Example two:
fig. 2 shows a block diagram of a reply generation device based on multi-turn dialog knowledge transfer in the present invention, which includes:
a session encoding module: used for encoding the conversation history text and the user input text to obtain semantic codes for each turn of the historical conversation and a semantic code for the user input;
the knowledge transfer state prediction module: the system is used for predicting the current knowledge transfer probability distribution according to the historical session semantic codes, the user input semantic codes and the historical knowledge transfer paths so as to obtain a current knowledge label;
a knowledge retrieval module: the knowledge database is used for retrieving knowledge texts most relevant to the current input of the user based on the current knowledge labels;
a reply generation module: and the system is used for generating a reply containing knowledge according to the retrieved knowledge text, the session history semantic code and the user input semantic code.
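The data flow through the four modules above can be sketched as follows. All four callables are placeholders standing in for the modules described in this embodiment (the stub implementations and strings below are invented purely to show the wiring):

```python
def generate_reply(history, user_input,
                   encode, predict_tag, retrieve_knowledge, decode):
    """Wire the four modules together: encode the session, predict the
    current knowledge tag, retrieve a knowledge text, generate the reply."""
    history_codes = [encode(turn) for turn in history]
    input_code = encode(user_input)
    tag = predict_tag(history_codes, input_code)
    knowledge = retrieve_knowledge(tag, user_input)
    return decode(knowledge, history_codes, input_code)

# stub modules just to show the data flow between the four components
reply = generate_reply(
    history=["hi"], user_input="tell me a fact",
    encode=lambda text: [float(len(text))],
    predict_tag=lambda h, u: "astronomy",
    retrieve_knowledge=lambda tag, q: f"[{tag}] the moon orbits earth",
    decode=lambda k, h, u: f"Did you know? {k}",
)
print(reply)  # → Did you know? [astronomy] the moon orbits earth
```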
It should be noted that the modules (or units) in this embodiment are logical divisions; in a concrete implementation, several modules (or units) may be combined into one module (or unit), and one module (or unit) may also be split into several modules (or units).
With the reply generation device based on multi-turn dialogue knowledge transfer provided by this embodiment, given a conversation history, knowledge transfer information can be used to guide the knowledge selection process and to generate replies that are richer in content and more coherent with the preceding context.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims. It should be noted that the various technical features described in the above embodiments can be combined in any suitable manner without contradiction, and the invention is not described in any way for the possible combinations in order to avoid unnecessary repetition. In addition, any combination of the various embodiments of the present invention is also possible, and the same should be considered as the disclosure of the present invention as long as it does not depart from the spirit of the present invention.
Claims (7)
1. A reply generation method based on multi-turn dialogue knowledge transfer is characterized by comprising the following steps:
A. session coding: segmenting the conversation history text and the user input text into words and converting the words into word vectors, and simultaneously respectively encoding each round of conversation and user input by using an encoder to obtain semantic codes of each round of conversation history and semantic codes of user input;
B. predicting the state of the knowledge transfer: based on session history and semantic coding input by a user, predicting the current knowledge transfer probability distribution by using a historical knowledge transfer path so as to obtain a current knowledge tag;
C. knowledge retrieval: retrieving a knowledge text most relevant to the current input of the user based on the current knowledge tag in a knowledge base;
D. reply generation: generating a reply containing knowledge according to the retrieved knowledge text, the session history semantic codes and the user-input semantic code.
2. The method for generating replies based on the multi-turn dialog knowledge transfer of claim 1, wherein the word vectors of the session history text and the user input text in step A comprise: a randomly initialized word vector, or a word vector trained based on a deeply learned language model.
3. The method for generating reply based on multi-turn dialog knowledge transfer of claim 1, wherein the encoder in the step a comprises: a recurrent neural network based encoder.
4. The reply generation method based on multi-turn dialog knowledge transfer of claim 1, wherein the method for predicting the knowledge transfer probability distribution in step B comprises: an LSTM combined with a conditional random field.
5. The reply generation method based on multi-turn dialog knowledge transfer of claim 1, wherein the knowledge retrieval method in the step C comprises: a retrieval method based on text similarity.
6. The reply generation method based on multi-turn dialog knowledge transfer of claim 1, wherein the reply generation method in step D comprises: interactive alignment of session history, knowledge and user input; a recurrent neural network based decoder; and attention in the decoder.
7. A reply generation apparatus based on multi-turn dialog knowledge transfer, comprising: the conversation coding module is used for coding according to the conversation historical text and the user input text to obtain semantic codes of each round of historical conversation and semantic codes input by the user;
the knowledge transfer state prediction module is used for predicting the current knowledge transfer probability distribution according to the historical session semantic code, the user input semantic code and the historical knowledge transfer path so as to obtain a current knowledge tag;
the knowledge retrieval module is used for retrieving a knowledge text most relevant to the current input of the user based on the current knowledge label in a knowledge base;
and the reply generation module is used for generating a reply containing knowledge according to the retrieved related knowledge, the session history semantic code and the user input semantic code.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110825018.5A CN113468312A (en) | 2021-07-21 | 2021-07-21 | Reply generation method and device based on multi-turn dialogue knowledge transfer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110825018.5A CN113468312A (en) | 2021-07-21 | 2021-07-21 | Reply generation method and device based on multi-turn dialogue knowledge transfer |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113468312A true CN113468312A (en) | 2021-10-01 |
Family
ID=77881508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110825018.5A Pending CN113468312A (en) | 2021-07-21 | 2021-07-21 | Reply generation method and device based on multi-turn dialogue knowledge transfer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113468312A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114579728A (en) * | 2022-03-15 | 2022-06-03 | 四川新网银行股份有限公司 | Dialogue generation method, device, equipment and medium applied to multi-turn dialogue system |
CN114579728B (en) * | 2022-03-15 | 2024-07-12 | 四川新网银行股份有限公司 | Dialogue generation method, device, equipment and medium applied to multi-round dialogue system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462750A (en) * | 2020-03-20 | 2020-07-28 | 北京邮电大学 | End-to-end task type dialogue system and method for semantic and knowledge enhancement |
CN112084314A (en) * | 2020-08-20 | 2020-12-15 | 电子科技大学 | Knowledge-introducing generating type session system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110222164B (en) | Question-answer model training method, question and sentence processing device and storage medium | |
CN110427490B (en) | Emotional dialogue generation method and device based on self-attention mechanism | |
CN111639175B (en) | Self-supervision dialogue text abstract method and system | |
CN107680580B (en) | Text conversion model training method and device, and text conversion method and device | |
CN114020862B (en) | Search type intelligent question-answering system and method for coal mine safety regulations | |
CN111950296B (en) | Comment target emotion analysis based on BERT fine tuning model | |
CN111460800A (en) | Event generation method and device, terminal equipment and storage medium | |
CN113392265A (en) | Multimedia processing method, device and equipment | |
CN116303977B (en) | Question-answering method and system based on feature classification | |
CN112528654A (en) | Natural language processing method and device and electronic equipment | |
CN112560456A (en) | Generation type abstract generation method and system based on improved neural network | |
CN110516053B (en) | Dialogue processing method, device and computer storage medium | |
Bai et al. | Loopy residual hashing: Filling the quantization gap for image retrieval | |
CN111444399A (en) | Reply content generation method, device, equipment and readable storage medium | |
CN110891201B (en) | Text generation method, device, server and storage medium | |
CN111191023B (en) | Automatic generation method, device and system for topic labels | |
CN112417118A (en) | Dialog generation method based on marked text and neural network | |
CN116958738A (en) | Training method and device of picture recognition model, storage medium and electronic equipment | |
CN113741759B (en) | Comment information display method and device, computer equipment and storage medium | |
CN114519353B (en) | Model training method, emotion message generation method and device, equipment and medium | |
CN113468312A (en) | Reply generation method and device based on multi-turn dialogue knowledge transfer | |
Li et al. | Diverter-guider recurrent network for diverse poems generation from image | |
CN112150103B (en) | Schedule setting method, schedule setting device and storage medium | |
CN113392639A (en) | Title generation method and device based on artificial intelligence and server | |
CN111078848A (en) | Input prompting method and device for conversation robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20211001 |