CN112199502B - Verse generation method and device based on emotion, electronic equipment and storage medium - Google Patents

Verse generation method and device based on emotion, electronic equipment and storage medium

Info

Publication number
CN112199502B
Authority
CN
China
Prior art keywords
emotion
verse
target
poem
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011155029.9A
Other languages
Chinese (zh)
Other versions
CN112199502A (en)
Inventor
欧文杰
林悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202011155029.9A
Publication of CN112199502A
Application granted
Publication of CN112199502B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G06F16/355 Class or cluster creation or modification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure relates to an emotion-based verse generation method and device, an electronic device, and a computer-readable storage medium, relates to the technical field of artificial intelligence, and can be applied to scenes in which a target verse with controllable emotion is generated. The method comprises the following steps: acquiring a verse data set to be marked, and acquiring a pre-constructed verse emotion classification model; inputting the verse data set to be marked into the verse emotion classification model to obtain emotion labels of the verses in the data set, and generating an emotion-marked verse set according to the verses and their emotion labels; training an initial verse generation model with the emotion-marked verse set to generate an emotion verse generation model; and acquiring a target emotion, and inputting the target emotion into the emotion verse generation model to generate a target verse corresponding to the target emotion. The present disclosure can thus generate a target verse that accords with predetermined emotion information.

Description

Verse generation method and device based on emotion, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to a verse generating method based on emotion, a verse generating device based on emotion, electronic equipment and a computer readable storage medium.
Background
Chinese ancient poems are a crystallization of traditional Chinese culture; their graceful, concise lines and neat, antithetical sentence patterns are pleasing to readers. In recent years, with continuing in-depth research on artificial intelligence and natural language processing, more and more deep learning techniques have been put into practice. People have begun to apply artificial intelligence to poem generation. For example, in some game scenes, corresponding ancient poems can be generated for users in a personalized manner according to data, given topics, and the like, improving players' sense of freshness and interaction; in some marketing scenes, ancient poems are generated in a customized manner according to user demand, which facilitates spontaneous promotion by users, reduces marketing cost, and greatly improves promotion efficiency.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a verse generation method, a verse generation device, an electronic device, and a computer-readable storage medium, so as to solve, at least to a certain extent, the problem that current verse generation cannot take emotion as a variable to control the emotional tendency of the generated verses.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an emotion-based verse generation method, including: acquiring a verse data set to be marked, and acquiring a pre-constructed verse emotion classification model; inputting the verse data set to be marked into the verse emotion classification model to obtain emotion labels of the verses in the data set, and generating an emotion-marked verse set according to the verses and their emotion labels; training an initial verse generation model with the emotion-marked verse set to generate an emotion verse generation model; and acquiring a target emotion, and inputting the target emotion into the emotion verse generation model to generate a target verse corresponding to the target emotion.
Optionally, before acquiring the pre-constructed verse emotion classification model, the method further includes: acquiring an initial poem data set and an initial classification model; inputting an initial verse data set into an initial classification model, acquiring a verse emotion vector output by the initial classification model, and determining probability distribution of each emotion label in the verse emotion vector; determining a loss function according to each emotion label and probability distribution of each emotion label; optimizing the initial classification model according to the loss function to obtain the verse emotion classification model.
Optionally, inputting the verse data set to be marked into the verse emotion classification model to obtain emotion labels of the verses in the data set includes: acquiring a verse to be marked from the verse data set to be marked; inputting the verse to be marked into an encoder of the verse emotion classification model, the encoder outputting a hidden state vector corresponding to the verse; inputting the hidden state vector into a fully connected layer, the fully connected layer outputting a target emotion vector corresponding to the verse; and determining the emotion label of the verse to be marked according to the target emotion vector.
Optionally, inputting the target emotion into the emotion verse generation model to generate a target verse corresponding to the target emotion includes executing the following steps with the emotion verse generation model until the target verse is generated: determining the current moment and determining an input vector for the current moment, wherein the input vector includes the target emotion; acquiring the generated history information vector, and generating a current information vector according to the input vector and the history information vector at the current moment; and determining a probability distribution over candidate generated words from the current information vector, so as to determine the target generated word from the probability distribution.
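The per-step generation loop described above can be sketched as follows. This is a minimal NumPy sketch, not the patent's actual model: the weight matrices, toy dimensions, one-hot emotion vector, and greedy word choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, E = 50, 16, 4            # toy vocabulary size, hidden size, emotion-vector size

W_h = rng.normal(size=(H, H + E), scale=0.1)   # mixes input vector with history vector
W_o = rng.normal(size=(V, H), scale=0.1)       # projects current info vector to vocab scores

def decode_step(emotion_vec, history_vec):
    """One generation step: build the input vector (which includes the target
    emotion), combine it with the history information vector into a current
    information vector, then derive a probability distribution over candidates."""
    x = np.concatenate([history_vec, emotion_vec])   # simplified input vector
    current = np.tanh(W_h @ x)                       # current information vector
    scores = W_o @ current
    e = np.exp(scores - scores.max())
    probs = e / e.sum()                              # distribution over candidate words
    return current, probs

history = np.zeros(H)
emotion = np.eye(E)[2]                               # one-hot target emotion (illustrative)
generated = []
for _ in range(5):                                   # generate a 5-character toy verse
    history, probs = decode_step(emotion, history)
    generated.append(int(np.argmax(probs)))          # greedy pick of the target word
```

In a real model the step function would be a trained recurrent or attention-based decoder; the point here is only the loop structure: input vector in, current information vector out, word chosen from the resulting distribution.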
Optionally, determining the input vector at the current moment includes: determining the preceding verse corresponding to the current moment and its semantic information; acquiring the most recently generated word and determining the context-associated information generated by an attention model; and taking the most recently generated word, the history information vector, the preceding-verse semantic information, the context-associated information, and the target emotion as the input vector at the current moment.
Optionally, the context-associated information includes a weighted average of the words in the preceding verse, and determining the context-associated information generated by the attention model includes: determining a word weight for each word in the preceding verse according to the history information vector and the preceding-verse semantic information; and determining the weighted average of the words in the preceding verse based on the word weights.
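The weighted average just described is standard attention pooling. A minimal NumPy sketch, with dot-product scoring as an illustrative assumption (the patent does not specify the scoring function):

```python
import numpy as np

def context_vector(history_vec, prev_word_vecs):
    """Attention over the preceding verse: score each word vector against the
    history information vector, normalize the scores into word weights, and
    return the weights plus the weighted average of the word vectors."""
    scores = prev_word_vecs @ history_vec        # one score per preceding word
    e = np.exp(scores - scores.max())
    weights = e / e.sum()                        # word weights sum to 1
    return weights, weights @ prev_word_vecs     # weighted average (context info)

rng = np.random.default_rng(1)
prev_words = rng.normal(size=(7, 16))   # 7 words of the preceding verse, dim 16 (toy)
h = rng.normal(size=16)                 # current history information vector (toy)
w, ctx = context_vector(h, prev_words)
```

Words that align better with the current history vector receive larger weights, so the context vector emphasizes the parts of the preceding verse most relevant to the word being generated.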
Optionally, determining the target generated word according to the probability distribution includes: acquiring a preconfigured word-number threshold, where the word-number threshold determines the number of target candidate words; sorting the initial candidate words by probability and selecting that number of target candidate words according to the word-number threshold; and acquiring a preconfigured cumulative probability threshold, and determining the target generated word from the target candidate words according to the cumulative probability threshold.
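The combination of a word-number threshold and a cumulative probability threshold corresponds to top-k followed by nucleus (top-p) filtering. A minimal sketch; the thresholds and toy distribution are illustrative, not values from the patent:

```python
import numpy as np

def sample_target_word(probs, k=8, p_threshold=0.9, rng=None):
    """Keep the k most probable candidates (word-number threshold), then keep
    only the smallest prefix whose cumulative probability reaches p_threshold
    (cumulative probability threshold), and sample the target word from it."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1][:k]            # top-k candidate words
    top = probs[order]
    cum = np.cumsum(top / top.sum())               # cumulative probability
    cutoff = int(np.searchsorted(cum, p_threshold)) + 1
    kept = order[:cutoff]                          # nucleus of candidates
    kept_p = probs[kept] / probs[kept].sum()       # renormalize before sampling
    return int(rng.choice(kept, p=kept_p))

probs = np.array([0.4, 0.25, 0.15, 0.1, 0.05, 0.03, 0.02])
word = sample_target_word(probs, k=4, p_threshold=0.8, rng=np.random.default_rng(0))
```

With k=4 and p_threshold=0.8, the top four candidates are truncated further to the first three (cumulative probability 0.889 within the top-4 mass), so the sampled word is always one of the three most probable candidates.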
According to a second aspect of the present disclosure, there is provided an emotion-based verse generating device, including: an acquisition module for acquiring a verse data set to be marked and a pre-constructed verse emotion classification model; a marked-verse-set generation module for inputting the verse data set to be marked into the verse emotion classification model to obtain emotion labels of the verses in the data set, and generating an emotion-marked verse set according to the verses and their emotion labels; a verse model generation module for training an initial verse generation model with the emotion-marked verse set to generate an emotion verse generation model; and a verse generation module for acquiring a target emotion and inputting the target emotion into the emotion verse generation model to generate a target verse corresponding to the target emotion.
Optionally, the emotion-based verse generating device further includes a model generating module, configured to acquire an initial verse data set, and acquire an initial classification model; inputting an initial verse data set into an initial classification model, acquiring a verse emotion vector output by the initial classification model, and determining probability distribution of each emotion label in the verse emotion vector; determining a loss function according to each emotion label and probability distribution of each emotion label; optimizing the initial classification model according to the loss function to obtain the verse emotion classification model.
Optionally, the marked-verse-set generation module includes an emotion label determining unit configured to: obtain a verse to be marked from the verse data set to be marked; input the verse to be marked into an encoder of the verse emotion classification model, the encoder outputting a hidden state vector corresponding to the verse; input the hidden state vector into a fully connected layer, the fully connected layer outputting a target emotion vector corresponding to the verse; and determine the emotion label of the verse to be marked according to the target emotion vector.
Optionally, the verse generating module includes a generating word generating sub-module, configured to execute the following steps by the emotion verse generating model until the target verse is generated: determining the current moment and determining an input vector of the current moment; wherein the input vector includes a target emotion; acquiring a generated historical information vector, and generating a current information vector according to the input vector and the historical information vector at the current moment; the probability distribution of the candidate generated words is determined from the current information vector to determine the target generated words from the probability distribution.
Optionally, the generating word generating submodule includes an input vector determining unit, configured to determine a previous sentence corresponding to the current moment and previous semantic information of the previous sentence; acquiring a history latest generated word generated by history and determining context associated information generated by an attention model; and taking the latest historical generated word, the historical information vector, the context semantic information, the context associated information and the target emotion as input vectors at the current moment.
Optionally, the input vector determining unit includes an average value determining subunit, configured to determine a word weight of each word in the above verse according to the historical information vector and the above semantic information; a weighted average of each word in the above verses is determined based on the weights of the words.
Optionally, the generated-word generating submodule includes a generated-word generating unit configured to: obtain a preconfigured word-number threshold, where the word-number threshold determines the number of target candidate words; sort the initial candidate words by probability and select that number of target candidate words according to the word-number threshold; and obtain a preconfigured cumulative probability threshold, and determine the target generated word from the target candidate words according to the cumulative probability threshold.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory having stored thereon computer readable instructions which when executed by the processor implement the emotion-based verse generation method according to any of the above.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a verse generating method according to any of the above.
The technical scheme provided by the disclosure can comprise the following beneficial effects:
according to the emotion-based verse generation method in the exemplary embodiments of the disclosure, an initial verse data set is acquired, and an initial classification model is trained on it to obtain a verse emotion classification model; a verse data set to be marked is acquired and input into the verse emotion classification model to obtain emotion labels of the verses in the data set, and an emotion-marked verse set is generated from the verses and their emotion labels; an initial verse generation model is trained with the emotion-marked verse set to generate an emotion verse generation model; and a target emotion is acquired and input into the emotion verse generation model to generate a target verse corresponding to the target emotion. With this method, on one hand, a verse emotion classification model can be trained from the initial verse data set and used to label the verse data set to be marked, yielding an emotion-marked verse set with emotion labels. On another hand, generating the emotion verse generation model from the emotion-marked verse set gives the generation model greater interpretability and makes it easier to intervene in. Finally, using the target emotion as a factor influencing verse generation makes it possible to generate a target verse that accords with the target emotion, so the generated verse has emotion controllability, improving interactivity for users.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 schematically illustrates a flow chart of an emotion-based verse generation method according to an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of training a verse emotion classification model and emotion verse generation model according to an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a model structure diagram of a verse emotion tag obtained by a verse emotion classification model according to an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a model structure diagram of generating a target verse using an emotion verse generation model in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an emotion-based verse generating device according to an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure;
fig. 7 schematically illustrates a schematic diagram of a computer-readable storage medium according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In recent years, people have begun to apply artificial intelligence to the poem generation process. For example, in some game scenes, natural language processing (NLP) techniques can be adopted to generate corresponding ancient poems for users in a personalized manner according to data, given topics, and the like, improving players' sense of freshness and interaction; in some marketing scenes, ancient poems are generated in a customized manner according to user demand, which facilitates spontaneous promotion by users, reduces marketing cost, and greatly improves promotion efficiency.
Some existing controllable text generation techniques model emotion information implicitly with a variational autoencoder (VAE), so that emotion controllability can be achieved at generation time. However, this approach has not been applied to ancient poem generation, for two reasons: no complete ancient poem data set with an emotion label for every verse yet exists, and implicit modeling tends to depend too heavily on the emotion classification objective, which makes the emotion modeling inaccurate.
In the technical field of ancient poem generation, modeling is generally performed with a sequence-to-sequence (Seq2Seq) model; verses generated by a Seq2Seq model are relatively fluent, but such a model usually cannot generate verses containing specified emotion information. At present, among information-controllable ancient poem generation methods there are topic-based generation models, but no emotion-controllable generation model.
Based on this, the present exemplary embodiment first provides an emotion-based verse generation method, which may be implemented by a server or by a terminal device; the terminal described in the present disclosure may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a personal digital assistant (PDA), a wearable device, and a smart bracelet, as well as fixed terminals such as a desktop computer. FIG. 1 schematically illustrates the flow of an emotion-based verse generation method according to some embodiments of the present disclosure. Referring to fig. 1, the emotion-based verse generation method may include the following steps:
Step S110, a verse data set to be marked is acquired, and a pre-constructed verse emotion classification model is acquired.
Step S120, inputting the verse data set to be marked into a verse emotion classification model to obtain emotion labels of all the verses in the verse data set to be marked, and generating emotion marking verse sets according to all the verses and the emotion labels of all the verses.
Step S130, training the initial poem generation model by using the emotion marking poem set to generate an emotion poem generation model.
Step S140, obtaining the target emotion, and inputting the target emotion into the emotion verse generation model to generate a target verse corresponding to the target emotion.
According to the emotion-based verse generation method in the embodiments of the disclosure, on one hand, a verse emotion classification model can be trained on an initial verse data set and used for labeling, yielding a verse data set with emotion labels. On another hand, generating the emotion verse generation model from the emotion-marked verse set gives the generation model greater interpretability and makes it easier to intervene in. Finally, using the target emotion as a factor influencing verse generation makes it possible to generate a target verse that accords with the target emotion, so the generated verse has emotion controllability, improving interactivity for users.
Next, the emotion-based verse generation method in the present exemplary embodiment will be further described.
In step S110, a verse data set to be marked is acquired, and a pre-constructed verse emotion classification model is acquired.
In some exemplary embodiments of the present disclosure, the verse data set to be annotated may be a data set of verses without emotion annotation. The verse emotion classification model may be a classification model that receives an input verse and outputs a corresponding emotion tag.
Because no large-scale verse data set with emotion labels currently exists, a verse data set to be marked can first be acquired, along with a pre-constructed verse emotion classification model, so that the pre-constructed model can be used to label each verse in the data set with its corresponding emotion label.
According to some exemplary embodiments of the present disclosure, an initial set of verse data is obtained, and an initial classification model is obtained; inputting an initial verse data set into an initial classification model, acquiring a verse emotion vector output by the initial classification model, and determining probability distribution of each emotion label in the verse emotion vector; determining a loss function according to each emotion label and probability distribution of each emotion label; optimizing the initial classification model according to the loss function to obtain the verse emotion classification model.
The initial verse data set may be a data set of a small number of verses with emotion labels; each verse in the initial verse data set has a corresponding emotion label. The initial classification model may be a pre-constructed classification model, and the verse emotion classification model can be obtained by training the initial classification model on the initial verse data set. The verse emotion vector may be the vector of scores over emotion labels that the initial classification model outputs for a verse in the initial verse data set. The emotion labels may be labels representing the emotional tendency of a verse, i.e., all possible emotion values of each verse in the initial verse data set; for example, the emotion labels may comprise 5 emotion categories, where 1 represents very negative, 2 generally negative, 3 neutral, 4 generally positive, and 5 very positive. The probability distribution may be the distribution of probability values over the emotion labels in the target emotion vector. The loss function maps the probability value of an emotion label to a non-negative real number representing the loss for the emotion label corresponding to a given verse.
An initial verse data set, i.e., a small-scale emotion-labeled verse data set, is acquired, along with an initial classification model; the initial verse data set is then input into the initial classification model for training to obtain the verse emotion classification model. The specific process is as follows: input the initial verse data set into the initial classification model and obtain the target emotion vector output for each verse; determine the probability distribution of each emotion label in the target emotion vector. For example, the target emotion vector may be s = (s1, s2, s3, s4, s5), representing the scores of the different emotions for the verse, and the probability distribution p = (p1, p2, p3, p4, p5) over the emotions can be obtained by applying a normalized exponential function (softmax). After obtaining the probability distribution of the emotion labels, a loss function can be determined from each emotion label and its probability distribution; for example, the loss function may be the cross-entropy loss (CE Loss). The initial classification model is then optimized according to the loss function to obtain the verse emotion classification model; for example, the model may be optimized by gradient descent.
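The softmax conversion and cross-entropy loss described above can be sketched in a few lines. A minimal NumPy sketch; the score values and the gold-label index are illustrative, not taken from the patent:

```python
import numpy as np

def softmax(s):
    # Normalized exponential: converts emotion scores into a probability distribution.
    e = np.exp(s - np.max(s))        # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(p, label_index):
    # CE loss for a single verse whose gold emotion label has 0-based index label_index.
    return -np.log(p[label_index])

# Emotion score vector s = (s1, ..., s5) for one verse over the 5 emotion categories.
s = np.array([1.2, 0.3, -0.5, 2.1, 0.4])
p = softmax(s)                       # probability distribution p = (p1, ..., p5)
loss = cross_entropy(p, 3)           # e.g. gold label is category 4, "generally positive"
```

Minimizing this loss over the labeled verses (e.g. by gradient descent, as the text notes) is what turns the initial classification model into the verse emotion classification model.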
Those skilled in the art will readily appreciate that the corresponding loss function and model optimization methodology may be determined according to specific needs, and this disclosure is not limited in any way.
For example, the process of training the initial classification model to obtain the verse emotion classification model is as follows: an initial verse data set is acquired containing 20,000 verses with emotion labels, of which 17,356 are seven-character verses and 2,644 are five-character verses. From this data set, training data can be constructed whose input is a single verse and whose output is its emotion label, and the data are divided into a training set and a validation set at a ratio of 9:1. The initial classification model is trained on the training set, and its output is verified on the validation set so as to optimize the model and obtain the verse emotion classification model.
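The 9:1 train/validation split described above can be sketched as follows; the verse strings and labels here are toy stand-ins, not the actual data set:

```python
import random

# Toy stand-ins for the 20,000 labeled verses described above (contents illustrative):
# each item is (verse text, emotion label in 1..5).
dataset = [(f"verse_{i}", i % 5 + 1) for i in range(20000)]

random.seed(42)                         # fixed seed for a reproducible split
random.shuffle(dataset)
split = int(len(dataset) * 0.9)         # 9:1 train/validation split
train_set, val_set = dataset[:split], dataset[split:]
```

Shuffling before splitting avoids the two verse lengths (seven-character and five-character) being unevenly distributed between the training and validation sets.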
In step S120, the verse data set to be marked is input into the verse emotion classification model, the emotion labels of the verses in the data set are obtained, and an emotion-marked verse set is generated according to the verses and their emotion labels.
In some exemplary embodiments of the present disclosure, each of the verses in the verse data set to be annotated may be all of the verses contained in the verse data set to be annotated. The emotion labels of the respective poems may be emotion labels respectively corresponding to the respective poems. The emotion marking poem set can be a data set consisting of each poem and an emotion label corresponding to each poem.
Presently available verse data sets generally comprise small-scale data with emotion annotations and large-scale unlabeled ancient poem data sets. Manually annotating emotion on a large-scale ancient poem data set faces the following difficulties: (1) High labor cost: manually labeling verses with emotion labels typically requires a large amount of time and resources. (2) Low labeling accuracy: subjective emotion labeling usually has no clear boundary, and the poems themselves are often obscure, making it difficult for human annotators to reach usable labeling accuracy. Therefore, in this method, a small-scale emotion-labeled verse data set (i.e., the initial data set) is used to train a neural-network-based verse emotion classification model; the large-scale ancient poem data set is then machine-labeled by this model, and data whose labels are ambiguous are identified and removed by a confidence-threshold method. In this way a large-scale ancient poem data set with emotion labels is constructed at low cost and with high accuracy.
After the verse data set to be marked is obtained, it can be input into the verse emotion classification model to obtain the emotion labels of all verses in the set. In addition, to generate a reliable emotion marking verse set, a probability threshold may be set in the present disclosure: verses whose predicted emotion probability from the verse emotion classification model is higher than the probability threshold are retained, while the remaining verses, whose emotion labels are considered too difficult to judge and would introduce noise, are removed. After this marking and cleaning, all retained verses together with their emotion labels form the emotion marking verse set.
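The confidence-threshold cleaning step above can be sketched as follows. This is an illustrative sketch only; the threshold value 0.9 and the sample probability vectors are assumptions, not values from the patent.

```python
def filter_by_confidence(predictions, threshold=0.9):
    """Keep only verses whose top predicted emotion probability exceeds
    the threshold; discard ambiguous verses as potential label noise."""
    kept = []
    for verse, probs in predictions:
        best = max(range(len(probs)), key=lambda i: probs[i])
        if probs[best] > threshold:
            kept.append((verse, best))  # (verse, predicted emotion label)
    return kept

# Hypothetical classifier outputs over 5 emotion classes.
preds = [
    ("verse_a", [0.02, 0.95, 0.01, 0.01, 0.01]),  # confident -> retained
    ("verse_b", [0.30, 0.25, 0.20, 0.15, 0.10]),  # ambiguous -> removed
]
labeled = filter_by_confidence(preds, threshold=0.9)
```

Only the confidently classified verse survives, which is exactly the cleaning behavior described above.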
Referring to fig. 2, fig. 2 schematically illustrates a flow chart of training a verse emotion classification model and an emotion verse generation model according to an exemplary embodiment of the present disclosure. The initial verse data set is used to train the initial classification model to obtain the verse emotion classification model, and the verse emotion classification model then determines the emotion label corresponding to each verse to be marked.
According to some exemplary embodiments of the present disclosure, a verse to be annotated is obtained from a verse to be annotated dataset; inputting the verses to be marked into an encoder of the verse emotion classification model, and outputting hidden state vectors corresponding to the verses to be marked by the encoder; inputting the hidden state vector to a full-connection layer, and outputting a target emotion vector corresponding to the verse to be marked by the full-connection layer; and determining the emotion label of the verse to be marked according to the target emotion vector. The verse to be marked may be a verse not marked with an emotion tag obtained from the verse dataset to be marked. The encoder may be an encoding network that encodes the verses to be annotated. The hidden state vector may be a set of fixed-dimension vectors generated by an encoder after encoding a verse to be annotated. The target emotion vector may be an emotion vector corresponding to a verse to be annotated.
A verse to be marked is obtained from the verse data set to be marked and input into the verse emotion classification model, and the verse emotion classification model outputs the emotion label corresponding to the verse to be marked. For the specific process of determining the emotion label of a verse to be marked, referring to fig. 3, fig. 3 schematically illustrates a model structure diagram of obtaining emotion labels of verses by a verse emotion classification model according to an exemplary embodiment of the present disclosure. The present disclosure may employ a bi-directional long short-term memory (BiLSTM) network as the encoder to convert a verse of sentence length T into a set of fixed-dimension hidden state vectors h = (h_1, h_2, …, h_T). The bidirectional LSTM network includes a front-to-back encoding network and a back-to-front encoding network, so that the contributions of both the front and back portions of an ancient poem to its emotion can be better learned. After the hidden state vector h is obtained, it is input into a one-layer fully-connected linear network and converted into an emotion vector s = (s_1, s_2, s_3, s_4, s_5) of dimension 5, which represents the scores of the different emotions for the verse. A probability distribution over the emotions, p = (p_1, p_2, p_3, p_4, p_5), is then obtained through a normalized exponential function (softmax function), and the emotion value corresponding to the maximum probability in p can be taken as the emotion label of the verse to be marked. For example, when the input verse to be marked is "silence and yellow and rainy", the emotion label corresponding to the verse is "general negative".
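The classification head described above (hidden state → fully connected layer → softmax → argmax) can be sketched in a few lines of numpy. The BiLSTM encoder is replaced here by a random stand-in vector, and the five emotion label names are assumptions (the patent only names "general negative" explicitly).

```python
import numpy as np

# Assumed 5-way label set; only "general negative" appears in the text.
EMOTIONS = ["positive", "slightly positive", "neutral",
            "general negative", "negative"]

def softmax(z):
    """Normalized exponential function: scores -> probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(h, W, b):
    """Map the encoder hidden state h to the 5-dim emotion score vector
    s = W h + b via a fully connected layer, then softmax and argmax."""
    s = W @ h + b                 # emotion scores s = (s1, ..., s5)
    p = softmax(s)                # emotion probability distribution p
    return EMOTIONS[int(np.argmax(p))], p

rng = np.random.default_rng(0)
h = rng.normal(size=16)           # stand-in for the BiLSTM output
W, b = rng.normal(size=(5, 16)), np.zeros(5)
label, p = classify(h, W, b)
```

The returned label corresponds to the maximum entry of p, as described above.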
In step S130, training the initial verse generating model with the emotion markup verse set to generate an emotion verse generating model.
In some example embodiments of the present disclosure, the initial verse generation model may be a verse generation model that is not trained with emotion-labeling verse sets. The emotion verse generation model may be a verse generation model that may generate a corresponding target verse based on emotion tags.
After the emotion marking poem set is obtained, training the initial poem generation model by adopting the emotion marking poem set to generate an emotion poem generation model, wherein the emotion poem generation model can generate corresponding poems according to the input emotion labels. Referring to fig. 2, the initial verse generation model may be trained using a set of emotion markup verses and an emotion verse generation model may be generated.
In step S140, a target emotion is acquired, and the target emotion is input to the emotion verse generation model to generate a target verse corresponding to the target emotion.
In some example embodiments of the present disclosure, the target emotion may be an emotion value corresponding to the verse to be generated, e.g., the target emotion may be "positive", "negative", etc. The emotion verse generation model may be a verse generation model that generates a corresponding verse according to a target emotion, the input of the emotion verse generation model may include the target emotion, and the output may be a verse whose emotion mood is the target emotion. The target verse may be a verse generated by the emotion verse generation model based on the target emotion.
After the target emotion is acquired, it can be input into the emotion verse generation model; in addition, a corresponding ancient poem theme can be used as a further input, so that the emotion verse generation model generates the target verse according to both the target emotion and the ancient poem theme.
According to some exemplary embodiments of the present disclosure, the following steps are performed by the emotion verse generation model until a target verse is generated: determining the current moment and determining an input vector of the current moment; wherein the input vector includes a target emotion; acquiring a generated historical information vector, and generating a current information vector according to the input vector and the historical information vector at the current moment; the probability distribution of the candidate generated words is determined from the current information vector to determine the target generated words from the probability distribution. The current time may be a time corresponding to a current state in generating the target verse. The input vector may be the determined vector that was input to the decoder of the emotion verse generation model at the current time. The historical information vector can be the information vector which is determined by a memory unit in the emotion verse generation model and is relative to the current moment. The current information vector may be an information vector composed of an input vector at the current time and a history information vector. The candidate generated word may be a candidate word for the next output word to be generated. The target generated word may be a word generated by the emotion verse generation model according to all information at the current moment.
When generating a target poem by using the emotion poem generation model, generating target generation words word by word, and finally generating the target poem. The specific process is as follows: and determining the current time t and acquiring an input vector of the current time. Referring to fig. 4, fig. 4 schematically illustrates a model structure diagram of generating a target verse using an emotion verse generation model according to an exemplary embodiment of the present disclosure. The emotion verse generation model may include four parts: an encoder (encoder) 410, a decoder (decoder) 420, an attention header (attention head) 430, and an emotion header (emotion head) 440. Decoder 420 in emotion verse generation model is composed of first full-connection layer 421, memory unit 422, and second full-connection layer 423; the first full-connection layer 421 may be a full-connection layer+nonlinear activation layer (Full Connection Network, FC 1) responsible for summarizing input information, the memory unit 422 may be an LSTM unit responsible for information memory, and the second full-connection layer 423 may be a full-connection layer network FC2 responsible for outputting probability of a next word.
For the current time t, the input vector x_t of the current time can be obtained by summarizing the inputs through FC1; in addition, the historical information vector h_t of the current time t is obtained through the memory cell (LSTM cell). The input vector x_t and the historical information vector h_t at the current time t form the current information vector; the probability distribution of the candidate generated words can then be determined from the current information vector through FC2, so that the target generated word can be determined according to the probability distribution.
Because the emotion verse generation model takes the target emotion as one of the input variables for generating the target verse, the generated target verse is closely related to the target emotion, and different emotion inputs can be manually controlled to generate verses corresponding to different emotions. In addition, when a complete ancient poem is generated, the emotion of the poem title is judged first and the emotion of each verse is constrained according to the emotion of the title, which avoids the situation where the input is a negative title but positive target verses are generated, thereby effectively avoiding the problem of incoherent emotion in traditional ancient poem generation models.
According to some exemplary embodiments of the present disclosure, the above verse corresponding to the current time and the above semantic information of the above verse are determined; the latest generated word generated in history is acquired, and the context associated information generated by an attention model is determined; and the latest historical generated word, the historical information vector, the above semantic information, the context associated information, and the target emotion are taken as the input vector at the current time. The above verse may be the text that has already been generated at the current time; for example, when generating the first verse, the above verse may be the topic of the ancient poem to be generated, and after the first verse is generated, the above verse may be the topic together with the verses already generated. The above semantic information may be the semantic information corresponding to the above verse and may be represented by a context vector, abbreviated ctx. The latest generated word of the history may be, among all generated words, the one whose generation time is closest to the current time, i.e., the word generated at the previous time. The context associated information may be information obtained by the attention head in the emotion verse generation model computing attention over the above verse.
Referring to fig. 4, the encoder in the emotion verse generation model may employ a BiLSTM to encode the above verse sequence and obtain the above-content hidden vector ctx, which represents the semantic information of the above verse. The emotion head maps the five different emotion labels to five embedding vectors of the same dimension, each representing the information of one emotion; these embedding vectors are continuously learned and optimized during model training. The decoder generates the target verse word by word. For the current time t, the input information of the decoder includes the generated word y_{t-1} at the previous time, the already generated historical information vector h_{t-1}, the above semantic information ctx, the context associated information c_t obtained by the attention head, and the emotion vector (sentiment embedding, se) corresponding to the target emotion; the input vector at the current time t is obtained by integrating these through FC1 as x_t = FC1(y_{t-1}, h_{t-1}, ctx, c_t, se). Next, the new historical information vector is obtained through the memory unit as h_t = LSTM(x_t, h_{t-1}). Finally, the probability distribution of the next word is output by FC2 as p_t = FC2(h_t). Because the input at each time on the decoding side is the word actually output at the previous time, rather than a word generated iteratively, model training can be performed in parallel, which accelerates the training process.
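One decoding step — x_t = FC1(y_{t-1}, h_{t-1}, ctx, c_t, se), then h_t from the memory unit, then p_t = FC2(h_t) — can be sketched in numpy. This is a simplified illustration under stated assumptions: the memory unit is shown as a plain recurrent layer rather than a full LSTM cell, FC1 is a fully connected layer with tanh activation, and all dimensions and the 30-word vocabulary are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8       # toy hidden dimension
VOCAB = 30  # toy vocabulary size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fc(x, W, b):
    """Fully connected layer + nonlinear activation (stands in for FC1)."""
    return np.tanh(W @ x + b)

def decode_step(y_prev, h_prev, ctx, c_t, se, params):
    """One decoder step: summarize inputs with FC1, update the memory
    state (simplified recurrence in place of an LSTM cell), then output
    the next-word probability distribution with FC2."""
    W1, b1, Wr, br, W2, b2 = params
    x_t = fc(np.concatenate([y_prev, h_prev, ctx, c_t, se]), W1, b1)
    h_t = fc(np.concatenate([x_t, h_prev]), Wr, br)
    p_t = softmax(W2 @ h_t + b2)
    return h_t, p_t

params = (rng.normal(size=(D, 5 * D)), np.zeros(D),      # FC1
          rng.normal(size=(D, 2 * D)), np.zeros(D),      # memory unit
          rng.normal(size=(VOCAB, D)), np.zeros(VOCAB))  # FC2
y_prev, h_prev, ctx, c_t, se = (rng.normal(size=D) for _ in range(5))
h_t, p_t = decode_step(y_prev, h_prev, ctx, c_t, se, params)
```

The step returns the new memory state h_t together with a valid probability distribution p_t over the vocabulary, from which the target generated word is then sampled.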
According to some exemplary embodiments of the present disclosure, a word weight for each word in the above verse is determined from the historical information vector and the above semantic information, and a weighted average of the words in the above verse is determined based on the word weights. The word weight may be the weight assigned to each word of the above verse, and the weighted average may be the average of the word hidden vectors weighted by those word weights.
The attention head in the emotion verse generation model receives the historical information vector h_{t-1} from the decoder and performs an attention calculation with the above content information to obtain the degree of attention of the current semantic information to each word of the above content. The word weight of each word is shown in formula 1:

a_i = exp(e_i) / Σ_{j=1}^{T} exp(e_j)    (formula 1)

wherein e_i = h_{t-1}^T ctx_i, and ctx_i is the hidden vector of the i-th word of the above content. The attention head then outputs the weighted average of the hidden vectors of the words in the above content, i.e.

c_t = Σ_{i=1}^{T} a_i ctx_i.
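This dot-product attention (energies e_i = h_{t-1}^T ctx_i, softmax word weights a_i, weighted average c_t) can be sketched directly in numpy; the dimensions below are illustrative placeholders.

```python
import numpy as np

def attention(h_prev, ctx):
    """Dot-product attention over the above-content hidden vectors:
    e_i = h_{t-1}^T ctx_i, a = softmax(e), c_t = sum_i a_i * ctx_i."""
    e = ctx @ h_prev              # (T,) attention energies
    a = np.exp(e - e.max())
    a = a / a.sum()               # word weights (softmax of the energies)
    c_t = a @ ctx                 # weighted average of the word vectors
    return a, c_t

rng = np.random.default_rng(2)
ctx = rng.normal(size=(7, 16))    # 7 above-content words, 16-dim vectors
h_prev = rng.normal(size=16)
a, c_t = attention(h_prev, ctx)
```

The word weights a sum to 1 and c_t lives in the same space as the word vectors, so it can be fed into FC1 as the context associated information described above.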
According to some exemplary embodiments of the present disclosure, a preconfigured word count threshold is obtained; the word number threshold is used for determining the candidate number of the target candidate words; sorting all initial candidate words according to probability distribution, and determining candidate quantity target candidate words according to word quantity threshold; and acquiring a preconfigured cumulative probability threshold value, and determining a target generated word from the target candidate word according to the cumulative probability threshold value. The word count threshold may be a pre-configured threshold employed for determining the number of candidates of the target candidate word. The number of candidates may be the number of target candidate words determined. The initial candidate word may be a candidate word corresponding to the word generated at the current time. The target candidate word may be a candidate number of candidate words having the highest probability value in the probability distribution. The cumulative probability threshold value may be a cumulative probability value of the target candidate word having the highest probability value.
At each time in the prediction stage of generating a verse, the target generated word (i.e. the word at the current time) may be generated by probability sampling, and each verse is generated serially. For example, a pruning strategy may be applied in the probability sampling used to generate the target generated word. Specifically, two thresholds may be set to prune the word probabilities generated at each time: (1) the word count threshold, which determines the number of candidates of target candidate words; that is, the initial candidate words with the highest probability values in the probability distribution are selected as target candidate words, so that sampling is limited to relevant candidates. For example, the word count threshold may be set to 5, 6, 8, or the like. (2) The cumulative probability threshold, which may be set to 0.95, for example: after sorting the target candidate words by probability, only the highest-probability words whose cumulative probability does not exceed 0.95 are kept as target candidate words, so that an overly deterministic model output can be constrained through the cumulative probability threshold.
Target candidate words are obtained under the constraint of these two thresholds, and the final word at the current time in the verse, i.e. the target generated word, is generated from them by random sampling. Since the target generated words are produced by random sampling, each generated verse is ensured to be unique.
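The two-threshold pruning plus random sampling described above can be sketched in stdlib Python. This is an illustrative sketch: it keeps the top-k candidates, trims them to the prefix around the cumulative probability threshold, and samples from what remains; the exact boundary handling at the threshold is an assumption.

```python
import random

def prune_and_sample(probs, k=5, cum_threshold=0.95, seed=None):
    """Prune the word distribution with the word count threshold k and
    the cumulative probability threshold, then randomly sample the
    target generated word from the surviving target candidate words."""
    # (1) word count threshold: keep the k highest-probability candidates
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    # (2) cumulative probability threshold: stop once the mass is reached
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= cum_threshold:
            break
    weights = [probs[i] for i in kept]
    return random.Random(seed).choices(kept, weights=weights)[0]

# Toy distribution over a 7-word vocabulary (indices 0..6).
probs = [0.40, 0.30, 0.15, 0.08, 0.04, 0.02, 0.01]
word = prune_and_sample(probs, k=5, cum_threshold=0.95, seed=0)
```

When the model is very confident (one word carries almost all the mass), the cumulative threshold shrinks the candidate set to that single word, which is the constraining behavior described above.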
In summary, an initial poem data set is obtained, and training is performed on the initial classification model according to the initial poem data set to obtain a poem emotion classification model; the method comprises the steps of obtaining a verse data set to be marked, inputting the verse data set to be marked into a verse emotion classification model, obtaining emotion labels of all verses in the verse data set to be marked, and generating an emotion marking verse set according to all verses and the emotion labels of all verses; training the initial poem generation model by adopting the emotion marking poem set to generate an emotion poem generation model; and acquiring the target emotion, and inputting the target emotion into the emotion verse generation model to generate a target verse corresponding to the target emotion. According to the emotion-based verse generating method, on one hand, the initial verse data set can be trained to obtain a verse emotion classification model, the verse emotion classification model is used for marking the verse data set to be marked, and the emotion marking verse set with emotion labels can be obtained. On the other hand, the emotion label verse set with the emotion labels is adopted to generate the emotion verse generation model, so that the interpretability and the interveneability of the generated model are higher. On the other hand, the target emotion is used as a verse to generate an influence factor, so that the target verse conforming to the target emotion can be generated, and the generated target verse has emotion controllability, thereby improving the interactivity of users. In yet another aspect, a coherent target verse is generated by a probabilistic sampling scheme such that the generated verse is unique while generating the target verse from a custom target emotion.
It should be noted that although the steps of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order or that all of the illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In addition, in the present exemplary embodiment, a verse generating device based on emotion is also provided. Referring to fig. 5, the emotion-based verse generation apparatus 500 may include: the system comprises an acquisition module 510, a labeling poem set generation module 520, a poem model generation module 530 and a poem generation module 540.
Specifically, the obtaining module 510 is configured to obtain a verse dataset to be marked, and obtain a presupposed verse emotion classification model; the labeling verse set generating module 520 is configured to input a verse data set to be labeled into a verse emotion classification model, obtain emotion labels of the verses in the verse data set to be labeled, and generate an emotion labeling verse set according to the verses and the emotion labels of the verses; the verse model generating module 530 is configured to train the initial verse generating model by using the emotion labeling verse set to generate an emotion verse generating model; the verse generating module 540 is configured to obtain a target emotion, and input the target emotion to the emotion verse generating model to generate a target verse corresponding to the target emotion.
The emotion-based verse generating device 500 may input the verse data set to be annotated into the verse emotion classification model, obtain emotion labels of the verses in the verse data set to be annotated, generate the emotion annotation verse set based on the verses and the emotion labels, perform model training on the initial verse generating model according to the emotion annotation verse set, and generate the emotion-controllable target verse.
In an exemplary embodiment of the present disclosure, the emotion-based verse generating device further includes a model generating module for acquiring an initial verse dataset and acquiring an initial classification model; inputting an initial verse data set into an initial classification model, acquiring a verse emotion vector output by the initial classification model, and determining probability distribution of each emotion label in the verse emotion vector; determining a loss function according to each emotion label and probability distribution of each emotion label; optimizing the initial classification model according to the loss function to obtain the verse emotion classification model.
In an exemplary embodiment of the present disclosure, the labeling verse set generating module includes an emotion tag determining unit, configured to obtain a verse to be labeled from a verse data set to be labeled; inputting the verses to be marked into an encoder of the verse emotion classification model, and outputting hidden state vectors corresponding to the verses to be marked by the encoder; inputting the hidden state vector to a full-connection layer, and outputting a target emotion vector corresponding to the verse to be marked by the full-connection layer; and determining the emotion label of the verse to be marked according to the target emotion vector.
In one exemplary embodiment of the present disclosure, the verse generation module includes a generated word generation sub-module for performing the following steps by the emotion verse generation model until a target verse is generated: determining the current moment and determining an input vector of the current moment; wherein the input vector includes a target emotion; acquiring a generated historical information vector, and generating a current information vector according to the input vector and the historical information vector at the current moment; the probability distribution of the candidate generated words is determined from the current information vector to determine the target generated words from the probability distribution.
In an exemplary embodiment of the present disclosure, the generating word generating submodule includes an input vector determining unit for determining a previous sentence corresponding to a current time and the previous semantic information of the previous sentence; acquiring a history latest generated word generated by history and determining context associated information generated by an attention model; and taking the latest historical generated word, the historical information vector, the context semantic information, the context associated information and the target emotion as input vectors at the current moment.
In an exemplary embodiment of the present disclosure, the input vector determination unit includes an average value determination subunit for determining a word weight for each word in the above verse from the historical information vector and the above semantic information; a weighted average of each word in the above verses is determined based on the weights of the words.
In one exemplary embodiment of the present disclosure, the generated word generation submodule includes a generated word generation unit for acquiring a preconfigured word count threshold; the word number threshold is used for determining the candidate number of the target candidate words; sorting all initial candidate words according to probability distribution, and determining candidate quantity target candidate words according to word quantity threshold; and acquiring a preconfigured cumulative probability threshold value, and determining a target generated word from the target candidate word according to the cumulative probability threshold value.
The specific details of the virtual module of each emotion-based verse generating device in the foregoing description are already described in detail in the corresponding emotion-based verse generating method, and therefore will not be repeated here.
It should be noted that although several modules or units of emotion-based verse generating device are mentioned in the above detailed description, this division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to such an embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, a bus 630 connecting the different system components (including the memory unit 620 and the processing unit 610), a display unit 640.
Wherein the storage unit stores program code that is executable by the processing unit 610 such that the processing unit 610 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 621 and/or cache memory 622, and may further include Read Only Memory (ROM) 623.
The storage unit 620 may include a program/utility 624 having a set (at least one) of program modules 625, such program modules 625 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 670 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any devices (e.g., routers, modems, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, electronic device 600 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 660. As shown, network adapter 660 communicates with other modules of electronic device 600 over bus 630. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to fig. 7, a program product 700 for implementing the above-described method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's computing device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter cases, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computing device (for example, through the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An emotion-based verse generation method, comprising:
acquiring a verse data set to be annotated, and acquiring a pre-constructed verse emotion classification model;
inputting the verse data set to be annotated into the verse emotion classification model to obtain an emotion label for each verse in the verse data set to be annotated, and generating an emotion-annotated verse set from the verses and their emotion labels;
training an initial verse generation model on the emotion-annotated verse set to generate an emotion verse generation model; and
acquiring a target emotion and a historical information vector, wherein the target emotion is comprised in an input vector at a current moment, and inputting the input vector and the historical information vector into the emotion verse generation model to generate a target verse corresponding to the target emotion, wherein the input vector and the historical information vector are used to determine a probability distribution over target generated words in the target verse.
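The corpus-labeling step recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: `classify` is a hypothetical stand-in for the trained verse emotion classification model, and the emotion label set is an assumed example.

```python
# Sketch of the labeling step: a trained classifier assigns an emotion
# label to every verse, yielding the emotion-annotated verse set.
EMOTIONS = ["joy", "sorrow", "longing", "serenity"]  # assumed label set

def classify(verse):
    # Hypothetical stand-in for the real classifier; a deterministic
    # character-code sum keeps this sketch runnable without a model.
    return EMOTIONS[sum(map(ord, verse)) % len(EMOTIONS)]

def build_annotated_set(verses):
    # Pair each verse with its predicted emotion label.
    return [(verse, classify(verse)) for verse in verses]
```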
2. The method of claim 1, wherein prior to the acquiring the pre-constructed verse emotion classification model, the method further comprises:
acquiring an initial verse data set and an initial classification model;
inputting the initial verse data set into the initial classification model, obtaining a target emotion vector output by the initial classification model, and determining a probability distribution over the emotion labels in the target emotion vector;
determining a cross-entropy loss function according to the emotion labels and their probability distribution; and
optimizing the initial classification model by gradient descent according to the cross-entropy loss function to obtain the verse emotion classification model.
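The training procedure of claim 2 (softmax output, cross-entropy loss, gradient descent) can be sketched in NumPy. This is a generic softmax classifier on toy feature vectors, not the patent's actual network:

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-probability of the true emotion label.
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def train(X, y, n_classes, lr=0.5, steps=200):
    # Plain gradient descent on the cross-entropy loss.
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(steps):
        probs = softmax(X @ W)
        grad = probs.copy()
        grad[np.arange(len(y)), y] -= 1.0   # d(loss)/d(logits)
        W -= lr * X.T @ grad / len(y)
    return W
```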
3. The method of claim 1, wherein the inputting the verse data set to be annotated into the verse emotion classification model to obtain emotion labels of the verses in the verse data set to be annotated comprises:
acquiring a verse to be annotated from the verse data set to be annotated;
inputting the verse to be annotated into an encoder of the verse emotion classification model, the encoder outputting a hidden state vector corresponding to the verse to be annotated;
inputting the hidden state vector into a fully connected layer, the fully connected layer outputting a target emotion vector corresponding to the verse to be annotated; and
determining the emotion label of the verse to be annotated according to the target emotion vector.
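The encoder-plus-fully-connected-layer pipeline of claim 3 can be sketched with a minimal recurrent encoder. The RNN form, all dimensions, weights, and the label set are assumptions of this illustration:

```python
import numpy as np

def encode(embeddings, Wx, Wh):
    # Minimal RNN encoder: fold the verse's word embeddings into a
    # single hidden state vector.
    h = np.zeros(Wh.shape[0])
    for x in embeddings:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def classify_verse(embeddings, Wx, Wh, Wfc, labels):
    h = encode(embeddings, Wx, Wh)              # hidden state vector
    emotion_vec = Wfc @ h                       # fully connected layer
    return labels[int(np.argmax(emotion_vec))]  # emotion label
```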
4. The method of claim 1, wherein the inputting the input vector and the historical information vector into the emotion verse generation model to generate the target verse corresponding to the target emotion comprises:
executing the following steps with the emotion verse generation model until the target verse is generated:
determining a current moment and determining the input vector of the current moment;
acquiring the generated historical information vector, and generating a current information vector according to the input vector of the current moment and the historical information vector; and
determining a probability distribution over candidate generated words according to the current information vector, so as to determine a target generated word according to the probability distribution.
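One iteration of the generation loop in claim 4 (input vector plus historical information vector → current information vector → probability distribution → generated word) might look like the following; the RNN-style update and the greedy word choice are assumptions of this sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_step(x_t, h_prev, Wx, Wh, Wout, vocab):
    # Fuse the current input vector with the historical information
    # vector into the current information vector.
    h_t = np.tanh(Wx @ x_t + Wh @ h_prev)
    probs = softmax(Wout @ h_t)          # distribution over candidates
    word = vocab[int(np.argmax(probs))]  # greedy pick for this sketch
    return word, h_t, probs
```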
5. The method of claim 4, wherein the determining the input vector of the current moment comprises:
determining context semantic information of a preceding verse corresponding to the current moment;
acquiring a most recently generated historical word, and determining context-associated information generated by an attention model; and
taking the most recently generated historical word, the historical information vector, the context semantic information, the context-associated information, and the target emotion as the input vector of the current moment.
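The input-vector assembly of claim 5 is, in typical implementations, a concatenation of the listed components; concatenation is an assumption here, since the claim does not fix the combination operator:

```python
import numpy as np

def build_input_vector(last_word_vec, history_vec, context_sem_vec,
                       context_assoc_vec, emotion_vec):
    # Concatenate the five components named in claim 5 into the
    # input vector for the current moment.
    return np.concatenate([last_word_vec, history_vec, context_sem_vec,
                           context_assoc_vec, emotion_vec])
```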
6. The method of claim 5, wherein the context-associated information comprises a weighted average over the words in the preceding verse, and wherein the determining the context-associated information generated by the attention model comprises:
determining a word weight for each word in the preceding verse according to the historical information vector and the context semantic information; and
determining the weighted average over the words in the preceding verse according to the word weights.
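The attention computation of claim 6 reduces to scoring each word, normalising the scores into word weights, and taking the weighted average. Dot-product scoring is an assumption of this sketch; the claim does not fix the score function:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_context(history_vec, word_states):
    # Score each word of the preceding verse against the historical
    # information vector, then return the weighted average of the words.
    scores = word_states @ history_vec  # one score per word
    weights = softmax(scores)           # word weights
    return weights @ word_states        # weighted average vector
```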
7. The method of claim 4, wherein the determining a target generated word according to the probability distribution comprises:
acquiring a preconfigured word-count threshold, wherein the word-count threshold is used to determine the number of target candidate words;
sorting the initial candidate words according to the probability distribution, and determining the number of target candidate words according to the word-count threshold; and
acquiring a preconfigured cumulative probability threshold, and determining the target generated word from the target candidate words according to the cumulative probability threshold.
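Claim 7 describes what is commonly called combined top-k / nucleus (top-p) sampling: keep at most k candidates, truncate further at a cumulative probability threshold, renormalise, and sample. A sketch under that reading:

```python
import numpy as np

def sample_word(probs, k, p, rng):
    # Keep the k most probable candidates (word-count threshold), then
    # the smallest prefix whose cumulative probability reaches p
    # (cumulative probability threshold), renormalise, and sample the
    # target generated word's index.
    order = np.argsort(probs)[::-1][:k]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    kept = order[:cutoff]
    renorm = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=renorm))
```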
8. An emotion-based verse generation device, comprising:
an acquisition module, configured to acquire a verse data set to be annotated and acquire a pre-constructed verse emotion classification model;
an annotated verse set generation module, configured to input the verse data set to be annotated into the verse emotion classification model to obtain an emotion label for each verse in the verse data set to be annotated, and to generate an emotion-annotated verse set from the verses and their emotion labels;
a verse model generation module, configured to train an initial verse generation model on the emotion-annotated verse set to generate an emotion verse generation model; and
a verse generation module, configured to acquire a target emotion and a historical information vector, wherein the target emotion is comprised in an input vector at a current moment, and to input the input vector and the historical information vector into the emotion verse generation model to generate a target verse corresponding to the target emotion, wherein the input vector and the historical information vector are used to determine a probability distribution over target generated words in the target verse.
9. An electronic device, comprising:
a processor; and
a memory having stored thereon computer-readable instructions which, when executed by the processor, implement the emotion-based verse generation method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the emotion-based verse generation method of any one of claims 1 to 7.
CN202011155029.9A 2020-10-26 2020-10-26 Verse generation method and device based on emotion, electronic equipment and storage medium Active CN112199502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011155029.9A CN112199502B (en) 2020-10-26 2020-10-26 Verse generation method and device based on emotion, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112199502A CN112199502A (en) 2021-01-08
CN112199502B true CN112199502B (en) 2024-03-15

Family

ID=74011546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011155029.9A Active CN112199502B (en) 2020-10-26 2020-10-26 Verse generation method and device based on emotion, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112199502B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010717B (en) * 2021-04-26 2022-04-22 中国人民解放军国防科技大学 Image verse description generation method, device and equipment
CN113705206B (en) * 2021-08-13 2023-01-03 北京百度网讯科技有限公司 Emotion prediction model training method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563622A (en) * 2018-05-04 2018-09-21 Tsinghua University Quatrain generation method and device with varied styles
CN110196908A (en) * 2019-04-17 2019-09-03 Shenzhen OneConnect Smart Technology Co., Ltd. Data classification method, device, computer device and storage medium
WO2019242001A1 (en) * 2018-06-22 2019-12-26 Microsoft Technology Licensing, Llc Method, computing device and system for generating content
CN111143564A (en) * 2019-12-27 2020-05-12 Beijing Baidu Netcom Science and Technology Co., Ltd. Unsupervised multi-target chapter-level emotion classification model training method and device
CN111368078A (en) * 2020-02-28 2020-07-03 Tencent Technology (Shenzhen) Co., Ltd. Model training method, text classification method and device, and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Sentiment-Controllable Chinese Poetry Generation"; Huimin Chen et al.; Proceedings of the 28th International Joint Conference on Artificial Intelligence; full text *
"Customized Classical Poetry Generation Based on a Seq2Seq Model"; Wang Lewei et al.; Journal of Frontiers of Computer Science and Technology; full text *
"Research on a Label Model for Classical Poetry Sentences"; Zhang Xin; Lu Yingjun; Li Lirui; Deng Zhonghua; Journal of Information Resources Management (No. 02); full text *
"A Poetry Generation Model with Richer Emotional Color"; Liao Rongfan; Shen Xizhong; Liu Shuang; Computer Systems & Applications (No. 05); full text *


Similar Documents

Publication Publication Date Title
CN107832299B (en) Title rewriting processing method and device based on artificial intelligence and readable medium
CN112069302B (en) Training method of conversation intention recognition model, conversation intention recognition method and device
CN113205817B (en) Speech semantic recognition method, system, device and medium
CN110163181B (en) Sign language identification method and device
CN113158665A (en) Method for generating text abstract and generating bidirectional corpus-based improved dialog text
CN110990555B (en) End-to-end retrieval type dialogue method and system and computer equipment
CN112199502B (en) Verse generation method and device based on emotion, electronic equipment and storage medium
JP7229345B2 (en) Sentence processing method, sentence decoding method, device, program and device
WO2019235103A1 (en) Question generation device, question generation method, and program
CN110188175A (en) A kind of question and answer based on BiLSTM-CRF model are to abstracting method, system and storage medium
CN113987169A (en) Text abstract generation method, device and equipment based on semantic block and storage medium
CN110413743A (en) A kind of key message abstracting method, device, equipment and storage medium
CN112507695A (en) Text error correction model establishing method, device, medium and electronic equipment
CN111598979B (en) Method, device and equipment for generating facial animation of virtual character and storage medium
CN111563158A (en) Text sorting method, sorting device, server and computer-readable storage medium
CN114445832A (en) Character image recognition method and device based on global semantics and computer equipment
CN113705315A (en) Video processing method, device, equipment and storage medium
CN114091452A (en) Adapter-based transfer learning method, device, equipment and storage medium
CN112183062B (en) Spoken language understanding method based on alternate decoding, electronic equipment and storage medium
CN116680575B (en) Model processing method, device, equipment and storage medium
CN112257432A (en) Self-adaptive intention identification method and device and electronic equipment
CN110362734A (en) Text recognition method, device, equipment and computer readable storage medium
CN113704466B (en) Text multi-label classification method and device based on iterative network and electronic equipment
CN115587184A (en) Method and device for training key information extraction model and storage medium thereof
CN114973421A (en) Dual transformation based semi-supervised sign language generation method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant