CN113553822B - Ancient poetry generating model training, ancient poetry generating method, equipment and storage medium - Google Patents


Info

Publication number
CN113553822B
CN113553822B (application CN202110869571.9A)
Authority
CN
China
Prior art keywords
ancient poetry
input
different formats
word
poetry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110869571.9A
Other languages
Chinese (zh)
Other versions
CN113553822A (en
Inventor
欧文杰
林悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110869571.9A priority Critical patent/CN113553822B/en
Publication of CN113553822A publication Critical patent/CN113553822A/en
Application granted granted Critical
Publication of CN113553822B publication Critical patent/CN113553822B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/186Templates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/103Formatting, i.e. changing of presentation of documents
    • G06F40/117Tagging; Marking up; Designating a block; Setting of attributes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The application provides a training method, a device and a storage medium for an ancient poetry generation model. The training method comprises the following steps: inputting an ancient poetry dataset into an input module of the ancient poetry generation model, the dataset comprising a plurality of ancient poems of different formats together with global control information and format control information corresponding to each poem; mapping the global control information to a plurality of individual words and/or identifiers, and explicitly constructing global control templates under different formats for each word or identifier; for each character in a poem, explicitly constructing format control templates under different formats based on the format control information; and inputting the global control templates and format control templates into a decoding module of the ancient poetry generation model to train the decoding module. Because ancient poetry data in a variety of formats can be mixed for model training, and the format of the poetry to be generated can be specified manually during generation, user interactivity and the flexible controllability of the poetry are improved.

Description

Ancient poetry generating model training, ancient poetry generating method, equipment and storage medium
Technical Field
The application relates to the technical field of ancient poetry generation, and in particular to a training method for an ancient poetry generation model, an ancient poetry generation method, a device and a storage medium.
Background
In recent years, with continuing research into artificial intelligence and natural language processing, more and more deep learning techniques have been put into practice. Chinese classical poetry is a treasure of traditional Chinese culture; its concise, elegant lines and neat, antithetical sentence patterns are widely appreciated. In games, generating matching classical poems for particular dates, themes, or even individual players in a personalized way can add freshness and interaction for players. In some marketing scenarios, generating customized poems according to players' needs encourages players to share them spontaneously, which reduces marketing costs and greatly improves promotion efficiency.
Deep-learning-based classical poetry generation has also matured in recent years, but conventional text generation models based on recurrent neural networks (RNNs) have difficulty explicitly controlling the format of the generated text, such as the length, tonal pattern (pingze) and rhyme of each verse. A conventional Transformer-based encoder-decoder architecture can model text format information by adding different position encodings, so that length can be controlled at generation time. These methods are not well suited to classical poetry generation for the following reasons: a separate model must be trained for each format requirement, wasting resources; when an encoder-decoder architecture is used, the overall computation is too heavy to respond quickly to users' real-time requests; and formats such as rhyme and tonal pattern still cannot be well controlled explicitly.
Disclosure of Invention
In view of this, the present application aims to provide a training method for an ancient poetry generation model, together with an ancient poetry generation method, device and storage medium, which allow ancient poetry data in a variety of formats to be mixed during model training, and allow the format of the poetry to be generated to be controlled manually during generation, thereby improving user interactivity and the flexible controllability of the poetry.
In a first aspect, an embodiment of the present application provides a training method of an ancient poetry generating model, where the training method includes:
acquiring a data set of the ancient poetry, and acquiring a pre-constructed generation model of the ancient poetry; the ancient poetry data set comprises a plurality of ancient poetry of different formats, global control information and format control information corresponding to each ancient poetry;
inputting the ancient poetry dataset into an input module of the ancient poetry generating model;
for each ancient poem, extracting the global control information and format control information corresponding to that poem;
mapping the global control information corresponding to the poem to a plurality of individual words and/or identifiers, and explicitly constructing global control templates under different formats for each word or identifier;
for each character in the poem, explicitly constructing format control templates under different formats based on the format control information corresponding to the poem;
and inputting the global control templates and the format control templates corresponding to the ancient poetry in different formats into a decoding module of the ancient poetry generating model, and training the decoding module.
In one possible implementation, the global control information includes at least one of: theme information and emotion information.
In one possible implementation, the step of mapping the global control information corresponding to the ancient poem to a plurality of individual words and/or identifiers includes:
mapping the theme information corresponding to the poem to a plurality of individual words; and/or
mapping the emotion information corresponding to the poem to a plurality of individual identifiers.
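As a concrete illustration of this mapping step, the sketch below splits a theme keyword into individual characters and maps an emotion label to special identifier tokens. The emotion-to-identifier table and the function name are illustrative assumptions rather than the patent's actual vocabulary; only the "generally negative" → S3/S2 pairing follows the example given for Fig. 1 later in the description.

```python
# Hypothetical label table: emotion level -> identifier tokens. The pairs
# are assumptions, except "generally negative" -> S3/S2 (from the Fig. 1 example).
EMOTION_IDS = {
    "very negative": ["S4", "S3"],
    "generally negative": ["S3", "S2"],
    "neutral": ["S2", "S1"],
    "generally positive": ["S1", "S0"],
    "very positive": ["S0", "S0"],
}

def map_global_control(theme: str, emotion: str):
    """Map theme text to per-character tokens and emotion to identifier tokens."""
    theme_tokens = list(theme)              # e.g. "明月" -> one token per character
    emotion_tokens = EMOTION_IDS[emotion]   # e.g. "generally negative" -> S3, S2
    return theme_tokens + emotion_tokens

tokens = map_global_control("明月", "generally negative")
print(tokens)  # ['明', '月', 'S3', 'S2']
```

Each resulting word or identifier then receives its own set of global control templates, as described above.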
In one possible implementation, the format control information includes at least one of: tonal (pingze) information, rhyme information, verse length and verse count.
In one possible implementation, the step of explicitly constructing, for each character in the ancient poem, format control templates under different formats based on the format control information corresponding to the poem includes:
for each character in the poem, explicitly constructing a format control template representing the tonal pattern based on the tonal (pingze) information corresponding to the poem; and/or
for each character in the poem, explicitly constructing a format control template representing the rhyme based on the rhyme information corresponding to the poem; and/or
for each character in the poem, explicitly constructing a format control template representing the relative position and/or the global position based on the corresponding verse length; and/or
for each character in the poem, explicitly constructing a format control template representing the verse type based on the corresponding verse count.
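The per-character construction above can be sketched as follows. This is an illustrative assumption about the data layout only: the field names loosely mirror the CHAR/SPOS/POS/Pingze/TYPE channels of Fig. 1, the tone and rhyme-group lookups are stubbed out, and the rule that even-numbered verses rhyme is a simplification of actual prosody.

```python
# Sketch: build one format-control row per character across all verses of a poem.
# Tone/rhyme lexicon lookups are stubbed; "odd line_idx rhymes" is a simplification.
def build_format_template(poem_lines):
    """Return one control row per character, covering position, length and rhyme slot."""
    rows, global_pos = [], 0
    for line_idx, line in enumerate(poem_lines):
        for rel_pos, ch in enumerate(line):
            rows.append({
                "char": ch,
                "rel_pos": rel_pos,        # relative position inside the verse (SPOS)
                "global_pos": global_pos,  # position in the whole poem (POS)
                "line_len": len(line),     # drives explicit length control
                "line_idx": line_idx,      # with the verse count -> verse TYPE
                # simplified: last character of every second verse is a rhyme slot
                "is_rhyme_slot": rel_pos == len(line) - 1 and line_idx % 2 == 1,
            })
            global_pos += 1
    return rows

rows = build_format_template(["床前明月光", "疑是地上霜"])
```

Each row would then be converted into the corresponding format control vectors before being summed for the decoder.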
In one possible implementation, the step of inputting the global control templates and the format control templates corresponding to the ancient poem under different formats into the decoding module of the ancient poetry generation model includes:
for each word or identifier corresponding to the global control information, converting the explicitly constructed global control templates under different formats into global control vectors under different formats;
adding the global control vectors under the different formats to obtain a target global control vector corresponding to each word or identifier;
for each character in the poem, converting the explicitly constructed format control templates under different formats into format control vectors under different formats;
adding the word vector of each previous time step and the format control vectors, under the different formats, of the character to be generated at the current time step, to obtain a target format control vector corresponding to the current time step in the poem;
and inputting the target global control vector corresponding to each word or identifier and the target format control vector corresponding to the current time step into the decoding module of the ancient poetry generation model, and autoregressively decoding the predicted text of the character to be generated at the current time step.
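The vector summation described in these steps is just an elementwise addition of the per-format control vectors. The minimal sketch below shows the data flow with plain Python lists standing in for embedding tensors; the dimension and the count of five format channels are illustrative.

```python
# Sketch of the summation feeding the decoder: a token's control vectors across
# formats are summed, then added to the previous word vector at each step.
def add_vectors(vectors):
    """Elementwise sum of equally sized vectors."""
    return [sum(components) for components in zip(*vectors)]

def target_global_vector(per_format_vectors):
    """Target global control vector for one word/identifier (sum over formats)."""
    return add_vectors(per_format_vectors)

def decoder_input(prev_word_vec, current_format_vectors):
    """Previous word vector + the current step's format control vectors."""
    return add_vectors([prev_word_vec] + current_format_vectors)

dim = 4
fmt_vecs = [[float(k)] * dim for k in range(5)]  # stand-ins for 5 format channels
x = decoder_input([0.0] * dim, fmt_vecs)
print(x)  # [10.0, 10.0, 10.0, 10.0]
```

In a real model these would be learned embedding lookups and tensor additions; the arithmetic is the same.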
In a second aspect, an embodiment of the present application further provides a method for generating ancient poetry, where the method includes:
acquiring input global control information and/or input format control information matching a user's requirements;
inputting the input global control information and/or the input format control information into the trained ancient poetry generation model;
mapping, by an input module in the ancient poetry generation model, the input global control information to a plurality of individual words and/or identifiers, and matching input global control templates under different formats for each word or identifier;
for each character of the ancient poem to be generated, matching input format control templates under different formats based on the input format control information;
and inputting the input global control templates under the different formats and the input format control templates of each character under the different formats into a decoding module in the ancient poetry generation model, and autoregressively decoding the predicted text of the next character, thereby generating the ancient poem.
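The generation procedure above is a standard autoregressive loop over the positions of the format template. The sketch below makes that loop concrete; `decode_step` is a hypothetical stand-in for the model's decoder call, not the patent's implementation, and here it simply echoes the template character for demonstration.

```python
# Sketch of autoregressive generation: at each step the decoder receives the
# global control information plus the current position's format control row,
# and emits the next character, conditioned on what was generated so far.
def generate(decode_step, global_ctrl, format_ctrls, max_len):
    """Decode characters one at a time until the format template is exhausted."""
    generated = []
    for t in range(min(max_len, len(format_ctrls))):
        next_char = decode_step(global_ctrl, format_ctrls[t], generated)
        generated.append(next_char)
    return "".join(generated)

# Toy decode_step that echoes the template character, for demonstration only.
demo = generate(lambda g, f, prev: f["char"], None,
                [{"char": c} for c in "春眠不觉晓"], max_len=10)
print(demo)  # 春眠不觉晓
```

Because the loop is bounded by the format template, the length of the generated verse is controlled explicitly rather than left to the model.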
In one possible implementation, the input global control information includes at least one of: inputting theme information and emotion information.
In one possible implementation, the step of mapping, by the input module in the ancient poetry generation model, the input global control information to a plurality of individual words and/or identifiers includes:
mapping the input theme information to a plurality of individual words through the input module in the ancient poetry generation model; and/or
mapping the input emotion information to a plurality of individual identifiers through the input module in the ancient poetry generation model.
In one possible implementation, the input format control information includes at least one of: input tonal (pingze) information, input rhyme information, input verse length and input verse count.
In one possible implementation, the step of matching, for each character of the ancient poem to be generated, input format control templates under different formats based on the input format control information includes:
for each character of the poem to be generated, matching an input format control template representing the tonal pattern based on the input tonal (pingze) information; and/or
for each character of the poem to be generated, matching an input format control template representing the rhyme based on the input rhyme information; and/or
for each character of the poem to be generated, matching an input format control template representing the relative position and/or the global position based on the input verse length; and/or
for each character of the poem to be generated, matching an input format control template representing the verse type based on the input verse count.
In one possible implementation, the step of inputting the input global control templates under different formats and the input format control templates of each character under different formats into the decoding module in the ancient poetry generation model to autoregressively decode the predicted text of the next character includes:
for each word or identifier corresponding to the input global control information, converting the matched input global control templates under different formats into input global control vectors under different formats;
adding the input global control vectors under the different formats to obtain a target input global control vector corresponding to each word or identifier;
for each character of the poem to be generated, converting the matched input format control templates under different formats into input format control vectors under different formats;
adding the word vector of each previous time step and the input format control vectors, under the different formats, of the character to be generated at the current time step, to obtain a target input format control vector corresponding to the current time step;
and inputting the target input global control vector corresponding to each word or identifier and the target input format control vector corresponding to the current time step into the decoding module of the ancient poetry generation model, and autoregressively decoding the predicted text of the character to be generated at the current time step.
In a third aspect, an embodiment of the present application further provides a training device for generating a model by using ancient poetry, where the training device includes:
the ancient poetry acquisition module is used for acquiring an ancient poetry dataset and acquiring a pre-constructed ancient poetry generation model; the ancient poetry dataset comprises a plurality of ancient poems of different formats, and the global control information and format control information corresponding to each poem;
the ancient poetry input module is used for inputting the ancient poetry dataset into the input module of the ancient poetry generation model;
the information extraction module is used for extracting global control information and format control information corresponding to each ancient poetry aiming at each ancient poetry;
the global construction module is used for mapping the global control information corresponding to the ancient poem to a plurality of individual words and/or identifiers, and explicitly constructing global control templates under different formats for each word or identifier;
the format construction module is used for explicitly constructing, for each character in the poem, format control templates under different formats based on the format control information corresponding to the poem;
the training module is used for inputting the global control templates and the format control templates corresponding to the poem under the different formats into the decoding module of the ancient poetry generation model, and training the decoding module.
In a fourth aspect, an embodiment of the present application further provides an ancient poetry generating device, where the generating device includes:
the information acquisition module is used for acquiring input global control information and/or input format control information matching a user's requirements;
the information input module is used for inputting the input global control information and/or the input format control information into the trained ancient poetry generation model;
the global matching module is used for mapping, through the input module in the ancient poetry generation model, the input global control information to a plurality of individual words and/or identifiers, and matching input global control templates under different formats for each word or identifier;
the format matching module is used for matching, for each character of the ancient poem to be generated, input format control templates under different formats based on the input format control information;
the generation module is used for inputting the input global control templates under the different formats and the input format control templates of each character under the different formats into the decoding module in the ancient poetry generation model, and autoregressively decoding the predicted text of the next character, thereby generating the ancient poem.
In a fifth aspect, embodiments of the present application further provide an electronic device, including: a processor, a storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over a bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method in any of the possible implementations of the first or second aspects.
In a sixth aspect, the present embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the possible implementations of the first or second aspects.
The embodiment of the application provides a training method for an ancient poetry generation model. First, an ancient poetry dataset and a pre-constructed ancient poetry generation model are acquired; the dataset comprises a plurality of ancient poems of different formats together with the global control information and format control information corresponding to each poem. That is, ancient poetry data in a variety of formats can be mixed for model training. Then, the dataset is input into the input module of the model, and for each poem the corresponding global control information and format control information are extracted. Next, the global control information corresponding to each poem is mapped to a plurality of individual words and/or identifiers, and global control templates under different formats are explicitly constructed for each word or identifier; for each character in the poem, format control templates under different formats are explicitly constructed based on the format control information corresponding to the poem. That is, the present application explicitly constructs global control templates and format control templates under different formats, so that the format of the generated text, such as the length, tonal pattern and rhyme of each verse, can be controlled explicitly. Finally, the global control templates and format control templates corresponding to the poems under the different formats are input into the decoding module of the ancient poetry generation model to train the decoding module.
According to the embodiment of the application, ancient poetry data in a variety of formats can be mixed for training, and global control templates and format control templates under different formats can be explicitly constructed and input into the decoding module, so that the generated poetry text strictly meets the preset format requirements, providing a variety of gameplay options and enriching game content for players. At the same time, the shortcoming that results generated by traditional RNN-based models are not semantically fluent enough is overcome.
The embodiment of the application also provides a method for generating ancient poetry. First, input global control information and/or input format control information matching a user's requirements is acquired and input into the trained ancient poetry generation model; that is, the format of the ancient poetry to be generated can be controlled manually according to the user's needs. Then, the input global control information is mapped to a plurality of individual words and/or identifiers through the input module in the ancient poetry generation model, and input global control templates under different formats are matched for each word or identifier; for each character of the poem to be generated, input format control templates under different formats are matched based on the input format control information. That is, the present application can explicitly control the generated text format, such as the length, tonal pattern and rhyme of each verse. Finally, the input global control templates under the different formats and the input format control templates of each character under the different formats are input into the decoding module in the ancient poetry generation model, and the predicted text of the next character is decoded autoregressively, thereby generating the ancient poem. Because the format of the poetry to be generated can be controlled manually during generation, user interactivity and the flexible controllability of the poetry are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic structural diagram of an ancient poetry generating model according to an embodiment of the present application;
FIG. 2 is a flowchart of a training method of an ancient poetry generating model according to an embodiment of the present application;
fig. 3 shows a flowchart of a method for generating ancient poetry according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a training device for generating models of ancient poetry according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an ancient poetry generating device according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the present application are only for the purpose of illustration and description, and are not intended to limit the protection scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this application, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to the flow diagrams and one or more operations may be removed from the flow diagrams as directed by those skilled in the art.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in the embodiments of the present application to indicate the presence of the features stated hereinafter, but not to exclude the addition of other features.
In the conventional scheme, on the one hand, a conventional text generation model based on a recurrent neural network (RNN) has difficulty explicitly controlling the generated text format, such as the length, tonal pattern (pingze) and rhyme of each verse. On the other hand, a conventional Transformer-based encoder-decoder architecture can model text format information by adding different position encodings, so that length can be controlled at generation time. These methods are not well suited to the field of ancient poetry generation because a separate model must be trained for each format requirement, wasting resources; or, when an encoder-decoder architecture is used, the overall computation is too heavy to respond quickly to users' real-time requests; and formats such as rhyme and tonal pattern still cannot be well controlled explicitly. On this basis, the embodiments of the present application provide a training method for an ancient poetry generation model, an ancient poetry generation method, a device and a storage medium, described below through embodiments.
Fig. 1 is a schematic structural diagram of an ancient poetry generating model according to an embodiment of the present application. The ancient poetry generating model comprises an input module 1 and a decoding module 2, wherein the input module 1 comprises an input layer and control templates in a plurality of different formats, and the decoding module 2 comprises a decoder (Transformer Decoder) and an output layer.
The training method of the ancient poetry generating model provided in the embodiment of the present application is described in detail below with reference to the description of the ancient poetry generating model shown in fig. 1.
Referring to fig. 2, a flowchart of a training method of an ancient poetry generating model according to an embodiment of the present application is shown. The training method may include the steps of:
s201, acquiring a data set of the ancient poetry, and acquiring a pre-constructed generation model of the ancient poetry; the ancient poetry data set comprises a plurality of ancient poetry of different formats, global control information and format control information corresponding to each ancient poetry;
s202, inputting the ancient poetry dataset into an input module of the ancient poetry generation model;
S203, for each ancient poem, extracting the global control information and format control information corresponding to that poem;
S204, mapping the global control information corresponding to the poem to a plurality of individual words and/or identifiers, and explicitly constructing global control templates under different formats for each word or identifier;
s205, for each character in the ancient poetry, explicitly constructing format control templates under different formats based on format control information corresponding to the ancient poetry;
s206, inputting the global control templates and the format control templates corresponding to the ancient poetry in different formats into a decoding module of the ancient poetry generating model, and training the decoding module.
In step S201, the ancient poetry dataset includes a plurality of ancient poems of different formats, together with the global control information and format control information corresponding to each poem. The different formats may include five-character poems, seven-character poems, ci lyrics and the like. The global control information may include at least one of theme information and emotion information: the theme information may include keywords of the poem, and the emotion information may indicate the emotional tendency of the poem, for example very negative, generally negative, neutral, generally positive, or very positive. The format control information may include at least one of: tonal (pingze) information, rhyme information, verse length and verse count.
In step S204, the global control information corresponding to each ancient poem includes theme information and/or emotion information. The theme information corresponding to the poem is mapped into a number of individual words, and/or the emotion information corresponding to the poem is mapped into a number of individual identifiers. For each word or identifier, global control templates in different formats are explicitly constructed. As shown in Fig. 1, the different formats may include "CHAR", "SPOS", "Pingze", "TYPE", and "POS", where "CHAR" carries the information of the word itself, "SPOS" the relative position within the whole control template, "Pingze" the tonal value (level or oblique) of the word in the verse, "TYPE" the verse type, and "POS" the global position within the whole control template.
As can be seen from the two leftmost columns of Fig. 1, the theme information corresponding to the poem is "bright moon", which is mapped into the words "bright" and "moon". For the word "bright", global control templates are explicitly constructed in the different formats: E_bright, E_t0, E_t0, E_tpc, and E_0, where the subscript of E_bright denotes the information of the word "bright" itself, the subscript of E_t0 denotes time step t0, the subscript of E_tpc denotes the theme, and the subscript of E_0 denotes global position 0. For the word "moon", global control templates are explicitly constructed in the different formats: E_moon, E_t1, E_t1, E_tpc, and E_1, where the subscript of E_moon denotes the information of the word "moon" itself, the subscript of E_t1 denotes time step t1, the subscript of E_tpc denotes the theme, and the subscript of E_1 denotes global position 1.
As can be seen from the third and fourth columns from the left of Fig. 1, the emotion information corresponding to the poem is "generally negative", which is mapped into the identifiers "S3" and "S2". For the identifier "S3", global control templates are explicitly constructed in the different formats: E_S3, E_S0, E_S0, E_stm, and E_2, where the subscript of E_S3 denotes the information of the "S3" identifier itself, the subscript of E_S0 denotes an arbitrarily defined S0 tag with no practical meaning, the subscript of E_stm denotes the emotion, and the subscript of E_2 denotes global position 2. For the identifier "S2", global control templates are explicitly constructed in the different formats: E_S2, E_S1, E_S1, E_stm, and E_3, where the subscript of E_S2 denotes the information of the "S2" identifier itself, the subscript of E_S1 denotes an arbitrarily defined S1 tag with no practical meaning, the subscript of E_stm denotes the emotion, and the subscript of E_3 denotes global position 3.
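The construction of the five-row global control template just described can be sketched as follows. The row names follow Fig. 1 (CHAR, SPOS, Pingze, TYPE, POS); the `E_...` token spelling is our own notation for the figure's subscripted symbols.

```python
def build_global_templates(theme_words, sentiment_ids):
    """Explicitly construct, for each theme word and sentiment identifier,
    a global control template with one entry per format row of Fig. 1:
    CHAR (the token itself), SPOS (relative position), Pingze (which for
    global tokens repeats the relative position), TYPE (theme vs.
    sentiment), and POS (global position)."""
    templates = []
    pos = 0
    for i, word in enumerate(theme_words):
        templates.append({"CHAR": f"E_{word}", "SPOS": f"E_t{i}",
                          "Pingze": f"E_t{i}", "TYPE": "E_tpc",
                          "POS": f"E_{pos}"})
        pos += 1
    for j, ident in enumerate(sentiment_ids):
        templates.append({"CHAR": f"E_{ident}", "SPOS": f"E_S{j}",
                          "Pingze": f"E_S{j}", "TYPE": "E_stm",
                          "POS": f"E_{pos}"})
        pos += 1
    return templates

tpl = build_global_templates(["bright", "moon"], ["S3", "S2"])
```

Running this on the Fig. 1 example reproduces the templates above: `tpl[0]` is the column for "bright" (E_bright, E_t0, E_t0, E_tpc, E_0) and `tpl[2]` the column for "S3" (E_S3, E_S0, E_S0, E_stm, E_2).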
In step S205, the format control information corresponding to each ancient poem includes at least one of: tonal (ping-ze) information, rhyme information, verse length, and verse number. For each character in the poem, a format control template representing the tonal value is explicitly constructed based on the tonal information corresponding to the poem; and/or, for each character, a format control template representing the rhyme is explicitly constructed based on the rhyme information corresponding to the poem; and/or, for each character, a format control template representing the relative position and/or the global position is explicitly constructed based on the verse length corresponding to the poem; and/or, for each character, a format control template representing the verse type is explicitly constructed based on the verse number corresponding to the poem.
As can be seen from the sixth column from the left of Fig. 1, for the word "raise", a format control template E_PZ1 representing the oblique tone is explicitly constructed based on the tonal information (oblique) corresponding to the poem; format control templates E_P4 (relative position) and E_4 (global position) are explicitly constructed based on the verse length "5" corresponding to the poem; and a format control template E_sn0 representing the verse type is explicitly constructed based on the verse number "2" corresponding to the poem. For the word "head", a format control template E_PZ0 representing the level tone is explicitly constructed based on the tonal information (level) corresponding to the poem; format control templates E_P3 (relative position) and E_5 (global position) are explicitly constructed based on the verse length "5"; and a format control template E_sn0 representing the verse type is explicitly constructed based on the verse number "2". The other characters follow the same pattern and are not described in detail here. The subscript of E_PZ1 denotes an oblique tone, the subscript of E_PZ0 denotes a level tone, the subscript of E_sn0 denotes the first verse, and the subscript of E_sn1 denotes the second verse.
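The per-character format control templates for one verse can be sketched as follows. Two conventions here are our reading of the Fig. 1 example rather than anything the patent states outright: the relative position counts down toward the verse end (E_P4 for the first of five characters), and the global position continues after the four global control tokens (hence offset 4).

```python
def build_format_templates(pingze, verse_length, verse_index, global_offset):
    """For each character position in one verse, explicitly construct the
    format control template rows: tonal value (0 = level, 1 = oblique),
    relative position counting down to the verse end, global position,
    and verse type."""
    rows = []
    for k in range(verse_length):
        rows.append({
            "Pingze": f"E_PZ{pingze[k]}",
            "SPOS": f"E_P{verse_length - 1 - k}",
            "POS": f"E_{global_offset + k}",
            "TYPE": f"E_sn{verse_index}",
        })
    return rows

# First verse of the example: "raise" is oblique (E_PZ1), first of five
# characters (E_P4), global position 4 (after the four global tokens);
# the tonal pattern [1, 0, 1, 0, 1] is illustrative.
rows = build_format_templates([1, 0, 1, 0, 1], 5, 0, 4)
```

`rows[0]` and `rows[1]` then match the templates given in the text for "raise" and "head" respectively.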
In one possible implementation, step S206 may include the sub-steps of:
S2061, for each word or identifier corresponding to the global control information, converting the explicitly constructed global control templates in different formats into global control vectors in different formats;
S2062, summing the global control vectors in different formats to obtain a target global control vector corresponding to each word or identifier;
S2063, for each character in the poem, converting the explicitly constructed format control templates in different formats into format control vectors in different formats;
S2064, summing the word vector of the character generated at the previous time step with the format control vectors, in different formats, of the character to be generated at the current time step, to obtain a target format control vector corresponding to the current time step;
S2065, inputting the target global control vector corresponding to each word or identifier and the target format control vector corresponding to the current time step into the decoding module of the ancient poetry generation model, and autoregressively decoding the predicted text of the character to be generated at the current time step.
The sub-steps S2061 to S2065 are described below using Fig. 1 as an example.
In step S2061, for example, for the word "bright", the explicitly constructed global control templates in five different formats, E_bright, E_t0, E_t0, E_tpc, and E_0, are converted into global control vectors (embeddings) in five different formats. For the identifier "S3", the explicitly constructed global control templates in five different formats, E_S3, E_S0, E_S0, E_stm, and E_2, are likewise converted into global control vectors in five different formats.
In step S2062, for example, for the word "bright", the global control vectors in the five different formats are summed to obtain the target global control vector corresponding to the word "bright". For the identifier "S3", the global control vectors in the five different formats are summed to obtain the target global control vector corresponding to the identifier "S3".
In step S2063, for example, for the word "raise", the explicitly constructed format control templates in five different formats, E_raise, E_P4, E_PZ1, E_sn0, and E_4, are converted into format control vectors (embeddings) in five different formats.
In step S2064, for example, the word vector corresponding to E_raise, the character "raise" having been generated at the previous time step, is summed with the format control vectors corresponding to the format control templates E_P3, E_PZ0, E_sn0, and E_5 of the character to be generated at the current time step in the different formats, to obtain the target format control vector corresponding to the current time step.
In step S2065, the target global control vectors corresponding to the word "bright", the word "moon", the identifier "S3", and the identifier "S2", together with the target format control vector corresponding to the current time step, are input into the decoding module 2, which autoregressively decodes the predicted text "head" of the character to be generated at the current time step. The target format control vector corresponding to the word "head" generated at the current time step is then input into the decoding module 2 to predict the word "gaze" to be generated at the next time step, and so on, until the whole poem is predicted. That is, the prediction of the next character depends on the characters already generated at previous time steps. The Transformer decoder has strong representation capability, so the semantics of the generated text are more fluent.
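Sub-steps S2061 to S2064 amount to embedding lookups followed by element-wise sums, which can be sketched as below. The embedding table, its random initialization, and the dimension are stand-ins, not the patent's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8          # toy embedding dimension
table = {}       # token -> embedding, populated lazily

def embed(token):
    """Look up (creating on demand) a toy embedding for a template token."""
    if token not in table:
        table[token] = rng.standard_normal(DIM)
    return table[token]

# S2061 + S2062: the five global control embeddings of the word "bright"
# are summed into one target global control vector.
target_global = sum(embed(t) for t in
                    ["E_bright", "E_t0", "E_t0", "E_tpc", "E_0"])

# S2063 + S2064: the word vector of the previously generated character
# ("raise") is summed with the format control embeddings of the character
# to be generated at the current time step.
target_format = embed("E_raise") + sum(
    embed(t) for t in ["E_P3", "E_PZ0", "E_sn0", "E_5"])
```

Both target vectors keep the embedding dimension, so they can be fed to the decoding module side by side; the repeated E_t0 row simply contributes its embedding twice to the sum.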
The embodiment of the application provides a training method of an ancient poetry generation model. First, an ancient poetry dataset and a pre-constructed ancient poetry generation model are acquired; the dataset includes a plurality of ancient poems in different formats, together with global control information and format control information corresponding to each poem; that is, the application can mix ancient poetry data of various different formats for model training. Then, the ancient poetry dataset is input into an input module of the ancient poetry generation model, and for each ancient poem, the global control information and format control information corresponding to that poem are extracted. Next, the global control information corresponding to the poem is mapped into a number of individual words and/or identifiers, and for each word or identifier, global control templates in different formats are explicitly constructed; for each character in the poem, format control templates in different formats are explicitly constructed based on the format control information corresponding to the poem; that is, the application can explicitly construct global control templates and format control templates in different formats, so that the format of the generated text, such as the length, tonal pattern, and rhyme of each verse, can be explicitly controlled. Finally, the global control templates and format control templates corresponding to the poem in the different formats are input into a decoding module of the ancient poetry generation model, and the decoding module is trained.
According to the embodiment of the application, ancient poetry data of various different formats can be mixed during model training, and global control templates and format control templates in different formats can be explicitly constructed and input into the decoding module, so that the generated ancient poetry text strictly satisfies the preset format requirements. This provides a variety of gameplay choices for games and enriches the content available to players. At the same time, it overcomes the shortcoming of traditional RNN-based generation, whose results are often not semantically fluent.
Referring to fig. 3, a flow chart of a method for generating ancient poetry according to an embodiment of the present application is shown. The generating method may include the steps of:
S301, acquiring input global control information and/or input format control information matched with a user;
S302, inputting the input global control information and/or input format control information into the trained ancient poetry generation model;
S303, mapping the input global control information into a number of individual words and/or identifiers through an input module in the ancient poetry generation model, and matching, for each word or identifier, input global control templates in different formats;
S304, for each character of the ancient poem to be generated, matching input format control templates in different formats based on the input format control information;
S305, inputting the input global control templates in different formats and the input format control templates of each character in different formats into a decoding module in the ancient poetry generation model, and autoregressively decoding the predicted text of the next character, thereby generating the ancient poem.
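At a high level, steps S301 to S305 can be strung together as a single generation call, sketched below. Every name here is an illustrative stand-in for the model's actual interfaces, and the stub "model" in the usage example merely echoes the requested geometry instead of performing real decoding.

```python
def generate_poem(model, theme=None, sentiment=None,
                  pingze=None, rhyme=None, verse_length=None, verse_count=None):
    """End-to-end sketch of S301-S305: package the user's global and
    format control information and hand both to the trained model, whose
    input module maps them to templates and whose decoding module
    autoregressively produces the poem."""
    global_control = {"theme": theme, "sentiment": sentiment}
    format_control = {"pingze": pingze, "rhyme": rhyme,
                      "verse_length": verse_length, "verse_count": verse_count}
    return model(global_control, format_control)

# Toy model stub: returns verse_count verses of verse_length placeholder
# characters, showing only the control-information plumbing.
poem = generate_poem(
    model=lambda g, f: ["x" * f["verse_length"]] * f["verse_count"],
    theme="bright moon", sentiment="generally negative",
    verse_length=5, verse_count=2,
)
```

Leaving a control field as `None` corresponds to the "and/or" in the steps above: the user need only supply the constraints they care about.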
In step S301, input global control information and/or input format control information manually entered by a user may be acquired, so that the format of the ancient poem to be generated can be manually controlled according to the user's needs. Alternatively, input global control information and/or input format control information matched with the user may be mined from the game history. The input global control information includes at least one of input theme information and input emotion information: the input theme information may include keywords of the poem, and the input emotion information may indicate the poem's degree of sentiment, such as very negative, generally negative, neutral, generally positive, or very positive. The input format control information includes at least one of: input tonal (ping-ze) information, input rhyme information, input verse length, and input verse number.
In step S303, the input global control information includes at least one of input theme information and input emotion information. The input theme information is mapped into a number of individual words through an input module in the ancient poetry generation model; and/or the input emotion information is mapped into a number of individual identifiers through the input module. For each word or identifier, input global control templates in different formats are matched.
As can be seen from the two leftmost columns of Fig. 1, the input theme information corresponding to the poem is "bright moon", which is mapped into the words "bright" and "moon". For the word "bright", input global control templates in the different formats are matched: E_bright, E_t0, E_t0, E_tpc, and E_0, where the subscript of E_bright denotes the information of the word "bright" itself, the subscript of E_t0 denotes time step t0, the subscript of E_tpc denotes the theme, and the subscript of E_0 denotes global position 0. For the word "moon", input global control templates in the different formats are matched: E_moon, E_t1, E_t1, E_tpc, and E_1, where the subscript of E_moon denotes the information of the word "moon" itself, the subscript of E_t1 denotes time step t1, the subscript of E_tpc denotes the theme, and the subscript of E_1 denotes global position 1.
As can be seen from the third and fourth columns from the left of Fig. 1, the input emotion information corresponding to the poem is "generally negative", which is mapped into the identifiers "S3" and "S2". For the identifier "S3", input global control templates in the different formats are matched: E_S3, E_S0, E_S0, E_stm, and E_2, where the subscript of E_S3 denotes the information of the "S3" identifier itself, the subscript of E_S0 denotes an arbitrarily defined S0 tag with no practical meaning, the subscript of E_stm denotes the emotion, and the subscript of E_2 denotes global position 2. For the identifier "S2", input global control templates in the different formats are matched: E_S2, E_S1, E_S1, E_stm, and E_3, where the subscript of E_S2 denotes the information of the "S2" identifier itself, the subscript of E_S1 denotes an arbitrarily defined S1 tag with no practical meaning, the subscript of E_stm denotes the emotion, and the subscript of E_3 denotes global position 3.
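The input module's mapping of user-supplied global control information (step S303) can be sketched as follows. The sentiment lookup table is illustrative; only the "generally negative" → S3, S2 pair is taken from the Fig. 1 example, and splitting the theme on whitespace is a stand-in for splitting a Chinese keyword into characters.

```python
def map_global_control(theme, sentiment, sentiment_table):
    """Split the theme keyword into individual words and map the
    sentiment label to its identifier tokens via a lookup table."""
    words = theme.split()          # e.g. "bright moon" -> ["bright", "moon"]
    identifiers = sentiment_table[sentiment]
    return words, identifiers

SENTIMENT_TABLE = {"generally negative": ["S3", "S2"]}  # illustrative entry
words, idents = map_global_control("bright moon", "generally negative",
                                   SENTIMENT_TABLE)
```

Each returned word and identifier is then matched to its five-row input global control template as described above.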
In step S304, the input format control information includes at least one of: input tonal (ping-ze) information, input rhyme information, input verse length, and input verse number. For each character of the ancient poem to be generated, an input format control template representing the tonal value is matched based on the input tonal information; and/or an input format control template representing the rhyme is matched based on the input rhyme information; and/or an input format control template representing the relative position and/or the global position is matched based on the input verse length; and/or an input format control template representing the verse type is matched based on the input verse number.
As can be seen from the sixth column from the left of Fig. 1, for the word "raise", the input format control template E_PZ1 representing the oblique tone is matched based on the input tonal information (oblique) corresponding to the poem; the input format control templates E_P4 (relative position) and E_4 (global position) are matched based on the input verse length "5" corresponding to the poem; and the input format control template E_sn0 representing the verse type is matched based on the input verse number "2" corresponding to the poem.
In one possible implementation, step S305 may include the sub-steps of:
S3051, for each word or identifier corresponding to the input global control information, converting the matched input global control templates in different formats into input global control vectors in different formats;
S3052, summing the input global control vectors in different formats to obtain a target input global control vector corresponding to each word or identifier;
S3053, for each character of the ancient poem to be generated, converting the matched input format control templates in different formats into input format control vectors in different formats;
S3054, summing the word vector of the character generated at the previous time step with the input format control vectors, in different formats, of the character to be generated at the current time step, to obtain a target input format control vector corresponding to the current time step;
S3055, inputting the target input global control vector corresponding to each word or identifier and the target input format control vector corresponding to the current time step into the decoding module of the ancient poetry generation model, and autoregressively decoding the predicted text of the character to be generated at the current time step.
The above sub-steps S3051-S3055 will be described below by taking fig. 1 as an example.
In step S3051, for example, for the word "bright", the matched input global control templates in five different formats, E_bright, E_t0, E_t0, E_tpc, and E_0, are converted into input global control vectors (embeddings) in five different formats. For the identifier "S3", the matched input global control templates in five different formats, E_S3, E_S0, E_S0, E_stm, and E_2, are likewise converted into input global control vectors in five different formats.
In step S3052, for example, for the word "bright", the input global control vectors in the five different formats are summed to obtain the target input global control vector corresponding to the word "bright". For the identifier "S3", the input global control vectors in the five different formats are summed to obtain the target input global control vector corresponding to the identifier "S3".
In step S3053, for example, for the word "raise", the matched input format control templates in five different formats, E_raise, E_P4, E_PZ1, E_sn0, and E_4, are converted into input format control vectors (embeddings) in five different formats.
In step S3054, for example, the word vector corresponding to E_raise, the character "raise" having been generated at the previous time step, is summed with the input format control vectors corresponding to the input format control templates E_P3, E_PZ0, E_sn0, and E_5 of the character to be generated at the current time step in the different formats, to obtain the target input format control vector corresponding to the current time step.
In step S3055, the target input global control vectors corresponding to the word "bright", the word "moon", the identifier "S3", and the identifier "S2", together with the target input format control vector corresponding to the current time step, are input into the decoding module 2, which autoregressively decodes the predicted text "head" of the character to be generated at the current time step. The target input format control vector corresponding to the word "head" generated at the current time step is then input into the decoding module 2 to predict the word "gaze" to be generated at the next time step, and so on, until the whole poem is predicted. That is, the prediction of the next character depends on the characters already generated at previous time steps. The Transformer decoder has strong representation capability, so the semantics of the generated text are more fluent.
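The autoregressive loop of step S3055 can be sketched as below. `decoder_step` stands in for the Transformer decoder, and the stub used in the usage example simply replays a canned string, showing only the data flow: each prediction is conditioned on the global control vectors plus a format control vector derived from the previously generated character.

```python
def generate_verse(decoder_step, global_vectors, format_vector_for,
                   verse_length):
    """Autoregressively generate one verse: each character is decoded from
    the target input global control vectors plus the target input format
    control vector built from the previously generated character."""
    generated = []
    previous = "<bos>"
    for t in range(verse_length):
        char = decoder_step(global_vectors, format_vector_for(previous, t))
        generated.append(char)
        previous = char            # next prediction depends on this character
    return "".join(generated)

# Toy stub: the "decoder" replays a canned character sequence and the
# "format vector" is just the time step, illustrating the loop's shape.
canned = list("raise")
verse = generate_verse(
    decoder_step=lambda g, fmt: canned[fmt],
    global_vectors=None,
    format_vector_for=lambda prev, t: t,
    verse_length=5,
)
```

In the real model, `format_vector_for` would perform the embedding sums of S3053/S3054 and `decoder_step` a forward pass of the decoding module; the loop structure is the same.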
The embodiment of the application provides a method for generating ancient poetry. First, input global control information and/or input format control information matched with a user is acquired and input into the trained ancient poetry generation model; that is, the format of the ancient poem to be generated can be manually controlled according to the user's needs. Then, the input global control information is mapped into a number of individual words and/or identifiers through an input module in the ancient poetry generation model, and for each word or identifier, input global control templates in different formats are matched; for each character of the ancient poem to be generated, input format control templates in different formats are matched based on the input format control information; that is, the application can explicitly control the format of the generated text, such as the length, tonal pattern, and rhyme of each verse. Finally, the input global control templates in different formats and the input format control templates of each character in different formats are input into a decoding module in the ancient poetry generation model, and the predicted text of the next character is autoregressively decoded, thereby generating the ancient poem. In this way, the format of the poem can be manually controlled during generation, improving user interactivity and the flexible controllability of the poetry format.
Based on the same technical conception, the embodiments of the application further provide a training device for the ancient poetry generation model, an ancient poetry generation device, an electronic device, a computer storage medium, and the like; see the following embodiments for details.
Referring to fig. 4, a schematic structural diagram of a training device for generating a model of ancient poetry according to an embodiment of the present application is shown. The training device may include:
the ancient poetry obtaining module 401 is configured to acquire an ancient poetry dataset and a pre-constructed ancient poetry generation model; the ancient poetry dataset includes a plurality of ancient poems in different formats, together with global control information and format control information corresponding to each poem;
an ancient poetry input module 402, configured to input the ancient poetry dataset into an input module of the ancient poetry generation model;
the information extraction module 403 is configured to extract, for each ancient poem, the global control information and format control information corresponding to that poem;
the global construction module 404 is configured to map the global control information corresponding to the poem into a number of individual words and/or identifiers, and, for each word or identifier, explicitly construct global control templates in different formats;
The format construction module 405 is configured to explicitly construct, for each character in the poem, format control templates in different formats based on the format control information corresponding to the poem;
the training module 406 is configured to input the global control template and the format control template corresponding to the ancient poetry in different formats into a decoding module of the ancient poetry generating model, and train the decoding module.
Referring to fig. 5, a schematic structural diagram of an ancient poetry generating device according to an embodiment of the present application is shown. The generating means may include:
an information obtaining module 501, configured to obtain input global control information and/or input format control information matched with a user;
the information input module 502 is configured to input the input global control information and/or input format control information into the trained ancient poetry generation model;
the global matching module 503 is configured to map the input global control information into a plurality of individual words and/or identifiers through an input module in the ancient poetry generating model, and match, for each word or identifier, an input global control template under different formats;
a format matching module 504, configured to match, for each word of the ancient poetry to be generated, an input format control template under different formats based on the input format control information;
The generating module 505 is configured to input the input global control template under different formats and the input format control template of each word under different formats into the decoding module in the ancient poetry generating model, and autoregressively decode the predicted text of the next word of the word, thereby generating the ancient poetry.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device includes a processor 601, a memory 602, and a bus 603. The memory 602 stores machine-readable instructions executable by the processor 601. When the electronic device is running, the processor 601 communicates with the memory 602 through the bus 603 and executes the machine-readable instructions to perform the method described in the foregoing method embodiments; for specific implementation, refer to the method embodiments, which are not repeated here.
The computer program product for the training of the ancient poetry generation model and the method for generating ancient poetry provided in the embodiments of the present application includes a computer-readable storage medium storing non-volatile program code executable by a processor. The instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for specific implementation, refer to the method embodiments, which are not repeated here.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the method embodiments, which are not described in detail in this application. In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, and for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other form.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. A training method of an ancient poetry generating model, the training method comprising:
acquiring an ancient poetry dataset and a pre-constructed ancient poetry generation model; the ancient poetry dataset includes a plurality of ancient poems in different formats, together with global control information and format control information corresponding to each poem;
inputting the ancient poetry dataset into an input module of the ancient poetry generating model;
for each ancient poem, extracting the global control information and format control information corresponding to that poem;
mapping the global control information corresponding to the poem into a number of individual words and/or identifiers, and explicitly constructing, for each word or identifier, global control templates in different formats;
for each character in the poem, explicitly constructing format control templates in different formats based on the format control information corresponding to the poem;
inputting the global control templates and format control templates corresponding to the poem in the different formats into a decoding module of the ancient poetry generation model, and training the decoding module;
wherein the global control information includes at least one of: theme information and emotion information;
The step of mapping global control information corresponding to the ancient poetry into a plurality of individual words and/or marks comprises the following steps:
mapping the theme information corresponding to the ancient poetry into a plurality of individual words; and/or
Mapping emotion information corresponding to the ancient poetry to a plurality of independent identifications;
the format control information includes at least one of: level and level information, rhyme information, verse length and verse number;
for each character in the ancient poetry, the step of explicitly constructing the format control templates under different formats based on the format control information corresponding to the ancient poetry comprises the following steps:
aiming at each character in the ancient poetry, a format control template for representing the level and the narrow is explicitly constructed based on the level and the narrow information corresponding to the ancient poetry; and/or
For each character in the ancient poetry, a format control template for representing the rhyme is explicitly constructed based on the rhyme information corresponding to the ancient poetry; and/or
For each character in the ancient poetry, a format control template for representing the relative position and/or the global position is explicitly constructed based on the length of the corresponding poetry; and/or
And for each character in the ancient poetry, explicitly constructing a format control template for representing the type of the poetry based on the corresponding poetry number of the ancient poetry.
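As an informal illustration only (it is not part of the claims), the explicit construction of global and format control templates described in claim 1 could be sketched as follows. Every name, token format, and the encoding scheme below are assumptions chosen for demonstration, not details taken from the patent.

```python
def build_global_template(theme_words, emotion_id):
    """Map theme information to individual words and emotion information
    to a standalone identifier token (the explicit global control template).
    The "<EMO_n>" token format is a hypothetical choice."""
    return list(theme_words) + [f"<EMO_{emotion_id}>"]

def build_format_template(verses, pingze, rhyme_flags):
    """Per-character format control templates: tonal pattern (ping/ze),
    rhyme marker, relative and global position, and a poem-type token
    derived from the verse count."""
    poem_type = f"<TYPE_{len(verses)}>"      # e.g. 4 verses (quatrain) vs. 8 (regulated verse)
    template, g = [], 0
    for verse in verses:
        for r, _ch in enumerate(verse):
            template.append({
                "pingze": pingze[g],         # level ("P") or oblique ("Z") tone
                "rhyme": rhyme_flags[g],     # 1 if this is a rhyming position
                "rel_pos": r,                # position within the verse
                "glob_pos": g,               # position within the whole poem
                "type": poem_type,
            })
            g += 1
    return template
```

Each dictionary here corresponds to one explicitly constructed per-character control template; in the model these fields would be looked up in learned embedding tables.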
2. The training method of claim 1, wherein the step of inputting the global control templates and the format control templates corresponding to the poem in the different formats into the decoding module of the ancient poetry generation model comprises:
converting, for each word or identifier corresponding to the global control information, the explicitly constructed global control templates in different formats into global control vectors in different formats;
summing the global control vectors in the different formats to obtain a target global control vector corresponding to each word or identifier;
converting, for each character in the poem, the explicitly constructed format control templates in different formats into format control vectors in different formats;
summing the word vector of each preceding time step with the format control vectors, in the different formats, of the word to be generated at the current time step, to obtain a target format control vector corresponding to the current time step in the poem;
and inputting the target global control vector corresponding to each word or identifier, together with the target format control vector corresponding to the current time step, into the decoding module of the ancient poetry generation model, and autoregressively decoding the predicted text of the word to be generated at the current time step.
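The vector arithmetic in claim 2 — summing the per-format global control vectors, and adding the previous word vector to the current step's format control vectors — can be sketched minimally in plain Python. The list-based "vectors" and all function names are hypothetical stand-ins for learned embeddings, not the patent's implementation.

```python
def vec_sum(vectors):
    """Element-wise sum of equal-length vectors (lists of numbers)."""
    return [sum(components) for components in zip(*vectors)]

def target_global_vector(template_vectors):
    # Sum the global control vectors obtained from the templates in
    # different formats, yielding one target vector per word/identifier.
    return vec_sum(template_vectors)

def target_format_vector(prev_word_vec, format_vecs_at_t):
    # Add the previous time step's word vector to the current step's
    # format control vectors (tone, rhyme, position, poem type).
    return vec_sum([prev_word_vec] + format_vecs_at_t)
```

In a real model these sums would be over embedding-table lookups of the same dimensionality; summation keeps the decoder input size fixed regardless of how many control signals are active.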
3. A method for generating ancient poetry, the method comprising:
acquiring input global control information and/or input format control information matched to a user;
inputting the input global control information and/or the input format control information into a trained ancient poetry generation model;
mapping the input global control information to a plurality of individual words and/or identifiers through an input module in the ancient poetry generation model, and matching input global control templates in different formats for each word or identifier;
matching, for each character of the ancient poem to be generated, input format control templates in different formats based on the input format control information;
inputting the input global control templates in the different formats and the input format control template of each word in the different formats into a decoding module in the ancient poetry generation model, and autoregressively decoding the predicted text of the next word following each word, thereby generating the ancient poem;
wherein the input global control information includes at least one of: input theme information and input emotion information;
the step of mapping the input global control information to a plurality of individual words and/or identifiers through the input module in the ancient poetry generation model comprises:
mapping the input theme information to a plurality of individual words through the input module in the ancient poetry generation model; and/or
mapping the input emotion information to a plurality of individual identifiers through the input module in the ancient poetry generation model;
the input format control information includes at least one of: input tonal (pingze) information, input rhyme information, input verse length, and input verse count;
the step of matching, for each character of the ancient poem to be generated, input format control templates in different formats based on the input format control information comprises:
for each character of the ancient poem to be generated, matching an input format control template representing the tonal pattern based on the input tonal information; and/or
for each character of the ancient poem to be generated, matching an input format control template representing the rhyme based on the input rhyme information; and/or
for each character of the ancient poem to be generated, matching an input format control template representing the relative position and/or the global position based on the input verse length; and/or
for each character of the ancient poem to be generated, matching an input format control template representing the poem type based on the input verse count.
4. The generation method of claim 3, wherein the step of inputting the input global control templates in the different formats and the input format control template of each word in the different formats into the decoding module in the ancient poetry generation model, and autoregressively decoding the predicted text of the next word, comprises:
converting, for each word or identifier corresponding to the input global control information, the matched input global control templates in different formats into input global control vectors in different formats;
summing the input global control vectors in the different formats to obtain a target input global control vector corresponding to each word or identifier;
converting, for each character of the ancient poem to be generated, the matched input format control templates in different formats into input format control vectors in different formats;
summing the word vector of each preceding time step with the input format control vectors, in the different formats, of the word to be generated at the current time step, to obtain a target input format control vector corresponding to the current time step;
and inputting the target input global control vector corresponding to each word or identifier, together with the target input format control vector corresponding to the current time step, into the decoding module of the ancient poetry generation model, and autoregressively decoding the predicted text of the word to be generated at the current time step.
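The autoregressive decoding described in claim 4 — feeding each predicted word back in while conditioning on the target global control vector and the per-step format control vector — might look like the loop below. `decoder_step` is a hypothetical stand-in for the trained decoding module; its signature and the `<BOS>` start token are assumptions for illustration.

```python
def generate(decoder_step, target_global_vec, target_format_vecs, bos="<BOS>"):
    """Autoregressive generation loop: one call to the decoder per
    character slot, each conditioned on all previously emitted tokens."""
    tokens = [bos]
    for fmt_vec in target_format_vecs:   # one format control vector per character slot
        nxt = decoder_step(tokens, target_global_vec, fmt_vec)
        tokens.append(nxt)               # feed the prediction back in
    return "".join(tokens[1:])
```

With a trained model, `decoder_step` would score the vocabulary and sample or take the argmax of the next character; any callable with that signature works here.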
5. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method of any one of claims 1 to 4.
6. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 4.
CN202110869571.9A 2021-07-30 2021-07-30 Ancient poetry generating model training, ancient poetry generating method, equipment and storage medium Active CN113553822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110869571.9A CN113553822B (en) 2021-07-30 2021-07-30 Ancient poetry generating model training, ancient poetry generating method, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113553822A CN113553822A (en) 2021-10-26
CN113553822B (en) 2023-06-30

Family

ID=78133311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110869571.9A Active CN113553822B (en) 2021-07-30 2021-07-30 Ancient poetry generating model training, ancient poetry generating method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113553822B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086270A * 2018-07-24 2018-12-25 Chongqing University System and method for automatic poem composition based on vectorization of a classical poetry corpus
CN109815493A * 2019-01-09 2019-05-28 Xiamen University A modeling method for intelligent hip-hop lyric generation
CN111382580A * 2020-01-21 2020-07-07 Shenyang YaTrans Network Technology Co., Ltd. Encoder-decoder framework pre-training method for neural machine translation
CN112183109A * 2020-09-22 2021-01-05 Gansu Agricultural University MASS-based information steganography method for poetry sentence generation
CN112257775A * 2020-10-21 2021-01-22 Southeast University Method for generating poetry from images based on a convolutional neural network and an unsupervised language model
CN112287678A * 2020-11-03 2021-01-29 Shenyang YaTrans Network Technology Co., Ltd. Automatic ancient poetry generation method based on a pre-trained model
CN112651235A * 2020-12-24 2021-04-13 Beijing Sogou Technology Development Co., Ltd. Poetry generation method and related device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A genetic algorithm for automatic generation of Song ci poetry and its machine implementation; Zhou Changle; You Wei; Ding Xiaojun; Journal of Software (Issue 03); 31-41 *
Research on statistics-based generation of Chinese metrical verse; He Jing; Zhou Ming; Jiang Long; Journal of Chinese Information Processing (Issue 02); 98-105 *


Similar Documents

Publication Publication Date Title
CN107680580B (en) Text conversion model training method and device, and text conversion method and device
CN110750959A (en) Text information processing method, model training method and related device
CN108415977A A generative machine reading comprehension method based on deep neural networks and reinforcement learning
CN109271493A A language text processing method, device, and storage medium
Marchesi Dante and Augustine: Linguistics, Poetics, Hermeneutics
Sayers et al. The Dawn of the Human-Machine Era: A forecast of new and emerging language technologies.
CN113784199B (en) System, method, storage medium and electronic device for generating video description text
CN109165336A An information output control method and tutoring device
CN112199502A (en) Emotion-based poetry sentence generation method and device, electronic equipment and storage medium
CN115348458A (en) Virtual live broadcast control method and system
CN115463424A (en) Display control method and device of virtual role and electronic equipment
CN113553822B (en) Ancient poetry generating model training, ancient poetry generating method, equipment and storage medium
Ramazani Code-Switching, Code-Stitching: A Macaronic Poetics?
Yi Xian et al. Building national identity through the secondary school literature component in Malaysia
CN115205760A (en) Video dense description generation method based on deep local self-attention network
CN111324466B (en) Information processing method, device, system and storage medium
CN113268983A (en) Role-oriented story ending generation method
CN113222790A (en) Online course generation system and equipment based on artificial intelligence
CN113160793A (en) Speech synthesis method, device, equipment and storage medium based on low resource language
Nölle How language adapts to the environment: An evolutionary, experimental approach
Booten Flusser's Demon: Writing Under the Eye of an Automatic Critic.
CN114095754B (en) Video processing method and device and electronic equipment
Tanzer et al. Reconsidering Sentence-Level Sign Language Translation
Hao Reconceptualizing Minzu in Computer-Mediated Communication
Perera et al. AI-Generated Comic Strips

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant