CN111209389B - Movie story generation method - Google Patents

Movie story generation method

Info

Publication number
CN111209389B
CN111209389B (application number CN201911422896.1A)
Authority
CN
China
Prior art keywords
nodes
node
text
attribute information
knowledge graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911422896.1A
Other languages
Chinese (zh)
Other versions
CN111209389A (en)
Inventor
刘宏伟
刘宏蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Foreign Studies University
Guangdong University of Technology
Original Assignee
Tianjin Foreign Studies University
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Foreign Studies University and Guangdong University of Technology
Priority to CN201911422896.1A
Publication of CN111209389A
Application granted
Publication of CN111209389B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • G06F16/345Summarisation for human users
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a movie story generation method. The method comprises the following steps: extracting a fact triple from each sentence of a historical text; determining attribute information of the historical text, wherein the attribute information includes the topic of the historical text; building a knowledge graph from the fact triples and the attribute information of the historical text; selecting a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm; and inputting the nodes into a recurrent neural network model in the order in which they were selected, to generate a target text. Because the knowledge graph is built from both the fact triples and the attribute information of the historical text, the story in the generated target text fits the specified scene more closely.

Description

Movie story generation method
Technical Field
The present disclosure relates to the field of computer software, and in particular, to a method for generating a movie story.
Background
With the development of technology, it is desired that computers can write like human beings, producing high-quality natural language text. Techniques that generate natural language text from semi-structured and structured data have therefore been widely applied in fields such as news writing, document generation, and weather forecasting. Currently, the prior art generally uses neural networks to implement character-level short text generation and word-level text generation.
However, the prior art still has drawbacks; for example, it cannot produce text for a specific scene. How to make the generated text fit a specified subject or scene has therefore become a technical problem to be solved.
Disclosure of Invention
An object of an embodiment of the present disclosure is to provide a movie story generation method, so that generated text fits a specified theme or scene.
To achieve the above object, an embodiment of the present disclosure provides a method for generating a movie story, the method including:
extracting a fact triple from each sentence of a historical text;
determining attribute information of the historical text, wherein the attribute information includes the topic of the historical text;
building a knowledge graph from the fact triples and the attribute information of the historical text;
selecting a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm;
and inputting the nodes into a recurrent neural network model in the order in which they were selected, to generate a target text.
An embodiment of the disclosure also provides a movie story generation apparatus, comprising:
a fact triple extraction module, configured to extract a fact triple from each sentence of a historical text;
a text attribute information determination module, configured to determine attribute information of the historical text, wherein the attribute information includes the topic of the historical text;
a knowledge graph building module, configured to build a knowledge graph from the fact triples and the attribute information of the historical text;
a node selection module, configured to select a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm;
and a target text generation module, configured to input the nodes into a recurrent neural network model in the order in which they were selected, to generate a target text.
The disclosed embodiments also provide a computer device comprising a processor and a memory for storing processor-executable instructions, which when executed by the processor implement the steps of the movie story generation method of any of the embodiments described above.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer instructions that when executed implement the steps of the movie story generation method of any of the embodiments described above.
According to the technical solution provided by the embodiments of the disclosure, a knowledge graph is generated by adding constraints from text attribute information, such as a topic constraint, on top of the fact triples; a plot line for the story is then formed by linking a plurality of nodes in the knowledge graph through a preset path planning algorithm. This makes the story richer and the generated text fit the specified topic more closely.
Drawings
FIG. 1 is a flow chart of a method for generating a movie story provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a syntax dependency tree provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an LSTM model provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an internal structure of an LSTM model provided by an embodiment of the present disclosure;
fig. 5 is a block diagram of a movie story generating apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a computer device provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a computer-readable storage medium provided by an embodiment of the present disclosure.
Detailed Description
The embodiment of the disclosure provides a movie story generation method.
In order that those skilled in the art will better understand the technical solutions in the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
Referring to fig. 1, a method for generating a movie story according to an embodiment of the disclosure may include the following steps:
s1: the fact triples are extracted from each sentence of the history text.
In this embodiment, a plurality of history texts are obtained from a corpus, and then sentence-level information extraction is performed on each history text to obtain a fact triplet of each sentence, i.e., a main predicate of each sentence, which may be expressed as (s, p, o). For sentences with main predicate insufficiency, the fact triples of the sentence can be skipped and not extracted.
Referring to FIG. 2, a syntactic dependency tree is generally employed for the extraction.
For example, consider the sentence "My dog likes eating." Parsing it with a syntactic dependency tree yields relations such as nmod (nominal modifier), xcomp (open clausal complement), nsubj (nominal subject), dobj (direct object), and root (root node), from which the subject-predicate-object structure of the sentence is determined and the fact triple extracted.
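A minimal Python sketch of this extraction, using the spaCy dependency parser as an assumed stand-in (the embodiment does not name a parser), might look as follows:

```python
# Sentence-level fact-triple extraction via a syntactic dependency tree.
# spaCy and its English pipeline are assumptions, not part of the original disclosure.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

def extract_triple(sentence: str):
    """Return an (s, p, o) triple, or None if the sentence lacks a full
    subject-predicate-object structure (such sentences are skipped)."""
    doc = nlp(sentence)
    root = next((t for t in doc if t.dep_ == "ROOT"), None)
    if root is None:
        return None
    subj = next((c for c in root.children if c.dep_ == "nsubj"), None)
    # The object may appear as a direct object or an open clausal complement.
    obj = next((c for c in root.children if c.dep_ in ("dobj", "xcomp")), None)
    if subj is None or obj is None:
        return None  # incomplete structure: skip this sentence
    return (subj.text, root.text, obj.text)

print(extract_triple("My dog likes eating."))  # ('dog', 'likes', 'eating')
```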
S2: attribute information of the historical text is determined; the attribute information includes the topic of the historical text.
In this embodiment, the TF-IDF (term frequency-inverse document frequency) algorithm or an LDA (Latent Dirichlet Allocation) model may be used to determine the topic of the historical text.
For example, determining the topic of the historical text with the TF-IDF algorithm comprises the following steps:
s21: calculating word frequency
S22: calculating an inverse text frequency index
S23: calculation of TF w -IDF
TF w Idf=word frequency (TF w ) X inverse text frequency (IDF)
The word with the largest TF-IDF value is taken as the theme of the history text.
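A compact Python illustration of steps S21-S23 follows; the whitespace tokenization and the exact TF and IDF normalizations are common conventions assumed here, since the embodiment does not spell them out:

```python
# Pick the word with the largest TF-IDF value as the topic of a text.
# The corpus and tokenization are toy assumptions for demonstration.
import math
from collections import Counter

def topic_by_tfidf(text: str, corpus: list[str]) -> str:
    words = text.lower().split()
    tf = Counter(words)                      # S21: raw term counts
    n_docs = len(corpus)
    best_word, best_score = "", -1.0
    for w, count in tf.items():
        df = sum(1 for doc in corpus if w in doc.lower().split())
        idf = math.log(n_docs / (1 + df))    # S22: inverse document frequency (smoothed)
        score = (count / len(words)) * idf   # S23: TF-IDF = TF * IDF
        if score > best_score:
            best_word, best_score = w, score
    return best_word

docs = ["LESTER books a movie ticket", "the cinema shows a comedy", "a ticket for the comedy"]
print(topic_by_tfidf(docs[0], docs))
```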
S3: a knowledge graph is built from the fact triples and the attribute information of the historical text.
In this embodiment, an extended fact triple is obtained by appending the attribute information of the historical text to the fact triple, and the knowledge graph is built from the extended fact triples.
For example, a sentence in a certain historical text is:
LESTER: I book a movie ticket for the afternoon.
The extracted fact triple is (LESTER, book, movie ticket).
Further, given that the scene of the historical text is Earth cinema and its type is comedy, the extended fact triple is (LESTER, book, movie ticket, Earth cinema, comedy).
Of course, the above example is only intended to better explain the extended fact triple; the method can also be applied to Chinese historical texts, and the application is not limited in this respect.
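As a minimal sketch (the patent does not prescribe a graph library), the extended fact triple above could be stored in a knowledge graph as follows, with the subject and object as nodes and the predicate plus attribute information on the connecting edge:

```python
# Building a knowledge graph from extended fact triples; networkx is an
# assumed choice, and the attribute names (scene, genre) are illustrative.
import networkx as nx

graph = nx.DiGraph()

def add_extended_triple(g, s, p, o, scene, genre):
    g.add_node(s)
    g.add_node(o)
    # The predicate and the text's attribute information annotate the edge.
    g.add_edge(s, o, predicate=p, scene=scene, genre=genre)

add_extended_triple(graph, "LESTER", "book", "movie ticket",
                    scene="Earth cinema", genre="comedy")
print(graph.edges(data=True))
```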
S4: a plurality of nodes are selected from the knowledge graph according to the specified attribute information and a preset path planning algorithm.
In this embodiment, relevant nodes conforming to the specified attribute information are randomly selected from the knowledge graph, and after selection the selected nodes are connected within the knowledge graph by path planning. The preset path planning algorithm may be the A-star algorithm.
For example, at each step the A-star algorithm selects the node with the smallest f(i) value (highest priority) from the priority queue as the next node to traverse, according to the formula f(i) = g(i) + h(i),
where f(i) is the overall priority of node i; when choosing the next node to traverse, the node with the highest overall priority (smallest f(i) value) is always selected; g(i) is the cost of node i from the starting point; and h(i) is the estimated cost from node i to the end point. The A-star algorithm maintains two sets: open_set, the nodes still to be traversed, and close_set, the nodes already traversed.
When a plurality of nodes are selected from the knowledge graph using the A-star algorithm, the distance between nodes in the knowledge graph is defined as:
distance = dis_Manhattan + sim_Semantics
where distance is the distance between nodes; dis_Manhattan is the Manhattan distance; and sim_Semantics is the semantic similarity between the nodes to be traversed in the knowledge graph.
Semantic similarity characterizes how close nodes are in meaning; it is obtained by vectorizing the node data and taking the cosine between node vectors, defined as sim_Semantics = cos(V(node_i), V(node_j)).
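The following Python sketch shows one way the selection could work, implementing f(i) = g(i) + h(i) with the inter-node distance exactly as defined above. The graph adjacency, the node coordinates (for the Manhattan term), and the node vectors (for the semantic term) are assumed to be given; all names are illustrative rather than from the patent:

```python
# A* node selection over the knowledge graph with
# distance = Manhattan distance + cosine similarity of node vectors.
import heapq
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def node_distance(a, b, coords, vecs):
    manhattan = sum(abs(x - y) for x, y in zip(coords[a], coords[b]))
    return manhattan + cosine(vecs[a], vecs[b])

def a_star(graph, start, goal, coords, vecs):
    """graph: dict mapping node -> iterable of neighbor nodes."""
    open_set = [(0.0, start, [start])]   # priority queue ordered by f(i)
    close_set = set()                    # nodes already traversed
    g = {start: 0.0}                     # g(i): cost from the starting point
    while open_set:
        f, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in close_set:
            continue
        close_set.add(node)
        for nxt in graph[node]:
            cand = g[node] + node_distance(node, nxt, coords, vecs)
            if cand < g.get(nxt, float("inf")):
                g[nxt] = cand
                h = node_distance(nxt, goal, coords, vecs)  # h(i): estimated cost to the end point
                heapq.heappush(open_set, (cand + h, nxt, path + [nxt]))
    return None
```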
S5: the nodes are input into a recurrent neural network model in the order in which they were selected, and a target text is generated.
In this embodiment, a recurrent neural network model is trained in advance; it may be an LSTM (Long Short-Term Memory) model or a GRU (Gated Recurrent Unit) model. The nodes are input into the model in sequence to generate the target text.
For example, the recurrent neural network model may be an LSTM model, a special kind of RNN (recurrent neural network) designed mainly to alleviate the vanishing- and exploding-gradient problems that arise when training on long sequences. Specifically, the selected nodes are fed to the LSTM network as its initial inputs, and thereafter the previously generated word together with the current LSTM output serves as the input for predicting the next word.
As shown in fig. 3, at time t the model carries two states, the cell state c_t and the hidden state h_t; y_t is the output of the LSTM model. The model input is formed by concatenating the current input x_t with the hidden state h_(t-1) passed from the previous step, and four intermediate states are computed from the concatenation:
z_i, z_f, and z_o are obtained by multiplying the concatenated vector by weight matrices and passing the results through a sigmoid activation function, which converts them into values between 0 and 1 that serve as gating states; z is passed through a tanh activation function, which converts it into a value between -1 and 1.
As shown in fig. 4, the internal structure of the LSTM model has three stages: the forgetting stage, which selectively forgets information coming from the previous node, with the forget gate z_f controlling which parts of the previous cell state c_(t-1) are retained; the selective memory stage, in which the input gate z_i controls which parts of the input x_t are memorized; and the output stage, in which the output gate z_o controls the output of the current stage, with the cell state obtained in the preceding stages scaled by a tanh activation.
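As a rough illustration of step S5, the following sketch (using PyTorch, which the embodiment does not prescribe) primes an LSTM with the selected knowledge-graph nodes in order, then feeds each generated word back in to predict the next one. The vocabulary size, the mapping of nodes to vocabulary ids, and greedy (argmax) decoding are all assumptions; in practice the network would be trained first.

```python
# Generation loop: prime the LSTM with selected nodes, then predict word by word.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 10_000, 128, 256
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
out_proj = nn.Linear(hidden_dim, vocab_size)

def generate(node_ids: list[int], max_len: int = 50) -> list[int]:
    # Prime the LSTM with the selected knowledge-graph nodes, in selection order.
    state = None
    x = embed(torch.tensor([node_ids]))
    y, state = lstm(x, state)
    words = []
    token = y[:, -1:]                         # output of the last primed step
    for _ in range(max_len):
        logits = out_proj(token)
        next_id = int(logits.argmax(dim=-1))  # greedy decoding (an assumption)
        words.append(next_id)
        # The generated word and the current LSTM output drive the next prediction.
        token, state = lstm(embed(torch.tensor([[next_id]])), state)
    return words
```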
It can be seen that, according to the technical solution provided by the disclosure, a knowledge graph is generated by adding constraints from text attribute information, such as a topic constraint, on top of the fact triples; a plot line for the story is then formed by linking a plurality of nodes in the knowledge graph through a preset path planning algorithm, making the story richer and the generated text fit the specified topic more closely.
In addition, with the technical solution provided by the disclosure, when a longer text needs to be generated, only the length of the plot line needs to be increased, i.e., more nodes are selected from the knowledge graph. This avoids the sentence-repetition phenomenon seen in long texts generated by the prior art.
Referring to fig. 5, the present disclosure further provides a movie story generation apparatus, the apparatus including:
a fact triple extraction module 100, configured to extract a fact triple from each sentence of a historical text;
a text attribute information determination module 200, configured to determine attribute information of the historical text, wherein the attribute information includes the topic of the historical text;
a knowledge graph building module 300, configured to build a knowledge graph from the fact triples and the attribute information of the historical text;
a node selection module 400, configured to select a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm;
and a target text generation module 500, configured to input the nodes into a recurrent neural network model in the order in which they were selected, to generate a target text.
Referring to fig. 6, the present disclosure also provides a computer device including a processor and a memory for storing processor-executable instructions that, when executed by the processor, implement the steps of the movie story generation method of any of the embodiments described above.
As shown in fig. 7, the embodiments of the present disclosure further provide a computer readable storage medium having stored thereon computer instructions that, when executed, implement the steps of the movie story generation method described in any of the above-described implementations.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (e.g., a field programmable gate array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
Those skilled in the art will also appreciate that, in addition to implementing a controller in purely computer-readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing various functions may also be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
The apparatus and modules illustrated in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of the various modules may be implemented in the same one or more pieces of software and/or hardware when implementing the present disclosure.
From the description of the embodiments above, it will be apparent to those skilled in the art that the present disclosure may be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the essence of the technical solution of the present disclosure, or the part contributing to the prior art, may be embodied in the form of a software product. In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The computer software product may include instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments, or parts of embodiments, of the present disclosure. The computer software product may be stored in memory, which may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
The disclosure is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The disclosure may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the foregoing is directed to embodiments of the present application, the application is not limited thereto; changes, substitutions, and alterations made without departing from the spirit and principles of the application are intended to fall within its scope.

Claims (7)

1. A method for generating a movie story, comprising:
extracting a fact triple from each sentence of a historical text;
determining attribute information of the historical text, wherein the attribute information comprises the topic of the historical text;
building a knowledge graph from the fact triples and the attribute information of the historical text;
sequentially selecting a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm;
inputting the nodes into a recurrent neural network model in the order in which they were selected, and outputting a target text;
wherein the preset path planning algorithm comprises: the A-star algorithm;
when a plurality of nodes are selected from the knowledge graph using the A-star algorithm, the distance between nodes in the knowledge graph is defined as:
distance = dis_Manhattan + sim_Semantics
where distance is the distance between nodes; dis_Manhattan is the Manhattan distance; and sim_Semantics is the semantic similarity between the nodes to be traversed in the knowledge graph;
sequentially selecting a plurality of nodes from the knowledge graph according to the specified attribute information and the A-star algorithm comprises:
selecting the node with the smallest f(i) value from the priority queue as the next node to traverse, according to the formula f(i) = g(i) + h(i); wherein f(i) is the overall priority of node i, and when selecting the next node to traverse, the node with the highest overall priority, i.e., the smallest f(i) value, is always selected; g(i) is the cost of node i from the starting point; and h(i) is the estimated cost from node i to the end point;
wherein the recurrent neural network model comprises: a long short-term memory model; or a gated recurrent unit model;
and inputting the nodes into the long short-term memory model in the order in which they were selected and outputting the target text comprises:
inputting the selected nodes as the initial inputs of the long short-term memory model, and taking the generated word together with the current output of the long short-term memory model as the input for predicting the next word, so as to complete generation of the target text.
2. The method of claim 1, wherein the fact triple of each sentence is extracted using a syntactic dependency tree.
3. The method of claim 1, wherein a term frequency-inverse document frequency algorithm or a latent Dirichlet allocation model is used to determine the topic of the historical text.
4. The method of claim 1, wherein the specified attribute information comprises: a specified topic; and/or a specified scene.
5. A movie story generation apparatus, comprising:
a fact triple extraction module, configured to extract a fact triple from each sentence of a historical text;
a text attribute information determination module, configured to determine attribute information of the historical text, wherein the attribute information comprises the topic of the historical text;
a knowledge graph building module, configured to build a knowledge graph from the fact triples and the attribute information of the historical text;
a node selection module, configured to select a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm;
a target text generation module, configured to input the nodes into a recurrent neural network model in the order in which they were selected, to generate a target text;
wherein the preset path planning algorithm comprises: the A-star algorithm;
when a plurality of nodes are selected from the knowledge graph using the A-star algorithm, the distance between nodes in the knowledge graph is defined as:
distance = dis_Manhattan + sim_Semantics
where distance is the distance between nodes; dis_Manhattan is the Manhattan distance; and sim_Semantics is the semantic similarity between the nodes to be traversed in the knowledge graph;
sequentially selecting a plurality of nodes from the knowledge graph according to the specified attribute information and the A-star algorithm comprises:
selecting the node with the smallest f(i) value from the priority queue as the next node to traverse, according to the formula f(i) = g(i) + h(i); wherein f(i) is the overall priority of node i, and when selecting the next node to traverse, the node with the highest overall priority, i.e., the smallest f(i) value, is always selected; g(i) is the cost of node i from the starting point; and h(i) is the estimated cost from node i to the end point;
wherein the recurrent neural network model comprises: a long short-term memory model; or a gated recurrent unit model;
and inputting the nodes into the long short-term memory model in the order in which they were selected and outputting the target text comprises:
inputting the selected nodes as the initial inputs of the long short-term memory model, and taking the generated word together with the current output of the long short-term memory model as the input for predicting the next word, so as to complete generation of the target text.
6. A computer device comprising a processor and a memory for storing processor-executable instructions, which when executed by the processor implement the steps of the method of any one of claims 1-4.
7. A computer readable storage medium having stored thereon computer instructions which when executed implement the steps of the method of any of claims 1-4.
CN201911422896.1A 2019-12-31 2019-12-31 Movie story generation method Active CN111209389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911422896.1A CN111209389B (en) 2019-12-31 2019-12-31 Movie story generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911422896.1A CN111209389B (en) 2019-12-31 2019-12-31 Movie story generation method

Publications (2)

Publication Number Publication Date
CN111209389A CN111209389A (en) 2020-05-29
CN111209389B (en) 2023-08-11

Family

ID=70788482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911422896.1A Active CN111209389B (en) 2019-12-31 2019-12-31 Movie story generation method

Country Status (1)

Country Link
CN (1) CN111209389B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591442B (en) * 2021-10-08 2022-02-18 北京明略软件系统有限公司 Text generation method and device, electronic device and readable storage medium
CN114429198A (en) * 2022-04-07 2022-05-03 南京众智维信息科技有限公司 Self-adaptive layout method for network security emergency treatment script

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550253A (en) * 2015-12-09 2016-05-04 百度在线网络技术(北京)有限公司 Method and device for obtaining type relation
CN108897857A (en) * 2018-06-28 2018-11-27 东华大学 Domain-oriented Chinese text topic sentence generation method
CN110347810A (en) * 2019-05-30 2019-10-18 重庆金融资产交易所有限责任公司 Dialogue-based retrieval question answering method, apparatus, computer device and storage medium
CN110390352A (en) * 2019-06-26 2019-10-29 华中科技大学 Image dark data value assessment method based on similarity hashing
CN110489755A (en) * 2019-08-21 2019-11-22 广州视源电子科技股份有限公司 Document creation method and device
CN110516146A (en) * 2019-07-15 2019-11-29 中国科学院计算机网络信息中心 Author name disambiguation method based on heterogeneous graph convolutional neural network embedding

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550253A (en) * 2015-12-09 2016-05-04 百度在线网络技术(北京)有限公司 Method and device for obtaining type relation
CN108897857A (en) * 2018-06-28 2018-11-27 东华大学 Domain-oriented Chinese text topic sentence generation method
CN110347810A (en) * 2019-05-30 2019-10-18 重庆金融资产交易所有限责任公司 Dialogue-based retrieval question answering method, apparatus, computer device and storage medium
CN110390352A (en) * 2019-06-26 2019-10-29 华中科技大学 Image dark data value assessment method based on similarity hashing
CN110516146A (en) * 2019-07-15 2019-11-29 中国科学院计算机网络信息中心 Author name disambiguation method based on heterogeneous graph convolutional neural network embedding
CN110489755A (en) * 2019-08-21 2019-11-22 广州视源电子科技股份有限公司 Document creation method and device

Also Published As

Publication number Publication date
CN111209389A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
US10769383B2 (en) Cluster-based word vector processing method, device, and apparatus
CN111461004B (en) Event detection method and device based on graph attention neural network and electronic equipment
US11030416B2 (en) Latent ambiguity handling in natural language processing
CN117235226A (en) Question response method and device based on large language model
CN110162796B (en) News thematic creation method and device
CN108415941A (en) Web crawler method, apparatus and electronic device
US11003701B2 (en) Dynamic faceted search on a document corpus
US11194974B2 (en) Teaching syntax by adversarial distraction
CN110119505A (en) Term vector generation method, device and equipment
US10824819B2 (en) Generating word vectors by recurrent neural networks based on n-ary characters
CN111209389B (en) Movie story generation method
Huang et al. Advancing transformer architecture in long-context large language models: A comprehensive survey
CN116151363B (en) Distributed Reinforcement Learning System
CN111222315B (en) Movie scenario prediction method
US10846483B2 (en) Method, device, and apparatus for word vector processing based on clusters
CN117456028A (en) Method and device for generating image based on text
CN117369783B (en) Training method and device for security code generation model
CN113204637B (en) Text processing method and device, storage medium and electronic equipment
CN117910542A (en) User conversion prediction model training method and device
CN115017915B (en) Model training and task execution method and device
WO2023093909A1 (en) Workflow node recommendation method and apparatus
CN112035622A (en) Integrated platform and method for natural language processing
CN111191010B (en) Movie script multi-element information extraction method
CN116501852B (en) Controllable dialogue model training method and device, storage medium and electronic equipment
CN114817469B (en) Text enhancement method, training method and training device for text enhancement model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant