CN111209389A - Movie story generation method - Google Patents

Movie story generation method Download PDF

Info

Publication number
CN111209389A
CN111209389A (application CN201911422896.1A)
Authority
CN
China
Prior art keywords
text
attribute information
nodes
knowledge graph
historical text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911422896.1A
Other languages
Chinese (zh)
Other versions
CN111209389B (en)
Inventor
刘宏伟
刘宏蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Foreign Studies University
Guangdong University of Technology
Original Assignee
Tianjin Foreign Studies University
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Foreign Studies University and Guangdong University of Technology
Priority to CN201911422896.1A
Publication of CN111209389A
Application granted
Publication of CN111209389B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • G06F16/345Summarisation for human users
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a movie story generation method. The method comprises the following steps: extracting fact triples from each sentence of a historical text; determining attribute information of the historical text, wherein the attribute information comprises the subject of the historical text; establishing a knowledge graph according to the fact triples and the attribute information of the historical text; selecting a plurality of nodes from the knowledge graph according to specified attribute information and a preset path planning algorithm; and sequentially inputting the nodes into a recurrent neural network model in the order in which they were selected, to generate a target text. Because the knowledge graph is established from both the fact triples and the attribute information of the historical text, the story of the target text generated from the knowledge graph fits the specified scene more closely.

Description

Movie story generation method
Technical Field
The disclosure relates to the field of computer software, in particular to a movie story generation method.
Background
With the development of science and technology, computers are increasingly expected to write like humans and produce high-quality natural language text. Generating natural language text from semi-structured and structured data is therefore more and more widely applied in fields such as news writing, report generation and weather forecasting. Currently, the prior art generally uses neural networks to realize character-level short text generation and word-level text generation.
However, the prior art still has defects; for example, it cannot produce text for a specific scene. How to make the generated text fit a specified theme or scene has therefore become a technical problem to be solved urgently.
Disclosure of Invention
An object of the disclosed embodiments is to provide a movie story generation method, so that the generated text fits a specified theme or scene.
In order to achieve the above object, an embodiment of the present disclosure provides a movie story generation method, where the method includes:
extracting fact triples from each sentence of the historical text;
determining attribute information of the historical text; wherein the attribute information includes a subject of the historical text;
establishing a knowledge graph according to the fact triples and the attribute information of the historical text;
selecting a plurality of nodes from the knowledge graph according to the designated attribute information and a preset path planning algorithm;
and sequentially inputting the nodes into a recurrent neural network model according to the selected sequence of the nodes to generate a target text.
The embodiment of the present disclosure further provides a movie story generation apparatus, including:
the fact triple extraction module is used for extracting fact triples from each statement of the historical text;
the text attribute information determining module is used for determining the attribute information of the historical text; wherein the attribute information includes a subject of the historical text;
the knowledge graph establishing module is used for establishing a knowledge graph according to the fact triples and the attribute information of the historical text;
the node selection module is used for selecting a plurality of nodes from the knowledge graph according to the designated attribute information and a preset path planning algorithm;
and the target text generation module is used for sequentially inputting the nodes into the recurrent neural network model according to the selected sequence of the nodes to generate a target text.
Embodiments of the present disclosure also provide a computer device, including a processor and a memory for storing processor-executable instructions, where the processor executes the instructions to implement the steps of the movie story generation method in any of the above embodiments.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer instructions, which when executed, implement the steps of the movie story generation method described in any of the above embodiments.
According to the technical scheme provided by the embodiment of the disclosure, the knowledge graph is generated by adding constraints of text attribute information, such as theme constraints, on the basis of the fact triples, and furthermore, a plurality of nodes in the knowledge graph are linked through a preset path planning algorithm to form a plot line of a story, so that the story is richer, and the generated text fits a specified theme better.
Drawings
Fig. 1 is a flowchart of a movie story generation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a syntactic dependency tree provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an LSTM model provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the internal structure of an LSTM model provided by an embodiment of the present disclosure;
fig. 5 is a block diagram of a movie story generation apparatus provided in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a computer device provided by embodiments of the present disclosure;
fig. 7 is a schematic diagram of a computer-readable storage medium provided by an embodiment of the disclosure.
Detailed Description
The embodiment of the disclosure provides a movie story generation method.
In order to make those skilled in the art better understand the technical solutions in the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without any inventive step should fall within the scope of protection of the present disclosure.
Referring to fig. 1, a flowchart of a movie story generation method provided by an embodiment of the present disclosure may include the following steps:
s1: fact triples are extracted from each sentence of the historical text.
In this embodiment, a plurality of historical texts are obtained from a corpus, and sentence-level information extraction is performed on each historical text to obtain the fact triple of each sentence, that is, the subject, predicate and object of each sentence, which can be represented as (s, p, o). Sentences with an incomplete subject-predicate-object structure can be skipped, and no fact triple is extracted from them.
Referring to FIG. 2, a syntactic dependency tree may be generally employed for the extraction.
For example, a sentence can be parsed with a syntactic dependency tree to obtain its nmod (compound noun modifier), xcomp (open clausal complement), nsubj (nominal subject), dobj (direct object) and root (root node) relations, thereby determining the subject-predicate-object structure of the sentence and extracting the fact triple.
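As an illustrative sketch (not the parser used in the disclosure, which would typically come from a library such as spaCy or Stanford CoreNLP), the subject-predicate-object extraction can be written over a pre-computed dependency parse, here assumed to be given as (token, dependency-label, head-index) tuples:

```python
def extract_triple(parsed):
    """Return a (subject, predicate, object) fact triple from a dependency
    parse, or None if the sentence lacks a complete S-P-O structure."""
    root = subj = obj = None
    for tok, dep, head in parsed:
        if dep == "root":      # main predicate
            root = tok
        elif dep == "nsubj":   # nominal subject
            subj = tok
        elif dep == "dobj":    # direct object
            obj = tok
    if subj and root and obj:
        return (subj, root, obj)
    return None  # incomplete structure: skip, as described above

# Hypothetical parse of "LESTER book a movie ticket" (head indices illustrative)
parse = [("LESTER", "nsubj", 1), ("book", "root", -1),
         ("a", "det", 4), ("movie", "compound", 4), ("ticket", "dobj", 1)]
triple = extract_triple(parse)
```

A sentence without a subject or object (e.g. an imperative) returns None, matching the skipping behaviour described above.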
S2: determining attribute information of the historical text; wherein the attribute information includes a subject of the history text.
In the present embodiment, a TF-IDF (term frequency-inverse document frequency) algorithm or an LDA (Latent Dirichlet Allocation) model may be used to determine the subject of the historical text.
For example, determining the topic of the history text by the TF-IDF algorithm comprises the following steps:
s21: calculating word frequency
Figure BDA0002352781970000031
S22: calculating an inverse text frequency index
Figure BDA0002352781970000032
S23: calculating TFw-IDF
TFw-IDF ═ word frequency (TF)w) X inverse text frequency (IDF)
And taking the word with the maximum TF-IDF value as the subject of the historical text.
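The steps S21-S23 can be sketched as follows (a minimal illustration assuming the smoothed IDF with +1 in the denominator, as is common; the patent's exact formulas were lost to image placeholders):

```python
import math
from collections import Counter

def tfidf_topic(doc_words, corpus):
    """Return the word with the highest TF-IDF score in doc_words.
    TF = count(w) / len(doc); IDF = log(N / (1 + docs containing w))."""
    n_docs = len(corpus)
    counts = Counter(doc_words)
    def score(w):
        tf = counts[w] / len(doc_words)                 # S21: term frequency
        df = sum(1 for d in corpus if w in d)
        idf = math.log(n_docs / (1 + df))               # S22: inverse doc freq
        return tf * idf                                 # S23: TF-IDF
    return max(counts, key=score)

# Toy corpus: "comedy" is frequent in the first text and rare elsewhere
corpus = [["the", "comedy", "is", "a", "good", "comedy"],
          ["the", "ticket", "for", "the", "cinema"],
          ["the", "weather", "is", "fine"]]
topic = tfidf_topic(corpus[0], corpus)
```

Common words like "the" get a low (here negative) IDF, so the distinctive word "comedy" wins and becomes the text's subject.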
S3: and establishing a knowledge graph according to the fact triples and the attribute information of the historical text.
In the embodiment, the expanded fact triples are obtained by adding the attribute information of the historical text to the fact triples, and the knowledge graph is established based on the expanded fact triples.
For example, one sentence in a certain historical text is:
LESTER: I book a movie ticket for the afternoon.
The extracted fact triple is (LESTER, book, movie ticket).
Further, the scene of the historical text is Earth cinema and its type is comedy, so the augmented fact triple is (LESTER, book, movie ticket, Earth cinema, comedy).
Of course, the above example is only intended to better explain the augmented fact triple; the method can also be applied to Chinese historical texts, and the present disclosure is not limited in this respect.
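A minimal sketch of the augmentation and graph-building step might look as follows (the attribute names "scene" and "type", and the adjacency-list representation, are illustrative assumptions, not the disclosure's exact data structures):

```python
def augment(triple, attributes):
    """Append the text's attribute values to the (s, p, o) fact triple."""
    return triple + tuple(attributes.values())

def add_to_graph(graph, aug_triple):
    """Insert an augmented triple into an adjacency-list knowledge graph.
    The predicate labels the subject->object edge; attribute values become
    extra nodes linked to the subject so path planning can reach them."""
    s, p, o = aug_triple[:3]
    attrs = aug_triple[3:]
    graph.setdefault(s, []).append((p, o))
    for a in attrs:
        graph.setdefault(s, []).append(("has_attribute", a))
        graph.setdefault(a, [])
    graph.setdefault(o, [])
    return graph

kg = {}
aug = augment(("LESTER", "book", "movie ticket"),
              {"scene": "Earth cinema", "type": "comedy"})
add_to_graph(kg, aug)
```

Repeating this over every sentence of every historical text yields the knowledge graph that step S4 plans paths over.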
S4: and selecting a plurality of nodes from the knowledge graph according to the designated attribute information and a preset path planning algorithm.
In this embodiment, relevant nodes satisfying the specified attribute information are randomly selected from the knowledge graph, and after selection the selected nodes are connected within the knowledge graph by path planning. The preset path planning algorithm may include the A-star (A*) algorithm.
For example, the A* algorithm selects the node with the smallest value of f(i) from the priority queue as the next node to traverse, using the formula f(i) = g(i) + h(i).
Here f(i) is the comprehensive priority of node i; when choosing the next node to traverse, the node with the highest comprehensive priority (smallest f(i)) is always selected. g(i) is the cost of node i from the origin, and h(i) is the estimated cost from node i to the end point. The A* algorithm uses two sets, open_set for the nodes still to be traversed and close_set for the nodes already traversed.
When a plurality of nodes are selected from the knowledge graph by using an A-star algorithm, the distance between the nodes in the knowledge graph is defined as:
distance = dis_Manhattan + sim_Semantics
where distance is the distance between nodes; dis_Manhattan is the Manhattan distance; and sim_Semantics is the semantic similarity between the nodes to be traversed in the knowledge graph.
The semantic similarity is obtained by vectorizing the node data and taking the cosine of the angle between the node vectors; it is defined as sim_Semantics = cos(V(node_i), V(node_j)).
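The A* selection described above can be sketched as follows. This is an illustrative toy: the edge costs are precomputed stand-ins for the Manhattan-plus-semantic distance, and the heuristic h is set to zero (reducing A* to Dijkstra) since the real h would depend on node embeddings not shown here:

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search with f(i) = g(i) + h(i) over an adjacency list
    node -> [(neighbor, cost)]. Returns the lowest-cost path as a list."""
    open_set = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    closed = set()                                # close_set: already traversed
    while open_set:
        f, g, node, path = heapq.heappop(open_set)  # smallest f first
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_set,
                               (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return None

# Toy graph: the direct edge is expensive, so the plot line goes via two hops
g = {"LESTER": [("book", 1.0), ("cinema", 5.0)],
     "book": [("movie ticket", 1.0)],
     "movie ticket": [("cinema", 1.0)],
     "cinema": []}
path = a_star(g, "LESTER", "cinema", h=lambda n: 0.0)
```

The returned node sequence is exactly the "plot line" that is then fed, in order, to the recurrent neural network in step S5.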
S5: and sequentially inputting the nodes into a recurrent neural network model according to the selected sequence of the nodes to generate a target text.
In this embodiment, a recurrent neural network model is obtained by pre-training. The recurrent neural network model may be an LSTM (Long Short-Term Memory) model or a GRU (Gated Recurrent Unit) model, and the nodes are input into it sequentially to generate the target text.
For example, the recurrent neural network model may be an LSTM model, a special kind of RNN (recurrent neural network) designed mainly to solve the vanishing-gradient and exploding-gradient problems that arise when training on long sequences. Specifically, the selected nodes are used as the initial inputs of the LSTM network, and each generated word, together with the output of the current LSTM step, is used as the input for predicting the next word.
As shown in fig. 3, at time t the model has two transmitted states, the cell state c_t and the hidden state h_t, and y_t is the output of the LSTM model. The input x_t of the model is concatenated with the h_{t-1} passed from the previous step, and the concatenation is trained to obtain four states:
z = tanh(W · [x_t, h_{t-1}])
z_i = σ(W_i · [x_t, h_{t-1}])
z_f = σ(W_f · [x_t, h_{t-1}])
z_o = σ(W_o · [x_t, h_{t-1}])
where z_i, z_f and z_o are obtained by multiplying the concatenated vector by a weight matrix and mapping the result to values between 0 and 1 with the sigmoid activation function, to serve as gating states, while z is obtained by mapping the result to values between -1 and 1 with the tanh activation function.
As shown in fig. 4, the internal structure of the LSTM model has three stages: a forgetting stage, which selectively forgets the information passed in from the previous node, where z_f is the forget gate controlling which parts of the previous state c_{t-1} are retained; a select-memory stage, where the input gate z_i controls the selective memorization of the input x_t; and an output stage, where z_o controls the output of the current stage, and the cell state c_t is scaled by tanh.
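The three stages can be sketched as a single LSTM step. This is a didactic scalar version (real models use vectors and matrices, and the weights W here are arbitrary illustrative values, not trained parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM step on scalar states. W holds four weight rows applied to
    the concatenated input [x_t, h_prev], yielding the four states z,
    z_i, z_f, z_o described in the text."""
    concat = [x_t, h_prev]
    pre = [sum(w * v for w, v in zip(row, concat)) for row in W]
    z   = math.tanh(pre[0])   # candidate content, in (-1, 1)
    z_i = sigmoid(pre[1])     # input gate, in (0, 1)
    z_f = sigmoid(pre[2])     # forget gate, in (0, 1)
    z_o = sigmoid(pre[3])     # output gate, in (0, 1)
    c_t = z_f * c_prev + z_i * z    # forgetting stage + select-memory stage
    h_t = z_o * math.tanh(c_t)      # output stage: gated, tanh-scaled cell state
    return h_t, c_t, (z, z_i, z_f, z_o)

W = [[0.5, 0.1], [0.3, -0.2], [0.4, 0.2], [-0.1, 0.6]]
h, c, gates = lstm_step(x_t=1.0, h_prev=0.0, c_prev=0.5, W=W)
```

Chaining such steps, with each selected knowledge-graph node (and then each generated word) as x_t, realizes the generation procedure of step S5.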
According to the technical scheme, constraints of text attribute information, such as theme constraints, are added on the basis of fact triples to generate the knowledge graph, and furthermore, a plurality of nodes in the knowledge graph are linked through a preset path planning algorithm to form a plot line of a story, so that the story is richer, and the generated text fits a specified theme better.
In addition, with the technical solution provided by the present disclosure, when a longer text is required, only the length of the main plot line needs to be increased, that is, more nodes are selected from the knowledge graph; this avoids the sentence-repetition phenomenon of long texts generated by the prior art.
As shown with reference to fig. 5, the present disclosure also provides a movie story generation apparatus, the apparatus comprising:
a fact triple extracting module 100, configured to extract fact triples from each sentence of the historical text;
a text attribute information determining module 200, configured to determine attribute information of the historical text; wherein the attribute information includes a subject of the historical text;
a knowledge graph establishing module 300, configured to establish a knowledge graph according to the fact triples and the attribute information of the historical text;
a node selection module 400, configured to select a plurality of nodes from the knowledge graph according to the specified attribute information and a preset path planning algorithm;
and a target text generation module 500, configured to sequentially input the nodes into the recurrent neural network model according to the selected sequence of the nodes, and generate a target text.
As shown with reference to fig. 6, the present disclosure also provides a computer device comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the movie story generation method in any of the above embodiments.
Referring to fig. 7, an embodiment of the present disclosure further provides a computer-readable storage medium, on which computer instructions are stored, and the instructions, when executed, implement the steps of the movie story generation method in any of the above embodiments.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, such programming is nowadays mostly implemented with "logic compiler" software rather than by manually making the integrated circuit chip; this software is similar to the compiler used in program development, and the source code to be compiled is written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The apparatuses and modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the various modules may be implemented in the same one or more software and/or hardware implementations of the present disclosure.
From the above description of the embodiments, it is clear to those skilled in the art that the present disclosure can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. A typical computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory. The computer software product may include instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform the methods described in the embodiments, or in parts of the embodiments, of the present disclosure. The computer software product may be stored in a memory, and the memory may include volatile memory on a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As defined herein, computer readable media does not include transitory computer readable media (transient media), such as modulated data signals and carrier waves.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The disclosure is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The disclosure may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A movie story generation method, comprising:
extracting fact triples from each sentence of the historical text;
determining attribute information of the historical text, wherein the attribute information comprises a subject of the historical text;
establishing a knowledge graph according to the fact triples and the attribute information of the historical text;
sequentially selecting a plurality of nodes from the knowledge graph according to the designated attribute information and a preset path planning algorithm;
and sequentially inputting the nodes into a recurrent neural network model according to the order in which the nodes were selected, and outputting a target text.
2. The method of claim 1, wherein the fact triples of each statement are extracted using a syntactic dependency tree.
3. The method of claim 1, wherein the subject of the historical text is determined by a term frequency-inverse document frequency (TF-IDF) algorithm or a Latent Dirichlet Allocation (LDA) model.
4. The method of claim 1, wherein the pre-set path planning algorithm comprises: a-star algorithm.
5. The method of claim 4, wherein when using the A-star algorithm to select a plurality of nodes from the knowledge-graph, the distance between the nodes in the knowledge-graph is defined as:
distance = dis_Manhattan + sim_Semantics
wherein distance is the distance between nodes; dis_Manhattan is the Manhattan distance; and sim_Semantics represents the semantic similarity between the nodes to be traversed in the knowledge graph.
6. The method of claim 1, wherein the recurrent neural network model comprises:
a long short-term memory (LSTM) model; or a gated recurrent unit (GRU) model.
7. The method according to claim 1, wherein the specified attribute information comprises: a specified subject; and/or a specified scene.
8. A motion picture story generation apparatus, comprising:
the fact triple extraction module is used for extracting fact triples from each statement of the historical text;
the text attribute information determining module is used for determining the attribute information of the historical text; wherein the attribute information includes a subject of the historical text;
the knowledge graph establishing module is used for establishing a knowledge graph according to the fact triples and the attribute information of the historical text;
the node selection module is used for selecting a plurality of nodes from the knowledge graph according to the designated attribute information and a preset path planning algorithm;
and the target text generation module is used for sequentially inputting the nodes into the recurrent neural network model according to the selected sequence of the nodes to generate a target text.
9. A computer device comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1-7.
CN201911422896.1A 2019-12-31 2019-12-31 Movie story generation method Active CN111209389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911422896.1A CN111209389B (en) 2019-12-31 2019-12-31 Movie story generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911422896.1A CN111209389B (en) 2019-12-31 2019-12-31 Movie story generation method

Publications (2)

Publication Number Publication Date
CN111209389A true CN111209389A (en) 2020-05-29
CN111209389B CN111209389B (en) 2023-08-11

Family

ID=70788482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911422896.1A Active CN111209389B (en) 2019-12-31 2019-12-31 Movie story generation method

Country Status (1)

Country Link
CN (1) CN111209389B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550253A (en) * 2015-12-09 2016-05-04 百度在线网络技术(北京)有限公司 Method and device for obtaining type relation
CN108897857A (en) * 2018-06-28 2018-11-27 东华大学 Domain-oriented Chinese text topic sentence generation method
CN110347810A (en) * 2019-05-30 2019-10-18 重庆金融资产交易所有限责任公司 Dialogue-based retrieval question answering method, apparatus, computer device and storage medium
CN110390352A (en) * 2019-06-26 2019-10-29 华中科技大学 Image dark data value assessment method based on similarity hashing
CN110489755A (en) * 2019-08-21 2019-11-22 广州视源电子科技股份有限公司 Document creation method and device
CN110516146A (en) * 2019-07-15 2019-11-29 中国科学院计算机网络信息中心 Author name disambiguation method based on heterogeneous graph convolutional network embedding


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591442A (en) * 2021-10-08 2021-11-02 北京明略软件系统有限公司 Text generation method and device, electronic device and readable storage medium
CN113591442B (en) * 2021-10-08 2022-02-18 北京明略软件系统有限公司 Text generation method and device, electronic device and readable storage medium
CN114429198A (en) * 2022-04-07 2022-05-03 南京众智维信息科技有限公司 Adaptive orchestration method for network security emergency response scripts

Also Published As

Publication number Publication date
CN111209389B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
JP7412060B2 (en) Augmenting training data for natural language classification
Qi et al. Openhownet: An open sememe-based lexical knowledge base
CN110192210B (en) Construction and processing of computational graphs for dynamically structured machine learning models
André What’s decidable about parametric timed automata?
US20170308790A1 (en) Text classification by ranking with convolutional neural networks
KR20210061141A (en) Method and apparatus for processimg natural languages
CN111461004B (en) Event detection method and device based on graph attention neural network and electronic equipment
US10978053B1 (en) System for determining user intent from text
EP3971761A1 (en) Method and apparatus for generating summary, electronic device and storage medium thereof
US11003701B2 (en) Dynamic faceted search on a document corpus
US11500914B2 (en) Query recommendation to locate an application programming interface
CN111209389B (en) Movie story generation method
CN111222315B (en) Movie scenario prediction method
US10846483B2 (en) Method, device, and apparatus for word vector processing based on clusters
WO2023093909A1 (en) Workflow node recommendation method and apparatus
US20200349203A1 (en) Dynamic faceted search on a document corpus
CN111209277A (en) Data processing method, device, equipment and medium
Kreiss et al. Concadia: Tackling image accessibility with context
CN113887234B (en) Model training and recommending method and device
Wakabayashi et al. A Voice Dialog Editor Based on Finite State Transducer Using Composite State for Tablet Devices
Dey et al. A deep dive into supervised extractive and abstractive summarization from text
CN112035622A (en) Integrated platform and method for natural language processing
CN111191010B (en) Movie script multi-element information extraction method
US20240143928A1 (en) Generation of interactive utterances of code tasks
CN115017915B (en) Model training and task execution method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant