CN111506725B - Method and device for generating abstract

Info

Publication number
CN111506725B
Authority
CN
China
Prior art keywords
vector representation
sentence
attention
context
statement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010305488.4A
Other languages
Chinese (zh)
Other versions
CN111506725A (en)
Inventor
李伟
肖欣延
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010305488.4A
Publication of CN111506725A
Application granted
Publication of CN111506725B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34 - Browsing; Visualisation therefor
    • G06F16/345 - Summarisation for human users
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/12 - Use of codes for handling textual entities
    • G06F40/126 - Character encoding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/166 - Editing, e.g. inserting or deleting
    • G06F40/186 - Templates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/30 - Semantic analysis

Abstract

The application discloses a method and a device for generating an abstract, and relates to the technical field of natural language processing. The specific implementation scheme is as follows: determining an initial input vector representation of each sentence of a document set based on the context vector representation of each word in each sentence, obtained by encoding each sentence of the document set; constructing a structured relational graph of the document set based on the semantic relations among the sentences in the document set; performing context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain a context vector representation of each sentence; and decoding the abstract text of the document set based on the initial input vector representations, the structured relational graph, and the context vector representations of the sentences. The method and the device can generate abstract text that better reflects the important content of the document set; the resulting abstract text is more coherent and concise, and the generated abstract is richer in information.

Description

Method and device for generating abstract
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for generating an abstract.
Background
Multi-document automatic summarization automatically generates a concise summary for a plurality of documents related to a topic; the summary is required to cover the core content of the document set, with coherent semantics and fluent language. Compared with single-document summarization, multi-document summarization needs to process longer text input, and the contents of different documents may repeat, relate to, or complement one another.
Multi-document summarization can be applied to scenarios such as hot-topic summarization, search-result summarization, and aggregated writing. The most common approach to automatic multi-document summarization is the extractive method, i.e., a number of important sentences are extracted from the document set and combined into a summary. In recent years, generative (abstractive) summarization methods have also attracted much attention; existing generative multi-document summarization methods generally simply splice a plurality of documents into a single document and then generate a summary with a single-document summarization model. Some multi-document summarization work adopts a two-stage method: a set of important sentences is first extracted with an extractive method, and a new summary is then generated with a single-document generative method.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for generating a summary.
In a first aspect, an embodiment of the present disclosure provides a method for generating a summary, including: determining an initial input vector representation of each sentence of a document set based on the context vector representation of each word in each sentence, obtained by encoding each sentence in the document set; constructing a structured relational graph of the document set based on semantic relationships among the sentences in the document set; performing context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain a context vector representation of each sentence; and decoding the abstract text of the document set based on the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating a summary, including: an initial vector determination module configured to determine an initial input vector representation of each sentence of a document set based on the context vector representation of each word in each sentence, obtained by encoding each sentence of the document set; a relation graph construction module configured to construct a structured relational graph of the document set based on semantic relationships among the sentences in the document set; a context vector determination module configured to perform context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain a context vector representation of each sentence; and a vector representation decoding module configured to decode the abstract text of the document set based on the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence.
In a third aspect, an embodiment of the present disclosure provides an electronic device/server/intelligent terminal, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as described in any one of the embodiments of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method as described in any one of the embodiments of the first aspect.
The method and the apparatus for generating an abstract provided by the embodiments of the disclosure first determine an initial input vector representation of each sentence of a document set based on the context vector representation of each word in each sentence, obtained by encoding each sentence in the document set; then construct a structured relational graph of the document set based on the semantic relationships among the sentences in the document set; then perform context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain a context vector representation of each sentence; and finally decode the abstract text of the document set based on the initial input vector representations, the structured relational graph, and the context vector representations. By using the structured relational graph of the document set in both encoding and decoding, the method and the apparatus can effectively model the semantic relationships within the multiple documents, effectively organize and rewrite the important content of the multi-document input, and generate an abstract that is more semantically coherent and concise and richer in information.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
other features, objects, and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of generating a summary according to an embodiment of the present disclosure;
FIG. 3 is an exemplary application scenario of a method of generating a summary according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustration of yet another embodiment of a method of generating a summary according to an embodiment of the disclosure;
FIG. 5 is yet another exemplary application scenario of a method of generating a summary according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an apparatus for generating a summary according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device for implementing the method of generating a summary according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method of generating a summary or the apparatus for generating a summary of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various applications, such as a document processing application, an audio playing application, a streaming media processing application, a multi-party interaction application, an artificial intelligence application, a game application, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices that support document processing applications, including but not limited to smart terminals, tablets, and laptop and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is imposed herein.
The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The background server can analyze and process the received data such as the request and feed back the processing result to the terminal equipment.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed herein.
In practice, the method for generating the summary provided by the embodiment of the present disclosure may be executed by the terminal device 101, 102, 103 or the server 105, and the apparatus for generating the summary may also be disposed in the terminal device 101, 102, 103 or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 illustrates a flow 200 of one embodiment of a method of generating a summary according to the present disclosure. The method for generating the abstract comprises the following steps:
step 201, determining an initial input vector representation of each sentence in the document set based on the context vector representation of each word in each sentence obtained by encoding each sentence in the document set.
In this embodiment, the execution body of the method of generating a summary (e.g., a terminal or a server shown in fig. 1) may acquire, locally or from a remote device, a document set obtained by splicing a plurality of documents. The execution body may then extract all the sentences in the document set, perform word segmentation on each sentence to obtain an input sentence set, and encode the input sentence set to obtain the context vector representation of each word in each sentence.
Specifically, each sentence may be encoded by any encoding method in the prior art or developed in the future, which is not limited in the present application. For example, the word vectors of each sentence, fused with word position vectors, may be input to a sentence encoder to obtain the context vector representation of each word in the sentence.
After the context vector representation of each word in each sentence is determined, the context vector representations of the words may be used directly as the initial input, or they may be pooled to obtain the initial input vector representation of each sentence of the document set.
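For illustration only, a minimal NumPy sketch of the pooling option is given below. The function names, array shapes, and the choice of mean pooling are assumptions of this sketch, not details taken from the disclosure; max pooling or a dedicated sentence token would serve the same purpose.

```python
import numpy as np

def sentence_initial_vectors(word_context_vectors):
    """Pool the per-word context vectors of each sentence into one
    initial input vector per sentence (mean pooling as one simple choice)."""
    return np.stack([w.mean(axis=0) for w in word_context_vectors])

# Example: two sentences of 4 and 6 words, hidden size 8 (all assumed values).
rng = np.random.default_rng(0)
word_vecs = [rng.normal(size=(4, 8)), rng.normal(size=(6, 8))]
initial = sentence_initial_vectors(word_vecs)   # shape: (2, 8)
```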
Step 202, constructing a structured relationship diagram of the document set based on semantic relationships among the sentences in the document set.
In this embodiment, the sentences in the document set composed of multiple documents are represented as nodes, and the semantic relations between the sentences are represented as edges, so that a structured relational graph of the document set can be constructed.
Optionally, the semantic relationship between the sentences in the document set may include at least one of the following: semantic similarity relations among the sentences in the document set, discourse structure relations among the sentences in the document set, topic relations among the sentences in the document set, and the like. Constructing the structured relational graph based on these semantic relations allows the resulting graph to reflect the semantic relations and important content of the document set more accurately and comprehensively.
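As an illustrative sketch only, a structured relational graph based on the semantic-similarity relation might be built as follows; the cosine measure and the pruning threshold are assumptions of this sketch (discourse-structure or topic relations could populate the same adjacency matrix instead).

```python
import numpy as np

def build_relation_graph(sentence_vectors, threshold=0.2):
    """Nodes are sentences; edge weight graph[i, j] > 0 encodes a semantic
    relation between sentence i and sentence j. Weak relations below the
    (assumed) threshold are pruned to zero."""
    norms = np.linalg.norm(sentence_vectors, axis=1, keepdims=True)
    unit = sentence_vectors / norms
    sim = unit @ unit.T                            # pairwise cosine similarity
    graph = np.where(sim >= threshold, sim, 0.0)   # keep only stronger relations
    np.fill_diagonal(graph, 0.0)                   # no self-edges
    return graph
```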
Step 203, performing context encoding on the initial input vector representation of each sentence based on the structured relational graph, to obtain the context vector representation of each sentence.
In this embodiment, the initial input vector representation of each sentence can be encoded according to the constructed structured relational graph to obtain the context vector representation of each sentence. Specifically, the structured relational graph can serve as a constraint when the initial input vector representation of each sentence is encoded, so that the resulting context vector representation of each sentence fuses the semantic relationships embodied in the graph.
Step 204, decoding to obtain the abstract text of the document set based on the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence.
In this embodiment, the initial input vector representation of each sentence captures the context correlation between each word in the sentence and the other words in that sentence, the structured relational graph captures the semantic relationships between the sentences in the document set, and the context vector representation of each sentence captures the context correlation between the sentence and the other sentences in the document set.
In the method for generating the abstract according to this embodiment of the present disclosure, because the semantic relationships between the sentences, the context correlation between each word and the other words in its sentence, and the context correlation between each sentence and the other sentences in the document set are all considered during encoding and decoding, the generated abstract text better reflects the important content of the document set, is more coherent and concise, and is richer in information.
In some optional implementations of the foregoing embodiment, performing context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain the context vector representation of each sentence may include: determining the weights of the attention coefficients of a context attention model of the encoder based on the structured relational graph; and encoding the initial input vector representation of each sentence based on the weights of the attention coefficients of the context attention model to obtain the context vector representation of each sentence.
In this implementation, the attention coefficients of the context attention model of the encoder may represent the correlations between each initial input vector representation and the other initial input vector representations.
Based on the semantic relationships embodied in the structured relational graph, the execution body can adjust these correlations when encoding the initial input vector representations of the sentences, so that the output context vector of each sentence reflects the influence of the sentences that have stronger semantic relations with it, and thereby embodies the important sentence content of the document set.
In a specific example, determining the weights of the attention coefficients in the context attention model of the encoder based on the structured relational graph may include: converting, with a Gaussian function, the semantic relations between the sentences in the structured relational graph into the weights of the attention coefficients in the context attention model of the encoder. Encoding the initial input vector representation of each sentence based on these weights to obtain the context vector representation of each sentence may then include: using the weights of the attention coefficients, computing a weighted value of the correlation between each initial input vector representation and the initial input vector representations that have a semantic relation with it, to obtain the context vector representation of each sentence.
In this example, the semantic relations between the sentences in the structured relational graph are converted with a Gaussian function into the weights of the attention coefficients in the context attention model of the encoder. These weights are then used to compute a weighted value of the correlation between each initial input vector representation and the other initial input vector representations, yielding the context vector representation of each sentence. As a result, the output context vector of each sentence better reflects the influence of the sentences that have stronger semantic relations with it, and thus embodies the important sentence content of the document set.
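A minimal sketch of such graph-weighted self-attention is shown below, assuming a single head with no learned projections; the exact Gaussian form used to convert graph relations into attention-coefficient weights is an assumption of the sketch, not a formula taken from the disclosure.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_self_attention(X, graph, sigma=1.0):
    """One graph-weighted self-attention pass over sentence vectors.

    X:     (n_sentences, d) initial input vector representations.
    graph: (n_sentences, n_sentences) relation strengths in [0, 1].
    sigma: width of the Gaussian; smaller values enforce the graph harder.
    """
    d = X.shape[1]
    scores = (X @ X.T) / np.sqrt(d)          # dot-product attention logits
    # Gaussian weighting of the graph: weakly related sentence pairs
    # receive a large negative bias, i.e. a small attention weight.
    bias = -((1.0 - graph) ** 2) / (2.0 * sigma ** 2)
    attn = softmax(scores + bias, axis=-1)   # graph-constrained coefficients
    return attn @ X                          # context vector per sentence
```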
An exemplary application scenario of the method of generating a summary of the present disclosure is described below in conjunction with fig. 3.
Fig. 3 illustrates one exemplary application scenario of the method of generating a summary according to the present disclosure.
As shown in fig. 3, a method 300 of generating a summary, running in an electronic device 310, may include:
first, an initial input vector representation 304 of each sentence of the document set is determined based on a context vector representation 303 of each word in each sentence obtained by encoding each sentence 302 in the document set 301.
Thereafter, a structured relationship graph 306 of the document set is constructed based on semantic relationships 305 between the statements in the document set.
Then, based on the structured relational graph 306, the initial input vector representation 304 of each sentence is context-encoded, resulting in a context vector representation 307 of each sentence.
Finally, the abstract text 308 of the document set is decoded based on the initial input vector representation 304, the structured relational graph 306 and the context vector representation 307 of each sentence.
It should be understood that the application scenario shown in fig. 3 is only an exemplary description of the method for generating the summary and does not limit it. For example, the steps shown in fig. 3 may be implemented in further detail, and steps of outputting the generated abstract text and further utilizing it may be added.
With further reference to fig. 4, fig. 4 shows a schematic flow chart diagram of yet another embodiment of a method of generating a summary according to the present disclosure.
As shown in fig. 4, the method 400 for generating a summary of the present embodiment may include the following steps:
step 401, determining an initial input vector representation of each sentence in the document set based on the context vector representation of each word in each sentence obtained by encoding each sentence in the document set.
In this embodiment, the execution body of the method of generating a summary (e.g., a terminal or a server shown in fig. 1) may acquire, locally or from a remote device, a document set obtained by splicing a plurality of documents. The execution body may then extract all the sentences in the document set, perform word segmentation on each sentence to obtain an input sentence set, and encode the input sentence set to obtain the context vector representation of each word in each sentence.
Step 402, constructing a structured relationship diagram of the document set based on semantic relationships among the sentences in the document set.
In this embodiment, the sentences in the document set composed of multiple documents are represented as nodes, and the semantic relations between the sentences are represented as edges, so that a structured relational graph of the document set can be constructed.
Step 403, based on the structural relational graph, performing context coding on the initial input vector representation of each statement to obtain a context vector representation of each statement.
In this embodiment, the initial input vector representation of each sentence can be encoded according to the constructed structured relational graph to obtain the context vector representation of each sentence. Specifically, the structured relational graph can serve as a constraint when the initial input vector representation of each sentence is encoded, so that the resulting context vector representation of each sentence fuses the semantic relationships embodied in the graph.
It should be understood that the operations and features described in steps 401 to 403 correspond to those described in steps 201 to 203 of the embodiment shown in fig. 2; the descriptions of steps 201 to 203 therefore also apply to steps 401 to 403 and are not repeated here.
Step 404, determining the attention coefficient of the local word-level attention layer based on the similarity between the vector representation of the current decoding step and the initial input vector representation of each sentence.
In this embodiment, the execution body may determine the similarity between the vector representation of the current decoding step and the initial input vector representation of each sentence by any method of computing the similarity between two vectors, in the prior art or developed in the future, such as the vector dot product of the two vectors, their cosine similarity, or an additional neural network.
Then, a SoftMax-style normalization can be applied to the similarities. On the one hand, normalization arranges the raw scores into a probability distribution whose element weights sum to 1; on the other hand, the intrinsic mechanism of SoftMax further highlights the weights of the important elements. The initial input vector representations of the sentences are then weighted and summed based on this result to obtain the correlation between the vector representation of the current decoding step and the context vector representation of each word in each sentence, i.e., the attention coefficients of the local word-level attention layer.
Step 405, performing weighted constraint on the attention coefficients of the local word-level attention layer by using the structured relational graph, to obtain the word-level context vector representation relating the current word corresponding to the vector representation of the current decoding step to the words in each sentence.
In this embodiment, according to the semantic relations between the sentences embodied in the structured relational graph, the correlations between the vector representation of the current decoding step and the context vector representations of the words in each sentence can be weighted and constrained, so as to obtain the word-level context vector representation of the current word corresponding to the vector representation of the current decoding step.
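A minimal sketch of steps 404 and 405 follows. Deriving one scalar weight per sentence from the structured relational graph is an assumption of the sketch, as are the function names and shapes; the disclosure leaves the exact form of the weighted constraint open.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_word_attention(dec_state, word_vectors, sent_weights):
    """Word-level context vector for the current decoding step.

    dec_state:    (d,) vector representation of the current decoding step.
    word_vectors: list of (n_words_i, d) word context vectors, one per sentence.
    sent_weights: (n_sentences,) weights derived from the structured relational
                  graph, used here to constrain the word-level coefficients.
    """
    context = np.zeros_like(dec_state)
    for g, W in zip(sent_weights, word_vectors):
        alpha = softmax(W @ dec_state) * g   # similarity -> softmax -> graph weighting
        context += alpha @ W                 # weighted sum of the sentence's words
    return context
```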
Step 406, determining the attention coefficient of the global statement level attention layer based on the similarity of the vector representation of the current decoding step and the context vector representation of each statement.
In this embodiment, the execution body may likewise determine the similarity between the vector representation of the current decoding step and the context vector representation of each sentence by any method of computing the similarity between two vectors, such as the vector dot product, cosine similarity, or an additional neural network.
Then, a SoftMax-style normalization can be applied to the similarities, which both arranges the raw scores into a probability distribution whose element weights sum to 1 and highlights the weights of the important elements. The sentence-level context vector representations of the sentences are then weighted and summed based on this result to obtain the correlation between the vector representation of the current decoding step and the sentence-level context vector representation of each sentence, i.e., the attention coefficients of the global sentence-level attention layer.
Step 407, performing weighted constraint on the attention coefficients of the global sentence-level attention layer by using the structured relational graph, to obtain the sentence-level context vector representation relating the current word corresponding to the vector representation of the current decoding step to each sentence.
In this embodiment, according to the semantic relations between the sentences embodied in the structured relational graph, the correlations between the vector representation of the current decoding step and the sentence-level context vector representations of the sentences can be weighted and constrained, so as to obtain the sentence-level context vector representation of the current word corresponding to the vector representation of the current decoding step.
In a specific example, performing weighted constraint on the attention coefficients of the global sentence-level attention layer by using the structured relational graph to obtain the sentence-level context vector representations of the current word and the sentences may include: first, determining the central sentence in the document set corresponding to the vector representation of the current decoding step, based on the result of aligning that vector representation with the context vector representation of each sentence; next, determining the semantic relations between the central sentence and the other sentences in the document set from the structured relational graph; then, determining the weights of the attention coefficients of the global sentence-level attention layer based on those semantic relations; and finally, applying these weights as a weighted constraint on the attention coefficients of the global sentence-level attention layer to obtain the sentence-level context vector representations of the sentences for the current decoding step.
In this example, the central sentence in the document set corresponding to the vector representation of the current decoding step is the sentence with the highest alignment probability for that vector representation. The execution body can determine the semantic relations between the central sentence and the other sentences in the document set from the semantic relations embodied in the structured relational graph.
The weights of the attention coefficients of the global sentence-level attention layer may then be determined based on the semantic relations between the central sentence and the other sentences in the document set, where the attention coefficients of the global sentence-level attention layer are the correlations between the vector representation of the current decoding step and the context vector representation of each sentence.
Finally, based on these weights, the contribution of the context vector of each sentence to the current decoding step is adjusted, thereby obtaining the sentence-level context vector representations of the sentences.
Constraining the attention coefficients of the global sentence-level attention layer with the structured relational graph in this way makes the output sentence-level context vector representations better reflect the influence of the important sentences, and thus embody the important sentence content of the document set.
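A minimal sketch of steps 406 and 407, including the central-sentence variant of this example, is given below; the renormalization step and the handling of the central sentence's own weight are assumptions of the sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_sentence_attention(dec_state, sent_vectors, graph):
    """Sentence-level context vector for the current decoding step.

    The sentence best aligned with the decoding state is treated as the
    central sentence; its graph relations to the other sentences supply
    the weights that constrain the sentence-level attention coefficients.
    """
    alpha = softmax(sent_vectors @ dec_state)   # alignment with each sentence
    center = int(np.argmax(alpha))              # central sentence of this step
    weights = graph[center].copy()
    weights[center] = 1.0                       # keep the central sentence itself
    alpha = alpha * weights                     # graph-weighted constraint
    alpha = alpha / alpha.sum()                 # renormalize after the constraint
    return alpha @ sent_vectors
```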
Step 408, determining a current word corresponding to the vector representation of the current decoding step based on the word-level context vector representation and the sentence-level context vector representation.
In this embodiment, the word-level context vector representation introduces a local influence on the vector representation of the current decoding step, and the sentence-level context vector representation introduces a global influence. Decoding with both representations makes the current word corresponding to the vector representation of the current decoding step more accurate and better able to reflect the intent of the document set. The execution body may decode the words of the abstract text in sequence by repeating steps 404 to 408, thereby obtaining the complete abstract text of the document set.
In some optional implementations of this step, determining the current word corresponding to the vector representation of the current decoding step based on the word-level context vector representation and the sentence-level context vector representation may include: fusing the word-level context vector representation and the sentence-level context vector representation to obtain a fused context vector representation of the current decoding step; and determining the current word corresponding to the fused context vector representation of the current decoding step.
In this implementation, the word-level context vector representation and the sentence-level context vector representation may be fused by any vector-fusion method in the prior art or developed in the future, which is not limited in this application. For example, the two vectors may be added, concatenated, linearly transformed, or fused using a vector-fusion model (which may be designed as desired).
After the word-level context vector representation and the sentence-level context vector representation are fused, the corresponding current word may be decoded based on the fused context vector representation of the current decoding step. Decoding based on the fused context vector representation makes the current word corresponding to the vector representation of the current decoding step more accurate and better able to reflect the intent of the document set.
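For illustration, a sketch of the fusion-and-prediction step follows, using concatenation plus a linear map, which is only one of the fusion options named above; the matrices W_fuse and W_vocab are assumed learned parameters, not elements named in the disclosure.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_current_word(word_ctx, sent_ctx, W_fuse, W_vocab):
    """Fuse the local and global context vectors and score the vocabulary.

    word_ctx, sent_ctx: (d,) word-level and sentence-level context vectors.
    W_fuse:  (d, 2d) assumed fusion matrix (addition or a dedicated fusion
             model would also satisfy the description).
    W_vocab: (vocab_size, d) assumed output projection.
    Returns a probability distribution over the vocabulary for the current word.
    """
    fused = W_fuse @ np.concatenate([word_ctx, sent_ctx])  # fused context vector
    return softmax(W_vocab @ fused)                        # P(current word)
```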
Compared with the embodiment shown in fig. 2, the method for generating the abstract of this embodiment draws on both the local influence and the global influence on the vector representation of the current decoding step during decoding, which improves decoding accuracy as well as the semantic coherence and information richness of each decoded word with respect to the previously decoded words.
With further reference to fig. 5, fig. 5 illustrates yet another exemplary application scenario of a method of generating a summary according to an embodiment of the present disclosure.
Fig. 5 illustrates an exemplary implementation of one particular embodiment of the method of generating a summary according to the present disclosure.
In fig. 5, the method of generating the abstract is implemented based on the Transformer model 500. The Transformer model 500 includes an encoder 510 and a decoder 520.
Specifically, the encoder 510 includes a sentence encoding layer 511 and a graph encoding layer 512, wherein the graph encoding layer 512 includes a graph self-attention model 513 (i.e., the context attention model in the above-described embodiments); the decoder 520 includes a hierarchical graph attention model 521, which comprises a local word-level attention layer and a global sentence-level attention layer.
Specifically, the method for generating the abstract shown in fig. 4 can be implemented based on the Transformer model in fig. 5 as follows:
first, a context vector of each word in each sentence obtained by encoding each sentence in the document set is input to the sentence encoding layer 511, and an initial input vector representation of each sentence in the document set output by the sentence encoding layer 511 is obtained.
Then, the initial input vector representation of each sentence is input to the graph encoding layer 512, which includes the graph self-attention model 513, to obtain the context vector representation of each sentence output by the graph encoding layer 512. The graph self-attention model 513 determines the weights of its attention coefficients based on the semantic relations between the sentences in the document set embodied by the structured relational graph.
The initial input vector representation of each sentence and the context vector representation of each sentence are then input into the local word-level attention layer and the global sentence-level attention layer of the hierarchical graph attention model 521 in the decoder 520, respectively.
Specifically, the attention coefficients of the local word-level attention layer are determined based on the similarity between the vector representation of the current decoding step and the initial input vector representation of each sentence, and the structured relational graph is used to apply a weighted constraint on these coefficients, yielding the word-level context vector representation relating the current word corresponding to the vector representation of the current decoding step to the words in each sentence.
Likewise, the attention coefficients of the global sentence-level attention layer are determined based on the similarity between the vector representation of the current decoding step and the context vector representation of each sentence, and the structured relational graph is used to apply a weighted constraint on these coefficients, yielding the sentence-level context vector representation relating the current word corresponding to the vector representation of the current decoding step to each sentence.
Then, the current word corresponding to the vector representation of the current decoding step is determined based on the word-level context vector representation output by the local word-level attention layer and the sentence-level context vector representation output by the global sentence-level attention layer.
It should be understood that the embodiment shown in fig. 5 is only an exemplary embodiment of the present application and does not limit it. For example, the encoder-decoder framework in the embodiments of figs. 2-5 can also be realized with other encoder-decoder frameworks in the prior art: the encoder and the decoder can each be implemented based on any one of network structures such as a CNN, an RNN, or an LSTM, and the network structures used by the encoder and the decoder may be the same or different. In these networks, Attention and Self-Attention mechanisms can generally be introduced as constraints. Meanwhile, other techniques, such as introducing word position information and residual networks, may be adopted to combine high-level semantic information with low-level detail information when computing the Attention.
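To summarize the data flow of fig. 5, the sketch below wires together the helper functions sketched earlier in this description (graph_self_attention, local_word_attention, global_sentence_attention, predict_current_word); the per-sentence weighting passed to the local attention layer is an assumption of the sketch.

```python
import numpy as np

def decoding_step(dec_state, word_vectors, X_init, graph, W_fuse, W_vocab):
    """One decoding step: graph-constrained encoding on the encoder side,
    hierarchical (word-level + sentence-level) graph attention on the
    decoder side, then fusion and word prediction."""
    sent_ctx_vectors = graph_self_attention(X_init, graph)        # encoder output
    # Per-sentence graph centrality as an assumed weighting for the local layer.
    word_ctx = local_word_attention(dec_state, word_vectors, graph.mean(axis=0))
    sent_ctx = global_sentence_attention(dec_state, sent_ctx_vectors, graph)
    return predict_current_word(word_ctx, sent_ctx, W_fuse, W_vocab)
```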
As shown in fig. 6, the apparatus 600 for generating a summary of the present embodiment includes: an initial vector determination module 601 configured to determine an initial input vector representation of each sentence of the document set based on a context vector representation of each word in each sentence obtained by encoding each sentence of the document set; a relationship graph construction module 602 configured to construct a structured relationship graph of a document set based on semantic relationships between statements in the document set; a context vector determining module 603 configured to perform context coding on the initial input vector representation of each statement based on the structured relational graph, so as to obtain a context vector representation of each statement; and a vector representation decoding module 604 configured to decode the abstract text of the document set based on the initial input vector representation of each sentence, the structured relational graph and the context vector representation of each sentence.
In some optional implementations of this embodiment, the context vector determination module 603 includes (not shown in the figure): a coefficient weight determination module configured to determine weights of attention coefficients of a context attention model of an encoder based on the structured relational graph; and the vector representation coding module is configured to code the initial input vector representation of each statement based on the weight of the attention coefficient of the context attention model to obtain the context vector representation of each statement.
In some optional implementations of this embodiment, the coefficient weight determination module is further configured to: adopting a Gaussian function to convert the semantic relation among the sentences in the structured relational graph into the weight of the attention coefficient in the context attention model of the encoder; and the vector representation encoding module is further configured to: and calculating a weighted value of the correlation between each initial input vector representation and the initial input vector representation having semantic relation with the initial input vector representation by adopting the weight of the attention coefficient to obtain the context vector representation of each statement.
In some optional implementations of this embodiment, the vector representation decoding module 604 includes (not shown in the figure): a coefficient determination module configured to determine an attention coefficient of a local word-level attention layer based on a similarity of a vector representation of a current decoding step and an initial input vector representation of each sentence; the coefficient weighting constraint module is configured to adopt a structural relational graph to carry out weighting constraint on the attention coefficient of the local word-level attention layer so as to obtain word-level context vector representations of a current word corresponding to the vector representation of the current decoding step and each word in each sentence; an attention determination module configured to determine an attention coefficient of a global sentence-level attention layer based on a similarity of a vector representation of a current decoding step and a context vector representation of each sentence; the attention weighted constraint module is configured to adopt a structured relational graph to carry out weighted constraint on the attention coefficient of the global statement level attention layer so as to obtain statement level context vector representations of a current word and each statement corresponding to the vector representation of the current decoding step; a current word determination module configured to determine a current word to which the vector representation of the current decoding step corresponds based on the word-level context vector representation and the sentence-level context vector representation.
In some optional implementations of the present embodiment, the attention weighting constraint module is further configured to: determining a central statement in the document set corresponding to the vector representation of the current decoding step based on the result of aligning the vector representation of the current decoding step with the context vector representation of each statement; determining semantic relations between the central sentences and other sentences in the document set by adopting a structured relational graph; determining the weight of the attention coefficient of the global statement level attention layer based on the semantic relation between the central statement and other statements in the document set; and adopting the weights of the attention coefficients of the global statement level attention layer to carry out weighted constraint on the attention coefficients of the global statement level attention layer so as to obtain the vector representation of the current decoding step and statement level context vector representations of all statements.
In some optional implementations of this embodiment, the current word determination module is further configured to: merging word-level context vector representation and statement-level context vector representation to obtain merged context vector representation of the current decoding step; determining a current word corresponding to the fused context vector representation of the current decoding step.
In some optional implementations of the embodiment, the semantic relationship between the sentences in the document set adopted in the relationship diagram constructing module 602 includes at least one of the following: semantic similarity relation among the sentences in the document set, discourse structure relation among the sentences in the document set and theme relation among the sentences in the document set.
It should be understood that the various elements recited in the apparatus 600 correspond to various steps recited in the methods described with reference to fig. 2-5. Thus, the operations and features described above for the method are equally applicable to the apparatus 600 and the various units included therein, and are not described in detail here.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of generating a summary provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of generating a summary provided herein.
The memory 702, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method of generating a summary in the embodiments of the present application (e.g., the initial vector determination module, the relation graph construction module, the context vector determination module, and the vector representation decoding module shown in fig. 6). The processor 701 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 702, i.e., implements the method of generating a summary in the above method embodiments.
The memory 702 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created in the use of the electronic device implementing the method of generating a summary, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected over a network to the electronic device implementing the method of generating a summary. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the method of generating a summary may further include an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703, and the output device 704 may be connected by a bus or other means; fig. 7 illustrates a bus connection as an example.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device, for example a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, or a joystick. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, because the encoding and decoding processes take into account the semantic relations among sentences, the contextual correlation between each word and the other words in its sentence, and the contextual correlation between each sentence and the other sentences in the document set, the generated summary text better reflects the important content of the document set; the resulting summary text is more coherent and concise, and the generated summary carries richer information.
It should be understood that the flows shown above may be used in various forms, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A method of generating a summary, the method comprising:
determining an initial input vector representation of each sentence of a document set based on a context vector representation of each word in each sentence, the context vector representations being obtained by encoding each sentence in the document set;
constructing a structured relational graph of the document set based on semantic relations among the sentences in the document set;
performing context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain a context vector representation of each sentence; and
decoding to obtain a summary text of the document set based on the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence, including: inputting the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence into a decoder, so that the decoder, taking the structured relational graph as a constraint on the initial input vector representation of each sentence and the context vector representation of each sentence, decodes to obtain the summary text of the document set.
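For illustration only, and not the patented implementation: the following minimal NumPy sketch walks the four claimed steps (word encoding, initial sentence vectors, a structured relational graph, graph-constrained context encoding). The mean pooling, cosine similarity, and softmax weighting are assumptions, and the random "encoder" stands in for a trained model.

    import numpy as np

    rng = np.random.default_rng(0)

    def encode_words(tokens, dim=16):
        # Stand-in word encoder: one context vector per word (random here).
        return rng.normal(size=(len(tokens), dim))

    def initial_sentence_vector(word_vectors):
        # Initial input vector representation of a sentence: mean-pooled words.
        return word_vectors.mean(axis=0)

    def build_relational_graph(sentence_vectors):
        # Structured relational graph as a cosine-similarity adjacency matrix.
        unit = sentence_vectors / np.linalg.norm(sentence_vectors, axis=1, keepdims=True)
        return unit @ unit.T

    def graph_constrained_context(sentence_vectors, graph):
        # Context-encode each sentence; its graph row weights its neighbours.
        weights = np.exp(graph)
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ sentence_vectors

    sentences = [["the", "cat", "sat"], ["a", "dog", "ran"], ["cats", "chase", "dogs"]]
    word_vecs = [encode_words(s) for s in sentences]
    init_vecs = np.stack([initial_sentence_vector(w) for w in word_vecs])
    graph = build_relational_graph(init_vecs)
    context_vecs = graph_constrained_context(init_vecs, graph)
    print(context_vecs.shape)  # (3, 16): one context vector per sentence

A decoder in the sense of claims 4-6 would then consume init_vecs, graph, and context_vecs together.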
2. The method of claim 1, wherein performing context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain the context vector representation of each sentence comprises:
determining weights of attention coefficients of a context attention model of an encoder based on the structured relational graph; and
encoding the initial input vector representation of each sentence based on the weights of the attention coefficients of the context attention model to obtain the context vector representation of each sentence.
3. The method of claim 2, wherein determining the weights of the attention coefficients of the context attention model of the encoder based on the structured relational graph comprises:
using a Gaussian function to convert the semantic relations among the sentences in the structured relational graph into the weights of the attention coefficients of the context attention model of the encoder;
and wherein encoding the initial input vector representation of each sentence based on the weights of the attention coefficients of the context attention model to obtain the context vector representation of each sentence comprises:
using the weights of the attention coefficients to compute a weighted value of the correlation between the initial input vector representation of each sentence and the initial input vector representations semantically related to it, to obtain the context vector representation of each sentence.
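For illustration only: the exact Gaussian form is not disclosed here, so the kernel below, exp(-(1 - r)^2 / (2*sigma^2)), is an assumption; it simply maps a stronger graph relation r to an attention-coefficient weight closer to 1.

    import numpy as np

    def gaussian_attention_weights(graph, sigma=1.0):
        # Gaussian conversion: relation score r in [0, 1] -> weight in (0, 1].
        return np.exp(-((1.0 - graph) ** 2) / (2.0 * sigma ** 2))

    def context_vectors(init_vecs, graph, sigma=1.0):
        scores = init_vecs @ init_vecs.T              # correlation between sentences
        w = gaussian_attention_weights(graph, sigma)  # graph-derived weights
        attn = np.exp(scores * w)                     # weighted attention coefficients
        attn /= attn.sum(axis=1, keepdims=True)
        return attn @ init_vecs                       # context vector per sentence

    rng = np.random.default_rng(1)
    init_vecs = rng.normal(size=(4, 8))               # 4 sentences, dimension 8
    graph = rng.random((4, 4))                        # toy relation scores
    print(context_vectors(init_vecs, graph).shape)    # (4, 8)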
4. The method of claim 1, wherein decoding to obtain the summary text of the document set based on the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence comprises:
determining attention coefficients of a local word-level attention layer based on the similarity between the vector representation of the current decoding step and the initial input vector representation of each sentence;
applying a weighted constraint to the attention coefficients of the local word-level attention layer using the structured relational graph, to obtain word-level context vector representations between the current word corresponding to the vector representation of the current decoding step and each word in each sentence;
determining attention coefficients of a global sentence-level attention layer based on the similarity between the vector representation of the current decoding step and the context vector representation of each sentence;
applying a weighted constraint to the attention coefficients of the global sentence-level attention layer using the structured relational graph, to obtain sentence-level context vector representations between the current word corresponding to the vector representation of the current decoding step and each sentence; and
determining the current word corresponding to the vector representation of the current decoding step based on the word-level context vector representations and the sentence-level context vector representations.
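For illustration only: a hypothetical single decoding step, assuming dot-product similarity and a multiplicative graph constraint applied before the softmax. The argument graph_row is assumed to hold the relation of the currently attended (for example, central) sentence to every sentence.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def decode_step(dec_state, word_vecs_per_sent, sent_ctx_vecs, graph_row):
        # Local word-level attention, graph-constrained.
        all_words = np.concatenate(word_vecs_per_sent)      # (N_words, d)
        owner = np.concatenate([np.full(len(w), i)          # sentence id per word
                                for i, w in enumerate(word_vecs_per_sent)])
        word_attn = softmax((all_words @ dec_state) * graph_row[owner])
        word_ctx = word_attn @ all_words                    # word-level context

        # Global sentence-level attention, graph-constrained.
        sent_attn = softmax((sent_ctx_vecs @ dec_state) * graph_row)
        sent_ctx = sent_attn @ sent_ctx_vecs                # sentence-level context
        return word_ctx, sent_ctx

    rng = np.random.default_rng(2)
    words = [rng.normal(size=(n, 8)) for n in (3, 5, 4)]    # 3 toy sentences
    sent_ctx_vecs = rng.normal(size=(3, 8))
    w_ctx, s_ctx = decode_step(rng.normal(size=8), words, sent_ctx_vecs,
                               np.array([1.0, 0.4, 0.7]))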
5. The method of claim 4, wherein applying the weighted constraint to the attention coefficients of the global sentence-level attention layer using the structured relational graph to obtain the sentence-level context vector representations between the current word corresponding to the vector representation of the current decoding step and all the sentences comprises:
determining a central sentence in the document set corresponding to the vector representation of the current decoding step based on a result of aligning the vector representation of the current decoding step with the context vector representation of each sentence;
determining semantic relations between the central sentence and the other sentences in the document set by using the structured relational graph;
determining the weights of the attention coefficients of the global sentence-level attention layer based on the semantic relations between the central sentence and the other sentences in the document set; and
applying the weighted constraint to the attention coefficients of the global sentence-level attention layer using the weights of the attention coefficients of the global sentence-level attention layer, to obtain the sentence-level context vector representations between the vector representation of the current decoding step and all the sentences.
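For illustration only: one plausible, assumed reading of this claim, where alignment is an argmax over dot products and the central sentence's row of the relational graph supplies the weights used in the final weighting step.

    import numpy as np

    def central_sentence_weights(dec_state, sent_ctx_vecs, graph):
        # Align the decoding-step vector with every sentence context vector.
        alignment = sent_ctx_vecs @ dec_state
        center = int(np.argmax(alignment))    # central sentence for this step
        return center, graph[center]          # its relations weight the attention

    rng = np.random.default_rng(3)
    graph = rng.random((4, 4))
    center, weights = central_sentence_weights(rng.normal(size=8),
                                               rng.normal(size=(4, 8)), graph)
    # `weights` would then re-weight the global sentence-level attention coefficients.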
6. The method of claim 4, wherein determining the current word corresponding to the vector representation of the current decoding step based on the word-level context vector representations and the sentence-level context vector representations comprises:
fusing the word-level context vector representations and the sentence-level context vector representations to obtain a fused context vector representation of the current decoding step; and
determining the current word corresponding to the fused context vector representation of the current decoding step.
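For illustration only: concatenation followed by a linear projection is one common fusion choice, assumed here rather than taken from the disclosure. A real decoder would score its vocabulary against the fused vector to emit the current word.

    import numpy as np

    def fuse(word_ctx, sent_ctx, W):
        # Fused context vector representation of the current decoding step.
        return np.tanh(W @ np.concatenate([word_ctx, sent_ctx]))

    rng = np.random.default_rng(4)
    d = 8
    W = rng.normal(size=(d, 2 * d))            # untrained stand-in projection
    fused = fuse(rng.normal(size=d), rng.normal(size=d), W)
    vocab = rng.normal(size=(100, d))          # toy vocabulary embeddings
    current_word_id = int(np.argmax(vocab @ fused))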
7. The method of any one of claims 1-6, wherein the semantic relations among the sentences in the document set comprise at least one of: a semantic similarity relation among the sentences in the document set, a discourse structure relation among the sentences in the document set, and a topic relation among the sentences in the document set.
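For illustration only: when several relation types are available, a weighted combination into a single graph is one natural construction; the mixing coefficients below are assumptions.

    import numpy as np

    def combine_relations(similarity, discourse, topic, alphas=(0.5, 0.3, 0.2)):
        # One structured relational graph from three per-type relation matrices.
        a_sim, a_disc, a_topic = alphas
        return a_sim * similarity + a_disc * discourse + a_topic * topic

    rng = np.random.default_rng(5)
    n = 4                                          # sentences in the document set
    sim, disc, topic = (rng.random((n, n)) for _ in range(3))
    graph = combine_relations(sim, disc, topic)    # (4, 4): relation per sentence pair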
8. An apparatus for generating a summary, the apparatus comprising:
an initial vector determination module configured to determine an initial input vector representation of each sentence of a document set based on a context vector representation of each word in each sentence, the context vector representations being obtained by encoding each sentence of the document set;
a relation graph construction module configured to construct a structured relational graph of the document set based on semantic relations among the sentences in the document set;
a context vector determination module configured to perform context encoding on the initial input vector representation of each sentence based on the structured relational graph to obtain a context vector representation of each sentence; and
a vector representation decoding module configured to decode to obtain a summary text of the document set based on the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence;
wherein the vector representation decoding module is further configured to input the initial input vector representation of each sentence, the structured relational graph, and the context vector representation of each sentence into a decoder, so that the decoder, taking the structured relational graph as a constraint on the initial input vector representation of each sentence and the context vector representation of each sentence, decodes to obtain the summary text of the document set.
9. The apparatus of claim 8, wherein the context vector determination module comprises:
a coefficient weight determination module configured to determine weights of attention coefficients of a context attention model of an encoder based on the structured relational graph; and
a vector representation encoding module configured to encode the initial input vector representation of each sentence based on the weights of the attention coefficients of the context attention model to obtain the context vector representation of each sentence.
10. The apparatus of claim 9, wherein the coefficient weight determination module is further configured to use a Gaussian function to convert the semantic relations among the sentences in the structured relational graph into the weights of the attention coefficients of the context attention model of the encoder; and
the vector representation encoding module is further configured to use the weights of the attention coefficients to compute a weighted value of the correlation between the initial input vector representation of each sentence and the initial input vector representations semantically related to it, to obtain the context vector representation of each sentence.
11. The apparatus of claim 8, wherein the vector representation decoding module comprises:
a coefficient determination module configured to determine attention coefficients of a local word-level attention layer based on the similarity between the vector representation of the current decoding step and the initial input vector representation of each sentence;
a coefficient weighting constraint module configured to apply a weighted constraint to the attention coefficients of the local word-level attention layer using the structured relational graph, to obtain word-level context vector representations between the current word corresponding to the vector representation of the current decoding step and each word in each sentence;
an attention determination module configured to determine attention coefficients of a global sentence-level attention layer based on the similarity between the vector representation of the current decoding step and the context vector representation of each sentence;
an attention weighting constraint module configured to apply a weighted constraint to the attention coefficients of the global sentence-level attention layer using the structured relational graph, to obtain sentence-level context vector representations between the current word corresponding to the vector representation of the current decoding step and each sentence; and
a current word determination module configured to determine the current word corresponding to the vector representation of the current decoding step based on the word-level context vector representations and the sentence-level context vector representations.
12. The apparatus of claim 11, wherein the attention weighting constraint module is further configured to: determine a central sentence in the document set corresponding to the vector representation of the current decoding step based on a result of aligning the vector representation of the current decoding step with the context vector representation of each sentence; determine semantic relations between the central sentence and the other sentences in the document set by using the structured relational graph; determine the weights of the attention coefficients of the global sentence-level attention layer based on the semantic relations between the central sentence and the other sentences in the document set; and apply the weighted constraint to the attention coefficients of the global sentence-level attention layer using the weights of the attention coefficients of the global sentence-level attention layer, to obtain the sentence-level context vector representations between the vector representation of the current decoding step and all the sentences.
13. The apparatus of claim 11, wherein the current word determination module is further configured to: fuse the word-level context vector representations and the sentence-level context vector representations to obtain a fused context vector representation of the current decoding step; and determine the current word corresponding to the fused context vector representation of the current decoding step.
14. The apparatus of any one of claims 8-13, wherein the semantic relations among the sentences in the document set employed by the relation graph construction module comprise at least one of: a semantic similarity relation among the sentences in the document set, a discourse structure relation among the sentences in the document set, and a topic relation among the sentences in the document set.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A server, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
17. An intelligent terminal, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202010305488.4A 2020-04-17 2020-04-17 Method and device for generating abstract Active CN111506725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010305488.4A CN111506725B (en) 2020-04-17 2020-04-17 Method and device for generating abstract


Publications (2)

Publication Number Publication Date
CN111506725A (en) 2020-08-07
CN111506725B (en) 2021-06-22

Family

ID=71869369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010305488.4A Active CN111506725B (en) 2020-04-17 2020-04-17 Method and device for generating abstract

Country Status (1)

Country Link
CN (1) CN111506725B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069309B (en) * 2020-09-14 2024-03-15 腾讯科技(深圳)有限公司 Information acquisition method, information acquisition device, computer equipment and storage medium
CN112148871B (en) * 2020-09-21 2024-04-12 北京百度网讯科技有限公司 Digest generation method, digest generation device, electronic equipment and storage medium
CN112749253B (en) * 2020-12-28 2022-04-05 湖南大学 Multi-text abstract generation method based on text relation graph
CN113282742B (en) * 2021-04-30 2022-08-12 合肥讯飞数码科技有限公司 Abstract acquisition method, electronic equipment and storage device
CN113434642B (en) * 2021-08-27 2022-01-11 广州云趣信息科技有限公司 Text abstract generation method and device and electronic equipment


Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN103678302B (en) * 2012-08-30 2018-11-09 北京百度网讯科技有限公司 A kind of file structure method for organizing and device
CN110196978A (en) * 2019-06-04 2019-09-03 重庆大学 A kind of entity relation extraction method for paying close attention to conjunctive word

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20050096897A1 (en) * 2003-10-31 2005-05-05 International Business Machines Corporation Document summarization based on topicality and specificity
CN108733682A (en) * 2017-04-14 2018-11-02 华为技术有限公司 A kind of method and device generating multi-document summary
CN107783960A (en) * 2017-10-23 2018-03-09 百度在线网络技术(北京)有限公司 Method, apparatus and equipment for Extracting Information
CN109145105A (en) * 2018-07-26 2019-01-04 福州大学 A kind of text snippet model generation algorithm of fuse information selection and semantic association
CN110348016A (en) * 2019-07-15 2019-10-18 昆明理工大学 Text snippet generation method based on sentence association attention mechanism

Non-Patent Citations (1)

Title
"Multi-Document Summary Extraction with a Multi-Information Sentence Graph Model" (《融合多信息句子图模型的多文档摘要抽取》); Jiang Yafang et al.; Computer Engineering & Science (《计算机工程与科学》); 2020-03-31; full text *

Also Published As

Publication number Publication date
CN111506725A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111506725B (en) Method and device for generating abstract
JP7398402B2 (en) Entity linking method, device, electronic device, storage medium and computer program
CN110390103B (en) Automatic short text summarization method and system based on double encoders
KR102577514B1 (en) Method, apparatus for text generation, device and storage medium
JP2022013602A (en) Method of extracting event in text, device, electronic equipment and storage medium
KR20210040851A (en) Text recognition method, electronic device, and storage medium
CN111143561B (en) Intention recognition model training method and device and electronic equipment
JP7301922B2 (en) Semantic retrieval method, device, electronic device, storage medium and computer program
JP7149993B2 (en) Pre-training method, device and electronic device for sentiment analysis model
CN112148871B (en) Digest generation method, digest generation device, electronic equipment and storage medium
CN112036162B (en) Text error correction adaptation method and device, electronic equipment and storage medium
JP2021111413A (en) Method and apparatus for mining entity focus in text, electronic device, computer-readable storage medium, and computer program
JP7159248B2 (en) Review information processing method, apparatus, computer equipment and medium
CN112000792A (en) Extraction method, device, equipment and storage medium of natural disaster event
CN111428514A (en) Semantic matching method, device, equipment and storage medium
CN111737954A (en) Text similarity determination method, device, equipment and medium
CN112270198B (en) Role determination method and device, electronic equipment and storage medium
CN111460135B (en) Method and device for generating text abstract
CN111078825A (en) Structured processing method, structured processing device, computer equipment and medium
CN111930915B (en) Session information processing method, device, computer readable storage medium and equipment
CN112528669A (en) Multi-language model training method and device, electronic equipment and readable storage medium
CN112507697A (en) Event name generation method, device, equipment and medium
EP3855341A1 (en) Language generation method and apparatus, electronic device and storage medium
Hsueh et al. A Task-oriented Chatbot Based on LSTM and Reinforcement Learning
CN111783395A (en) Method and device for outputting text

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant