CN116450779B - Text generation method and related device - Google Patents

Text generation method and related device

Info

Publication number
CN116450779B
CN116450779B (Application CN202310714379.1A)
Authority
CN
China
Prior art keywords
text
feature
decoder
attention mechanism
cross
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310714379.1A
Other languages
Chinese (zh)
Other versions
CN116450779A (en)
Inventor
颜子涵
亓克娜
王卿云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sohu New Media Information Technology Co Ltd
Original Assignee
Beijing Sohu New Media Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sohu New Media Information Technology Co Ltd filed Critical Beijing Sohu New Media Information Technology Co Ltd
Priority to CN202310714379.1A priority Critical patent/CN116450779B/en
Publication of CN116450779A publication Critical patent/CN116450779A/en
Application granted granted Critical
Publication of CN116450779B publication Critical patent/CN116450779B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/3331 - Query processing
    • G06F16/334 - Query execution
    • G06F16/3344 - Query execution using natural language analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/338 - Presentation of query results
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/0455 - Auto-encoder networks; Encoder-decoder networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a text generation method and a related device. Features of a news text are extracted by a first encoder to obtain corresponding first features; cross-attention mechanism calculation is performed on the first features by a first decoder to obtain a corresponding question text; the first features and the question text are fused to obtain second features; and cross-attention mechanism calculation is performed on the second features by a second decoder to obtain two viewpoint texts with opposite viewpoints. The method and the device can fully understand the content of the news text through the first encoder and extract rich features, accurately generate the corresponding question text through the first decoder, fully fuse the question text with the features, and then generate the corresponding viewpoint texts through the second decoder. The viewpoint texts are thus produced only after the question has been fully understood, the question text and viewpoint texts are generated more accurately, and the opposition between the two viewpoint texts is clear and easy to distinguish.

Description

Text generation method and related device
Technical Field
The invention relates to the field of Internet, in particular to a text generation method and a related device.
Background
With the rapid development of the Internet, more and more users interact with one another on different social network platforms. Creating discussion questions and corresponding viewpoint options lets users participate in newly occurring events and share and discuss their views more quickly, which improves the activity of community users.
Current generative models infer the question and the options in a single step, so questions are generated without the content being fully understood. That is, the model may output viewpoint options before the question itself has been completely understood, and the resulting options sometimes fail to answer the question. How to accurately generate discussion questions and corresponding viewpoint options is therefore a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above problems, the present invention provides a text generation method and related device, which can accurately generate a question text and viewpoint texts, effectively solving the problems existing in the prior art.
In a first aspect, a text generation method includes:
extracting features of the news text through a first encoder to obtain corresponding first features;
performing cross-attention mechanism calculation on the first feature through a first decoder to obtain a corresponding question text;
fusing the first feature and the question text to obtain a second feature;
and performing cross-attention mechanism calculation on the second feature through a second decoder to obtain two viewpoint texts with opposite viewpoints, wherein the viewpoint texts are viewpoints aiming at the question text.
With reference to the first aspect, in some optional embodiments, the extracting, by the first encoder, features of the news text to obtain corresponding first features includes:
and extracting features of the headline text and the content text of the news text through the first encoder to obtain the corresponding first features and a first vector group, wherein the first vector group comprises a plurality of Q vectors, a plurality of K vectors and a plurality of V vectors.
In combination with the above embodiment, in some optional embodiments, the calculating, by the first decoder, the cross-attention mechanism on the first feature to obtain a corresponding question text includes:
performing the cross-attention mechanism calculation on the first feature and the first vector group through the first decoder to obtain the question text.
In combination with the above embodiment, in some optional embodiments, the performing, by the first decoder, the cross-attention mechanism calculation on the first feature and the first vector group to obtain the question text includes:
performing beam search on the first feature and the first vector group through the first decoder after performing the cross-attention mechanism calculation, to obtain the question text.
With reference to the first aspect, in some optional embodiments, the fusing the first feature and the question text to obtain a second feature includes:
converting the question text into corresponding question text features;
and splicing and fusing the first feature and the question text features to obtain the second feature.
With reference to the first aspect, in some optional embodiments, the calculating, by a second decoder, the cross-attention mechanism on the second feature to obtain two viewpoint texts with opposite viewpoints includes:
performing beam search after performing the cross-attention mechanism calculation on the second feature by the second decoder, to obtain the two viewpoint texts with opposite viewpoints.
In a second aspect, a text generating apparatus includes: the device comprises a feature extraction unit, a first calculation unit, a feature fusion unit and a second calculation unit;
the feature extraction unit is used for extracting features of the news text through the first encoder to obtain corresponding first features;
the first computing unit is used for performing cross-attention mechanism computation on the first feature through a first decoder to obtain a corresponding question text;
the feature fusion unit is used for fusing the first feature and the question text to obtain a second feature;
the second computing unit is configured to perform cross-attention mechanism computation on the second feature through a second decoder to obtain two viewpoint texts with opposite viewpoints, where the viewpoint texts are viewpoints aiming at the question text.
With reference to the second aspect, in some optional embodiments, the feature extraction unit includes: a feature extraction subunit;
the feature extraction subunit is configured to perform feature extraction on the headline text and the content text of the news text by using the first encoder to obtain the corresponding first feature and a first vector group, where the first vector group includes a plurality of Q vectors, a plurality of K vectors, and a plurality of V vectors.
In a third aspect, a computer-readable storage medium has stored thereon a program that, when executed by a processor, implements the text generation method of any of the above.
In a fourth aspect, an electronic device includes at least one processor, at least one memory connected to the processor, and a bus; the processor and the memory communicate with each other through the bus; and the processor is configured to invoke the program instructions in the memory to perform any of the text generation methods above.
By means of the above technical scheme, the text generation method and related device provided by the invention can extract features of a news text through a first encoder to obtain corresponding first features; perform cross-attention mechanism calculation on the first feature through a first decoder to obtain a corresponding question text; fuse the first feature and the question text to obtain a second feature; and perform cross-attention mechanism calculation on the second feature through a second decoder to obtain two viewpoint texts with opposite viewpoints, wherein the viewpoint texts are viewpoints aiming at the question text. Therefore, the method and the device can fully understand the content of the news text based on the first encoder and extract rich features, accurately generate the corresponding question text based on the first decoder, fully fuse the question text with the features, and then generate the corresponding viewpoint texts based on the second decoder, so that the viewpoint texts are obtained after the question has been fully understood, the question text and viewpoint texts are generated more accurately, and the opposition between the viewpoint texts is clear and easy to distinguish.
The foregoing is merely an overview of the technical solution of the present invention. To make the technical means of the present invention clearer so that it can be implemented according to the contents of the specification, and to make the above and other objects, features, and advantages of the present invention more readily apparent, preferred embodiments of the invention are described in detail below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a flow chart of a first text generation method provided by the present invention;
FIG. 2 shows a flow chart of a second text generation method provided by the present invention;
FIG. 3 is a flow chart illustrating a third text generation method provided by the present invention;
FIG. 4 is a flow chart illustrating a fourth text generation method provided by the present invention;
FIG. 5 is a flowchart of a fifth text generation method provided by the present invention;
FIG. 6 is a flowchart of a sixth text generation method provided by the present invention;
FIG. 7 shows a schematic diagram of one embodiment provided by the present invention;
FIG. 8 is a schematic diagram of a cluster search according to the present invention;
fig. 9 is a schematic diagram showing a structure of a text generating apparatus provided by the present invention;
fig. 10 shows a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in fig. 1, the present invention provides a text generation method, including: s100, S200, S300, and S400;
s100, extracting features of the news text through a first encoder to obtain corresponding first features;
Optionally, the first encoder of the present invention may be implemented as a standard encoder; the news text may include a news headline and news content.
Optionally, the invention may splice the news headline and the news content in advance to obtain the news text, and then input the news text into the encoder to obtain the features output by the encoder. Taking a 12-layer Transformer encoder as an example, the feature vector output by the last Transformer layer can be taken as the first feature.
Optionally, during feature extraction, the encoder of the present invention uses its weight matrices to compute a Q vector, a K vector, and a V vector for each character of the news text. That is, each character corresponds to one Q vector, one K vector, and one V vector, and the first encoder may output the Q, K, and V vectors of every character at the same time.
For example, as shown in fig. 2, in combination with the embodiment shown in fig. 1, in some alternative embodiments, the S100 includes S110:
s110, extracting features of the headline text and the content text of the news text through the first encoder to obtain the corresponding first features and a first vector group, wherein the first vector group comprises a plurality of Q vectors, a plurality of K vectors and a plurality of V vectors.
It should be noted that since the news text includes a plurality of characters, and each character corresponds to one Q vector, one K vector, and one V vector, the first vector group accordingly includes the Q vector, the K vector, and the V vector of each character.
Optionally, Q, K, and V are well-known concepts in encoder architectures; refer to the relevant literature for their explanation.
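As a hedged illustration of the per-character Q/K/V computation described above (the dimensions, the random weights, and the toy input are all made up for this sketch; the patent does not specify them), one Q, K, and V vector per character is obtained by multiplying each character's embedding with learned Wq, Wk, and Wv matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8                       # embedding size (illustrative only)

# one embedding vector per character of the news text
chars = list("apple is ripe")
X = rng.normal(size=(len(chars), d_model))

# learned projection matrices (random here, for illustration)
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

Q, K, V = X @ Wq, X @ Wk, X @ Wv  # one Q/K/V vector per character
print(Q.shape, K.shape, V.shape)  # each is (len(chars), d_model)
```

The first vector group of S110 then corresponds to these three matrices, one row per character.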
S200, performing cross-attention mechanism calculation on the first feature through a first decoder to obtain a corresponding question text;
Optionally, the first decoder according to the present invention may be a standard decoder. The cross-attention mechanism is a well-known concept in decoders; the present invention does not describe it in greater detail here, and reference may be made to the relevant explanations.
Optionally, the first decoder according to the present invention may be an autoregressive decoder, and the present invention is not limited thereto.
Optionally, as shown in fig. 3, in combination with the embodiment shown in fig. 2, in some optional embodiments, the S200 includes S210:
S210, performing the cross-attention mechanism calculation on the first feature and the first vector group through the first decoder to obtain the question text.
Optionally, as mentioned above, the first encoder may further output the corresponding Q, K, and V vectors, and the first decoder may use the Q, K, and V vectors output by the first encoder to perform the corresponding cross-attention mechanism calculation, improving the accuracy of the present invention.
Optionally, the question text generated above may be understood as a discussion question or topic generated based on the news text, for which corresponding viewpoint texts may subsequently be generated.
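A minimal sketch of the cross-attention computation itself, in the standard scaled-dot-product form (all dimensions and the random inputs are illustrative; which side supplies Q, K, and V here follows the usual convention rather than any detail specific to this patent):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q_dec, K_enc, V_enc):
    """Each decoder position attends over all encoder positions."""
    d = Q_dec.shape[-1]
    alpha = softmax(Q_dec @ K_enc.T / np.sqrt(d))  # attention weights
    return alpha @ V_enc                           # weighted sum of V vectors

rng = np.random.default_rng(1)
enc_len, dec_len, d = 6, 3, 8
out = cross_attention(rng.normal(size=(dec_len, d)),
                      rng.normal(size=(enc_len, d)),
                      rng.normal(size=(enc_len, d)))
print(out.shape)  # one output vector per decoder position
```

Each row of `alpha` sums to 1, so every decoder position mixes the encoder-side V vectors according to how well its query matches each key.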
As shown in fig. 4, in combination with the embodiment shown in fig. 3, in some alternative embodiments, the S210 includes: s211;
S211, performing beam search on the first feature and the first vector group through the first decoder after performing the cross-attention mechanism calculation, to obtain the question text.
Optionally, the search used here is beam search. When the first decoder decodes, it usually returns only the single most probable sentence; beam search instead keeps the N most probable sentences, which increases the diversity of the generated text.
S300, fusing the first feature and the question text to obtain a second feature;
Optionally, in order to make the subsequently generated viewpoint texts more relevant to the question text, and also to ensure that the subsequently generated viewpoint texts are viewpoints output after the news text has been fully understood, the invention may fuse the first feature and the question text to obtain the second feature, and generate the viewpoint texts based on the second feature.
Optionally, as shown in fig. 5, in combination with the embodiment shown in fig. 1, in some optional embodiments, the S300 includes: s310 and S320;
s310, converting the question text into corresponding question text features;
and S320, splicing and fusing the first feature and the question text features to obtain the second feature.
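The splicing in S320 can be read as simple concatenation; a minimal sketch (the sequence-axis concatenation and all dimensions are assumptions for illustration, as the patent does not fix them):

```python
import numpy as np

def fuse(first_feature, question_feature):
    """Splice (concatenate) the news-text features with the
    question-text features along the sequence axis."""
    return np.concatenate([first_feature, question_feature], axis=0)

first = np.ones((10, 8))     # e.g. 10 news characters, feature dim 8
question = np.ones((4, 8))   # e.g. 4 question characters, feature dim 8
second = fuse(first, question)
print(second.shape)          # (14, 8): second feature fed to the second decoder
```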
S400, performing cross-attention mechanism calculation on the second feature through a second decoder to obtain two viewpoint texts with opposite viewpoints, wherein the viewpoint texts are viewpoints aiming at the question text.
Optionally, the opposite viewpoint texts of the present invention may be "yes" and "no", or "right" and "wrong", or "agree" and "disagree", etc., and the present invention is not limited thereto.
As shown in fig. 6, in combination with the embodiment shown in fig. 1, in some alternative embodiments, the S400 includes: s410;
and S410, performing cluster search after performing the cross attention mechanism calculation on the second feature through the second decoder to obtain the viewpoint text with two opposite viewpoints.
Optionally, in order to further clearly describe the whole implementation of the present invention, the implementation will be described below by taking as an example the news text "Apples are ripe (title) + It is the season when apples ripen, and red apples are my favorite (content)".
As shown in fig. 7, the present invention inputs "Apples are ripe + It is the season when apples ripen, and red apples are my favorite", and looks up the feature vector corresponding to each word in the vocabulary (the Transformer model of the present invention carries a feature vector for each Chinese word). The feature vectors are multiplied by the Wq, Wk, and Wv matrices to obtain the Q vector, K vector, and V vector corresponding to each word; cross-attention mechanism calculation is then performed (the Q vector of each word is multiplied by the K vectors of the words to obtain the corresponding alpha weights, and the alpha weights are then multiplied by the V vectors of the words), and finally the output of each word is obtained. As can be seen from the foregoing embodiments, the present invention feeds the Q vectors and K vectors obtained by the first encoder during its cross-attention mechanism calculation into the cross-attention mechanism module of the first decoder; the first decoder provides the V vectors, the three are used together in the cross-attention mechanism calculation, and the decoder finally outputs the question text "Do you think this apple is tasty?". Similarly, the invention inputs the question text "Do you think this apple is tasty?" into the second decoder, which outputs the two viewpoint options "tasty" and "not tasty".
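The two-stage flow just described can be sketched end to end. Everything below is a toy stand-in (the `encode`/`decode` functions that return random features or averages, and all dimensions, are placeholders for the real encoder and decoder stacks; only the S100-S400 chaining reflects the patent):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8

def encode(text):                 # stand-in for the first encoder
    return rng.normal(size=(len(text), d))

def decode(features):             # stand-in for a decoder pass
    return features.mean(axis=0)

news = "Apples are ripe. Red apples are my favorite."
first_feature = encode(news)                       # step S100
question_repr = decode(first_feature)              # step S200: question text
second_feature = np.concatenate(                   # step S300: fuse
    [first_feature, question_repr[None, :]], axis=0)
viewpoint_repr = decode(second_feature)            # step S400: two viewpoints
print(first_feature.shape, second_feature.shape, viewpoint_repr.shape)
```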
It should be noted that beam search improves on greedy search by expanding the search space, making it easier to obtain a globally optimal solution. Beam search has one parameter, the beam size z, indicating that the z highest-scoring sequences are kept at each time step and generation then continues from those z sequences at the next time step. For example, fig. 8 illustrates the procedure of beam search with z = 2.
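A minimal beam-search sketch with z = 2, matching the description above (the per-step token distributions are toy data; a real decoder conditions each step on the kept prefixes, which is simplified away here):

```python
import math

def beam_search(step_probs, z=2):
    """Keep the z highest-scoring partial sequences at each step.

    step_probs: list of dicts mapping token -> probability at each step
    (a toy stand-in for a decoder's conditional distributions).
    """
    beams = [([], 0.0)]                  # (tokens, cumulative log-score)
    for probs in step_probs:
        candidates = [
            (tokens + [tok], score + math.log(p))
            for tokens, score in beams
            for tok, p in probs.items()
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:z]           # prune to beam size z
    return beams

steps = [{"A": 0.6, "B": 0.4}, {"A": 0.3, "B": 0.7}]
for tokens, score in beam_search(steps, z=2):
    print(tokens, round(math.exp(score), 2))  # ['A', 'B'] 0.42, then ['B', 'B'] 0.28
```

Greedy search would commit to "A" at step 1 and could miss the globally best sequence; keeping z candidates defers that commitment.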
As shown in fig. 9, the present invention provides a text generating apparatus including: a feature extraction unit 100, a first calculation unit 200, a feature fusion unit 300, and a second calculation unit 400;
the feature extraction unit 100 is configured to perform feature extraction on the news text through a first encoder, so as to obtain corresponding first features;
the first calculating unit 200 is configured to perform cross-attention mechanism calculation on the first feature through a first decoder to obtain a corresponding question text;
the feature fusion unit 300 is configured to fuse the first feature and the question text to obtain a second feature;
the second calculating unit 400 is configured to perform cross-attention mechanism calculation on the second feature through a second decoder to obtain two viewpoint texts with opposite viewpoints, where the viewpoint texts are viewpoints aiming at the question text.
In connection with the embodiment shown in fig. 9, in some alternative embodiments, the feature extraction unit 100 includes: a feature extraction subunit;
the feature extraction subunit is configured to perform feature extraction on the headline text and the content text of the news text by using the first encoder to obtain the corresponding first feature and a first vector group, where the first vector group includes a plurality of Q vectors, a plurality of K vectors, and a plurality of V vectors.
In combination with the above embodiment, in some alternative embodiments, the first computing unit 200 includes: a first subunit;
the first subunit is configured to perform, by using the first decoder, the cross-attention mechanism calculation on the first feature and the first vector group, to obtain the question text.
In combination with the above embodiment, in some alternative embodiments, the first subunit includes: a second subunit;
and the second subunit is configured to perform beam search on the first feature and the first vector group through the first decoder after performing the cross-attention mechanism calculation, to obtain the question text.
In connection with the embodiment shown in fig. 9, in some alternative embodiments, the feature fusion unit 300 includes: a feature transformation subunit and a feature stitching subunit;
the feature conversion subunit is used for converting the question text into corresponding question text features;
and the feature splicing subunit is used for splicing and fusing the first feature and the question text features to obtain the second feature.
In connection with the embodiment shown in fig. 9, in some alternative embodiments, the second computing unit 400 includes: a second computing subunit;
and the second calculating subunit is configured to perform beam search after performing the cross-attention mechanism calculation on the second feature through the second decoder, to obtain the two viewpoint texts with opposite viewpoints.
The present invention provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the text generation method of any one of the above.
As shown in fig. 10, the present invention provides an electronic device 70. The electronic device 70 includes at least one processor 701, at least one memory 702, and a bus 703 connected to the processor 701; the processor 701 and the memory 702 communicate with each other through the bus 703; and the processor 701 is configured to invoke the program instructions in the memory 702 to perform any of the text generation methods above.
In the present invention, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to across embodiments, and each embodiment mainly describes its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and relevant details can be found in the description of the method embodiments.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A text generation method, comprising:
extracting features of the news text through a first encoder to obtain corresponding first features;
performing cross-attention mechanism calculation on the first feature through a first decoder to obtain a corresponding question text;
fusing the first feature and the question text to obtain a second feature;
and performing cross-attention mechanism calculation on the second feature through a second decoder to obtain two viewpoint texts with opposite viewpoints, wherein the viewpoint texts are viewpoints aiming at the question text.
2. The method according to claim 1, wherein the feature extraction of the news text by the first encoder, to obtain the corresponding first feature, includes:
and extracting features of the headline text and the content text of the news text through the first encoder to obtain the corresponding first features and a first vector group, wherein the first vector group comprises a plurality of Q vectors, a plurality of K vectors and a plurality of V vectors.
3. The method of claim 2, wherein the performing, by the first decoder, the cross-attention mechanism calculation on the first feature to obtain the corresponding question text includes:
performing the cross-attention mechanism calculation on the first feature and the first vector group through the first decoder to obtain the question text.
4. A method according to claim 3, wherein said performing, by said first decoder, said cross-attention mechanism calculation on said first feature and said first vector group to obtain said question text comprises:
performing beam search on the first feature and the first vector group through the first decoder after performing the cross-attention mechanism calculation, to obtain the question text.
5. The method of claim 1, wherein fusing the first feature and the question text to obtain a second feature comprises:
converting the question text into corresponding question text features;
and splicing and fusing the first feature and the question text features to obtain the second feature.
6. The method according to claim 1, wherein the cross-attention mechanism calculation of the second feature by the second decoder obtains two perspective texts with opposite perspectives, including:
and performing beam search after performing the cross-attention mechanism calculation on the second feature by the second decoder, to obtain the two viewpoint texts with opposite viewpoints.
7. A text generating apparatus, comprising: the device comprises a feature extraction unit, a first calculation unit, a feature fusion unit and a second calculation unit;
the feature extraction unit is used for extracting features of the news text through the first encoder to obtain corresponding first features;
the first computing unit is used for performing cross-attention mechanism computation on the first feature through a first decoder to obtain a corresponding question text;
the feature fusion unit is used for fusing the first feature and the question text to obtain a second feature;
the second calculating unit is configured to calculate the second feature by using a second decoder according to a cross-attention mechanism, so as to obtain two perspective texts with opposite perspectives, where the perspective texts are perspectives for the question text.
8. The apparatus according to claim 7, wherein the feature extraction unit includes: a feature extraction subunit;
the feature extraction subunit is configured to perform feature extraction on the headline text and the content text of the news text by using the first encoder to obtain the corresponding first feature and a first vector group, where the first vector group includes a plurality of Q vectors, a plurality of K vectors, and a plurality of V vectors.
9. A computer-readable storage medium having a program stored thereon, which when executed by a processor implements the text generation method according to any one of claims 1 to 6.
10. An electronic device comprising at least one processor, at least one memory, and a bus connected to the processor; the processor and the memory communicate with each other through the bus; and the processor is configured to invoke program instructions in the memory to perform the text generation method of any of claims 1 to 6.
CN202310714379.1A 2023-06-16 2023-06-16 Text generation method and related device Active CN116450779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310714379.1A CN116450779B (en) 2023-06-16 2023-06-16 Text generation method and related device

Publications (2)

Publication Number Publication Date
CN116450779A CN116450779A (en) 2023-07-18
CN116450779B true CN116450779B (en) 2023-09-12

Family

ID=87134196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310714379.1A Active CN116450779B (en) 2023-06-16 2023-06-16 Text generation method and related device

Country Status (1)

Country Link
CN (1) CN116450779B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221964A (en) * 2019-12-25 2020-06-02 西安交通大学 Text generation method guided by evolution trends of different facet viewpoints
CN112307726A (en) * 2020-11-09 2021-02-02 浙江大学 Automatic court opinion generation method guided by causal deviation removal model
CN114676234A (en) * 2022-02-22 2022-06-28 华为技术有限公司 Model training method and related equipment
CN116089576A (en) * 2022-11-09 2023-05-09 南开大学 Pre-training model-based fully-generated knowledge question-answer pair generation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11481418B2 (en) * 2020-01-02 2022-10-25 International Business Machines Corporation Natural question generation via reinforcement learning based graph-to-sequence model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Trigger-word-free Chinese event detection based on dual attention; Cheng Yong et al.; Computer Science; full text *

Similar Documents

Publication Publication Date Title
US11386271B2 (en) Mathematical processing method, apparatus and device for text problem, and storage medium
CN110610700B (en) Decoding network construction method, voice recognition method, device, equipment and storage medium
CN108334487B (en) Missing semantic information completion method and device, computer equipment and storage medium
US20200159755A1 (en) Summary generating apparatus, summary generating method and computer program
US20190164064A1 (en) Question and answer interaction method and device, and computer readable storage medium
CN110634487B (en) Bilingual mixed speech recognition method, device, equipment and storage medium
CN104573099B (en) The searching method and device of topic
JP6677419B2 (en) Voice interaction method and apparatus
CN108228576B (en) Text translation method and device
CN110472043B (en) Clustering method and device for comment text
US20220358297A1 (en) Method for human-machine dialogue, computing device and computer-readable storage medium
CN111813923A (en) Text summarization method, electronic device and storage medium
CN111159394B (en) Text abstract generation method and device
Vichyaloetsiri et al. Web service framework to translate text into sign language
CN105373527B (en) Omission recovery method and question-answering system
CN113343692B (en) Search intention recognition method, model training method, device, medium and equipment
CN110489761B (en) Chapter-level text translation method and device
CN116450779B (en) Text generation method and related device
Lee et al. Impact of out-of-vocabulary words on the twitter experience of blind users
Jia et al. Post-training dialogue summarization using pseudo-paraphrasing
CN113782030A (en) Error correction method based on multi-mode speech recognition result and related equipment
CN109002498B (en) Man-machine conversation method, device, equipment and storage medium
JP2020149119A (en) Recommendation sentence generation device, recommendation sentence generation method, and recommendation sentence generation program
JP6097791B2 (en) Topic continuation desire determination device, method, and program
JP2019087157A (en) Word vector conversion apparatus, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant