CN110688834B - Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model - Google Patents


Info

Publication number
CN110688834B
CN110688834B (application CN201910780331.4A)
Authority
CN
China
Prior art keywords
style
target
source
deep learning
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910780331.4A
Other languages
Chinese (zh)
Other versions
CN110688834A (en
Inventor
龙翀
王雅芳
Current Assignee
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201910780331.4A
Publication of CN110688834A
Application granted
Publication of CN110688834B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods


Abstract

An exemplary aspect of the present disclosure relates to a method of intelligent document style rewriting based on a deep learning model, comprising receiving a source document associated with a source style and at least one target style; for each of one or more natural sentences of the source document: generating, by a deep learning model, a semantic vector corresponding to the natural sentence of the source document based on the source style; and generating, by the deep learning model, a target natural sentence corresponding to the semantic vector based on the at least one target style; and sequentially merging one or more target natural sentences corresponding to the one or more natural sentences of the source document to generate at least one target document associated with the at least one target style. The present disclosure also relates to corresponding devices and the like.

Description

Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model
Technical Field
The application relates to artificial intelligence, in particular to intelligent manuscript rewriting based on deep learning.
Background
With the explosive development of self-media in the present information age, the public-opinion propaganda means of various media are increasingly abundant and the preferences and tastes of readers are increasingly diversified, which brings great challenges to writing and publishing work. The same article or news manuscript often needs to be rewritten into versions of different styles according to the characteristics of various media and readers, so as to suit readers of specific categories, strata, tastes, and the like, and to improve the article's actual readership, the readers' degree of interest, and even its influence. Personalized rewriting of articles therefore meets a broad practical demand.
However, readers' categories, strata, tastes, and the like vary widely. To suit a particular readership, essentially the same article needs to be rewritten many times, which greatly increases the writing workload.
Accordingly, there is a need in the art for a technique that enables automated intelligent rewriting of documents.
Disclosure of Invention
An exemplary aspect of the present disclosure relates to a method of intelligent document style rewriting based on a deep learning model, comprising receiving a source document associated with a source style and at least one target style; for each of one or more natural sentences of the source document: generating, by a deep learning model, a semantic vector corresponding to the natural sentence of the source document based on the source style; and generating, by the deep learning model, a target natural sentence corresponding to the semantic vector based on the at least one target style; and sequentially merging one or more target natural sentences corresponding to the one or more natural sentences of the source document to generate at least one target document associated with the at least one target style.
According to an exemplary embodiment, the deep learning model includes an encoder and a decoder, wherein a semantic vector corresponding to a natural sentence of the source document is generated by the encoder of the deep learning model based on the source style, and a target natural sentence corresponding to the semantic vector is generated by the decoder of the deep learning model based on the at least one target style.
According to a further exemplary embodiment, the method further comprises word segmentation of the natural sentence of the source document, and wherein the encoder of the deep learning model comprises a plurality of cascaded first unit modules, wherein each word in the segmented natural sentence is sequentially input to the plurality of cascaded first unit modules, respectively.
According to another exemplary embodiment, the method further includes generating, by each of the plurality of cascaded first unit modules, an output of the present stage based on the output of the preceding-stage first unit module and the word of the segmented natural sentence input to the present stage, wherein the first-stage first unit module uses the source style as the output of the preceding stage, and the last-stage first unit module outputs the semantic vector corresponding to the natural sentence of the source document.
According to a further exemplary embodiment, the decoder of the deep learning model comprises a plurality of cascaded second unit modules, the method further comprising generating, by the plurality of cascaded second unit modules, respective target words corresponding to the semantic vector based on the at least one target style; and combining the target words generated by the plurality of cascaded second unit modules to form a target natural sentence.
According to an exemplary embodiment, the method further includes filling the input of the redundant first unit modules with a blank when the number of words obtained after the word segmentation of the natural sentence of the source document is smaller than the number of the plurality of cascaded first unit modules.
According to an exemplary embodiment, the method further includes, when the number of words obtained after the word segmentation of the natural sentence of the source document is greater than the number of the plurality of cascaded first unit modules, segmenting the natural sentence.
According to an exemplary embodiment, the source style is received from outside or extracted directly from the source document.
According to an exemplary embodiment, the method further comprises training the deep learning model, wherein training the deep learning model comprises setting up a feature library comprising two or more features related to intelligent manuscript style rewriting; generating a document material library comprising pairs of articles associated with at least two features in the feature library; and training the deep learning model based on the document material library.
According to further exemplary embodiments, generating the library of document materials includes one or more of the following, or any combination thereof: for a particular feature in the feature library: crawling all articles with the particular feature from a characteristic website; retrieving articles with high relevance from a search engine based on the particular feature; and learning a tagging model using machine learning to find articles related to the particular feature in text crawled from the web.
Other aspects of the disclosure also relate to corresponding apparatuses and computer-readable media.
Drawings
FIG. 1 shows a diagram of an example Recurrent Neural Network (RNN).
FIG. 2 shows a diagram of an example Long Short Term Memory (LSTM) network.
FIG. 3 illustrates a diagram of a word segmentation system in accordance with an exemplary aspect of the present disclosure.
FIG. 4 illustrates a block diagram of a deep learning model in accordance with an exemplary aspect of the present disclosure.
FIG. 5 illustrates a flow chart of an offline training method of a deep learning model in accordance with an exemplary aspect of the present disclosure.
Fig. 6 illustrates a block diagram of a method for intelligent document rewrite using a trained deep learning model in accordance with an aspect of the disclosure.
FIG. 7 illustrates a diagram of a scenario in which intelligent document rewrite is performed using a trained deep learning model, according to an exemplary aspect of the present disclosure.
Fig. 8 illustrates a block diagram of an intelligent document rewriting apparatus based on deep learning in accordance with an exemplary aspect of the present disclosure.
Fig. 9 illustrates a block diagram of an apparatus for intelligent document rewrite using a trained deep learning model in accordance with an aspect of the disclosure.
Detailed Description
Conventional neural networks generally include an input layer, one or more hidden layers, and an output layer, where adjacent layers may be fully connected but the nodes within each layer are connectionless. Such common neural networks are ineffective for many problems. For example, in speech recognition or natural language processing, predicting the next word of a sentence generally requires taking the preceding words into account, because the words of a sentence are not independent of one another.
To handle such cases, the Recurrent Neural Network (RNN) was developed. To process sequence data, an RNN relates the current output of a sequence to the previous outputs, so that the network memorizes previous information and applies it to the calculation of the current output. In other words, the nodes between hidden layers are no longer connectionless but connected, and the input of a hidden layer includes not only the output of the input layer/previous hidden layer but also the output of the same hidden layer at the previous moment.
Recurrent Neural Networks (RNNs) are most commonly used for mining time-series data. RNNs are a class of neural networks with memory capability and are therefore often used to mine data with temporal dependencies. In general, a neuron in a recurrent neural network can receive not only information from other neurons but also its own information, forming a network structure with loops.
Fig. 1 shows a diagram of an example simple Recurrent Neural Network (RNN) 100. As can be seen, the left part represents a simplified RNN structure, with the lowest circle representing the input layer, the middle circle representing the hidden layer, and the uppermost circle representing the output layer. X in the input layer represents the input layer value vector. S in the hidden layer represents a hidden layer value vector. O in the output layer represents the output layer value vector. U, V and W represent weight matrices connecting the respective layers, respectively.
The functional relationship between the hidden layer value vector S and the input layer value vector X is shown in the following formula (1):

S_t = f(U·X_t + W·S_{t-1} + b)   (1)

As can be seen, the hidden layer value vector S_t at time t depends not only on the current input layer value vector X_t and the weight matrix U connecting the input layer and the hidden layer, but also on the hidden layer value vector S_{t-1} of the previous time t-1 and the corresponding weight matrix W (and possibly a bias term b).

On the other hand, the functional relationship between the output layer value vector O_t at time t and the hidden layer value vector S_t is, for example, shown in formula (2):

O_t = g(V·S_t)   (2)

As can be seen, the output layer value vector O_t at time t depends on the current hidden layer value vector S_t and the weight matrix V connecting the hidden layer and the output layer.
By unrolling the RNN 100 in time, the structure in the right part of FIG. 1 is obtained. As can be seen, compared with a fully connected structure such as a multi-layer perceptron (MLP), the RNN adds a delayed feedback from the hidden layer back to itself, which gives the RNN a "memory" function and thus makes it particularly suitable for processing time-series data.
In theory, RNNs can process sequence data of any length; that is, the length of the input sequence need not be fixed. In practice, however, to reduce complexity, it is often assumed that the current state is related only to a limited number of preceding states. RNNs and their many variants (e.g., bidirectional RNN, LSTM, GRU, etc.) have enjoyed great success and widespread use in numerous natural language processing tasks.
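The recurrence described above — formula (1) for the hidden state and formula (2) for the output — can be sketched in a few lines of Python. This is a minimal illustration only: scalar weights stand in for the matrices U, W, and V, tanh plays the role of f, and g is taken as the identity.

```python
import math

def rnn_step(x_t, s_prev, U, W, V, b):
    # Formula (1): s_t = f(U*x_t + W*s_prev + b), with f = tanh
    s_t = math.tanh(U * x_t + W * s_prev + b)
    # Formula (2): o_t = g(V*s_t), with g = identity for simplicity
    o_t = V * s_t
    return s_t, o_t

def rnn_forward(xs, U=0.5, W=0.3, V=1.0, b=0.0):
    """Thread the hidden state through the whole sequence: each step's
    hidden state depends on the current input and the previous state."""
    s, outputs = 0.0, []
    for x in xs:
        s, o = rnn_step(x, s, U, W, V, b)
        outputs.append(o)
    return outputs
```

Because the hidden state is threaded from step to step, an early input changes every later output — the "memory" property discussed above.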
RNNs can be categorized into a variety of structures depending on the numbers of input and output sequences, including one-to-one, one-to-many, many-to-one, and many-to-many structures, among others. Many-to-many structures further include those whose input and output sequences have equal length and those whose lengths differ. The many-to-many structure with unequal input and output lengths is called the seq2seq (sequence-to-sequence) model. A common seq2seq model may comprise an encoder-decoder structure, i.e., two RNNs, one acting as the encoder and the other as the decoder. According to one implementation, one word or character of the sequence may be input into the encoder at each instant. The encoder is responsible for compressing the input sequence into a vector of a specified length (i.e., an embedding), which can be regarded as the semantics of the sequence; this process is called encoding. The decoder is then responsible for generating the specified sequence from the semantic vector; this process is called decoding.
One way for the encoder to obtain the semantic vector is to take the hidden state of the last input directly as the semantic vector C. Another way is to transform the last hidden state to obtain the semantic vector C. Yet another way is to transform all hidden states of the input sequence to obtain the semantic vector C. The decoder may take the semantic vector obtained by the encoder as the initial state of the RNN acting as the decoder to obtain the output sequence. According to one exemplary implementation, the semantic vector C may participate in the computation only as the initial state. According to another exemplary implementation, the semantic vector C may participate in the computation at every instant of the sequence in the decoder.
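The three ways of obtaining the semantic vector C just listed can be sketched as follows, assuming hidden states are plain lists of floats; the doubling transformation and the mean pooling are arbitrary placeholders chosen for illustration, not prescribed by the disclosure.

```python
def c_from_last(hidden_states):
    # Strategy 1: take the last hidden state directly as C
    return hidden_states[-1]

def c_from_transformed_last(hidden_states, transform=lambda h: [2.0 * v for v in h]):
    # Strategy 2: apply some transformation to the last hidden state
    return transform(hidden_states[-1])

def c_from_all(hidden_states):
    # Strategy 3: combine all hidden states (mean pooling as one choice)
    n = len(hidden_states)
    return [sum(col) / n for col in zip(*hidden_states)]
```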
LSTM (Long Short-Term Memory) networks are a special class of RNNs designed to address the above-mentioned problems of RNNs. In general, an RNN may include a chain structure of repeating neural network unit modules. In a standard RNN, this repeated unit module typically has only a simple structure, such as a single tanh layer. An LSTM is also a chain structure, but its repeated unit modules have a more complex structure. In particular, the unit module of a typical LSTM may include an input gate, a forget gate, an output gate, and a cell, which interact in a specific manner: the forget gate may be used to discard information that is no longer needed, the input gate and output gate may be used for the input and output of parameters, and the cell is used to store the state. LSTM also has various modifications. The attention mechanism introduced in recent years can greatly improve the efficiency of LSTM, bringing it an even broader prospect.
Fig. 2 shows a diagram of an example LSTM network 200. LSTM network 200 may include a plurality of cells in cascade. Each cell may obtain the output of the present stage by performing embedding based on the output of the previous stage and the input of the present stage. Specifically, first, the current input may be concatenated with the previous h, with the result denoted x. Then, x may undergo matrix multiplication with weight matrices W(f), W(i), W(j), and W(o), respectively, to obtain result matrices f, i, j, and o, where W(f), W(i), W(j), and W(o) are the core weight parameters of the LSTM cell, and training is performed on these four weight matrices. Next, a sigmoid operation is applied to matrix f, i.e., sigmoid(f); a sigmoid operation is applied to matrix i, i.e., sigmoid(i); a tanh operation is applied to matrix j, i.e., tanh(j); and a sigmoid operation is applied to matrix o, i.e., sigmoid(o). A new c is then calculated as new_c = old_c × sigmoid(f) + sigmoid(i) × tanh(j). Finally, a new h is calculated as new_h = tanh(new_c) × sigmoid(o). After these operations, the cell computation is complete, yielding the new c and new h. The new h is taken as the output at this time, and the tuple (c, h) is stored as the cell state at this time for the next cell's computation.
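The cell computation just described can be sketched with scalar weights standing in for the four weight matrices W(f), W(i), W(j), and W(o); concatenation of the input with the previous h degenerates here to a simple sum, so this is a shape-only sketch of the real matrix version, not any library's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x, h_prev, c_prev, w_f=1.0, w_i=1.0, w_j=1.0, w_o=1.0):
    z = x + h_prev          # "concatenate" current input with previous h
    f = w_f * z             # forget-gate pre-activation
    i = w_i * z             # input-gate pre-activation
    j = w_j * z             # candidate-content pre-activation
    o = w_o * z             # output-gate pre-activation
    # new c = old c * sigmoid(f) + sigmoid(i) * tanh(j)
    c = c_prev * sigmoid(f) + sigmoid(i) * math.tanh(j)
    # new h = tanh(new c) * sigmoid(o)
    h = math.tanh(c) * sigmoid(o)
    return h, c             # the pair (c, h) is carried to the next cell
```

The returned pair (c, h) is exactly the state tuple the text says is stored for the next cell's computation.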
One of the fundamental steps in natural language processing is word segmentation. In written text of a word-based language such as English, spaces between words serve as natural delimiters. In a language such as Chinese, only sentences and paragraphs are delimited by obvious delimiters, and there are no formal explicit delimiters between words. Therefore, natural language processing of such text generally requires word segmentation first. Existing word segmentation algorithms can be divided into three main categories: methods based on character-string matching, methods based on understanding, and methods based on statistics. Depending on whether they are combined with part-of-speech tagging, they can also be divided into pure word segmentation methods and integrated methods that combine segmentation and tagging.
Fig. 3 illustrates a word segmentation system 300 according to an exemplary aspect of the present disclosure. As can be seen, the word segmentation system 300 includes a word segmenter 301. When a sentence is input to the word segmenter 301, the word segmenter 301 outputs the segmented result. The word segmenter 301 may be implemented with any existing or future word segmentation algorithm.
For example, in a Chinese word segmentation scenario, one may use word segmentation methods based on character-string matching, such as the forward maximum matching method, the reverse maximum matching method, and the least-segmentation method; word segmentation methods based on understanding; word segmentation methods based on statistics; rule-based word segmentation methods such as the minimum matching algorithm, the maximum matching algorithm, the word-by-word matching algorithm, neural network word segmentation algorithms, the association-backtracking method, and the N-shortest-path word segmentation algorithm; and algorithms based on word-frequency statistics, expectation-based word segmentation, finite multi-level enumeration, and the like.
As can be seen, although one possible word segmentation result is illustrated, different word segmentation algorithms may yield different segmentation results for the same sentence.
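As a concrete illustration of one of the string-matching approaches listed above, the forward maximum matching method can be sketched as follows; the toy dictionary in the usage example is made up for illustration and is not part of the disclosure.

```python
def forward_max_match(sentence, vocab, max_word_len=4):
    """Greedy forward maximum matching: at each position take the longest
    dictionary word that matches; fall back to a single character."""
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(max_word_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in vocab:
                words.append(candidate)
                i += length
                break
    return words
```

With vocab = {"深度", "学习", "深度学习", "模型"}, the sentence "深度学习模型" segments to ["深度学习", "模型"] — the longer dictionary entry wins over its two-character pieces, which is exactly the "maximum matching" behavior.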
According to an exemplary embodiment, the source document may be input into the segmenter sentence by sentence, and the segmenter outputs the segmented source sentences (or other units) sentence by sentence or outputs the segmented source document as a whole. According to another exemplary embodiment, the source document may be input into the word segmenter as a whole, and the word segmenter may output the segmented source document (or other units) sentence by sentence or as a whole.
FIG. 4 illustrates a block diagram of a deep learning model 400 in accordance with an exemplary aspect of the present disclosure. Deep learning model 400 may include, for example, a seq2seq model. According to an exemplary but non-limiting embodiment, the seq2seq model may comprise an encoder portion 410 and a decoder portion 420, wherein the encoder portion 410 may comprise unit modules h_1, h_2, …, h_n. The unit modules h_1, h_2, …, h_n may be implemented, for example, using the unit modules discussed above in connection with RNNs (e.g., LSTM), expanded along the time line to form the chain structure discussed above. h_0 may be a feature (or source style) of the source document. A feature may refer to the classification, identification, genre, applicable reader group, etc. of a passage; examples may include, for example, the plain style, the palace style, northeast dialect, young people's expressions, and so on. This will be described further below.
x_1, x_2, …, x_n are the inputs to the unit modules h_1, h_2, …, h_n. x_1, x_2, …, x_n may include words or word sequences that may be obtained by word segmentation of natural sentences in the source document. For example, the word segmentation of natural sentences may be implemented using the word segmentation system described above in connection with FIG. 3.
According to an exemplary but non-limiting embodiment, n may be a fixed window length. When the length of the segmented natural sentence is smaller than n (e.g., only x_1, x_2, …, x_i, where i < n), the remaining inputs (e.g., x_{i+1}, …, x_n) may be filled with blanks. On the other hand, when the length of the segmented natural sentence is greater than n, the sentence may be segmented. The segmentation criteria may include, for example, commas, modal particles, and/or other overt segmentation words, or any combination thereof.
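The fixed-window handling described above — padding short sentences with blanks and splitting long ones — can be sketched as follows. For simplicity this sketch splits at fixed positions rather than at commas or modal particles as the text suggests.

```python
def fit_to_window(tokens, n, blank=""):
    """Pad a segmented sentence shorter than the window n with blanks,
    and split one longer than n into several windows of length n."""
    chunks = [tokens[i:i + n] for i in range(0, len(tokens), n)] or [[]]
    return [chunk + [blank] * (n - len(chunk)) for chunk in chunks]
```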
According to an exemplary but non-limiting embodiment, the unit modules h_1, h_2, …, h_n may each perform embedding based on the word/word sequence x_1, x_2, …, x_n input at the present stage and the output of the preceding-stage unit module to obtain the present-stage output vectors h_1, h_2, …, h_n, where h_1, h_2, …, h_{n-1} are respectively passed to the next unit module in the chain structure as its input.
The semantic vector C may be obtained based on the output (i.e., hidden state) h_n of the last unit module h_n of the encoder. For example, according to an exemplary but non-limiting embodiment, h_n may be taken directly as the semantic vector C. According to another exemplary but non-limiting embodiment, a transformation may be performed on h_n to obtain the semantic vector C. According to yet another exemplary but non-limiting embodiment, all hidden states h_1, h_2, …, h_n of the input sequence x_1, x_2, …, x_n may be aggregated (and optionally transformed) into the semantic vector C.
The decoder portion 420 may include unit modules h_1', h_2', …, h_m', where m may be greater than, equal to, or less than n. The unit modules h_1', h_2', …, h_m' may be implemented, for example, using the unit modules discussed above in connection with RNNs (e.g., LSTM) and form the chain structure discussed above.
According to an exemplary but non-limiting embodiment, h_0' is input to the unit module h_1'; h_0' may be a feature of the target document (or target style), which may be different from the source document feature. Examples of target document features may include, for example, the plain style, the palace style, northeast dialect, young people's expressions, and so on.
According to an exemplary but non-limiting embodiment, the unit modules h_1', h_2', …, h_{m-1}' may obtain the present-stage output vectors h_1', h_2', …, h_{m-1}' by performing embedding based on the semantic vector C and the output of the preceding-stage unit module, and pass them to the next unit modules h_2', …, h_m'. In the example of FIG. 4, the semantic vector C is input to each unit module h_1', h_2', …, h_{m-1}' of the decoder 420. In alternative embodiments, the semantic vector C may instead be input only into the first unit module of the decoder 420, for example.
According to an exemplary but non-limiting embodiment, the unit modules h_1', h_2', …, h_m' output y_1, y_2, …, y_m, respectively, by performing embedding based on the semantic vector and the outputs of the preceding stages. y_1, y_2, …, y_m may likewise include words or word sequences. The sequence obtained by combining y_1, y_2, …, y_m may be the target natural sentence output sequence obtained by rewriting the source-style natural sentence input sequence x_1, x_2, …, x_n according to the target document features.
Similarly, according to an exemplary but non-limiting embodiment, when the output sequence length is less than m (e.g., only y_1, y_2, …, y_j, where j < m), the remaining outputs (e.g., y_{j+1}, …, y_m) may be filled with blanks. On the other hand, sentences sliced at the encoder may be reassembled after being output at the decoder. The decoded sentences are combined in sequence to form the target document.
The various unit modules in encoder 410 and decoder 420 may be implemented with various RNN structures, including, for example, but not limited to, the network structures described above in connection with fig. 1 and 2, and the like.
Although the embodiments are described in connection with outputting a target document corresponding to one target style, as can be appreciated, the present disclosure may encompass implementations in which a plurality of target documents corresponding to a plurality of target styles are output simultaneously or sequentially, respectively.
By converting source-style content (e.g., sentences, paragraphs, articles) into corresponding target-style content via the encoder and decoder as described with reference to the exemplary embodiment of FIG. 4, a deep-learning-assisted rewriting flow can be implemented, greatly improving the efficiency of manuscript rewriting work.
Training of the deep learning model may include, for example, offline training and online training. FIG. 5 illustrates a flow chart of an offline training method 500 of a deep learning model (e.g., model 400 of FIG. 4) according to an exemplary aspect of the present disclosure.
The method 500 may include setting up a feature library at block 502. As previously described, a feature may refer to the classification, identification, genre, applicable reader group, etc. of a passage; examples may include, for example, the plain style, the palace style, northeast dialect, young people's expressions, and so on. Features may be represented by phrases, identifiers, or the like. For example, according to one example, the natural sentence "good" may be characterized as plain style, while the corresponding palace-style natural sentence is "this is thought to be excellent". Setting up the feature library may include, for example, setting up different features according to the different preferences of target populations. The number of different article types that the model can rewrite depends on the number of different features.
At block 504, the method 500 generates a document material library. A document material library is associated with a particular feature: a document material library of a particular feature may refer to a collection of documents having that feature. Generating a document material library associated with a particular feature may include, for example, crawling all articles with the particular feature from a characteristic website (e.g., Weibo, Toutiao, Zhihu, etc.). It may also include retrieving with the feature on a search engine to obtain the top-ranked articles by relevance. It may further include, after obtaining some article data in the various manners described above or any combination thereof, using a machine learning method to learn a tagging model and then searching for more articles related to the feature in text crawled from the web. According to an exemplary but non-limiting embodiment, learning the tagging model may include constructing a text classifier and training it with the articles captured and/or obtained as described above as a training set. After training, the trained text classifier can be used to find more articles related to the corresponding feature in text crawled from the Internet. After generating two or more document material libraries associated with at least two of the features in the feature library, the method 500 may proceed to block 506.
At block 506, the method 500 trains a deep learning model, such as the seq2seq model. According to an exemplary but non-limiting embodiment, training the deep learning model may include generating a training dataset by finding pairs of articles in the document material libraries that have the same or similar subject matter but different features (e.g., classification, identification, genre, applicable reader group, etc.). Having the same or similar subject matter may include one or more of the following: containing the same keywords, and/or each corresponding sentence of the articles having sufficiently high similarity. The similarity may be calculated using various algorithms, including but not limited to, for example, word-vector-based similarity (e.g., cosine similarity, Manhattan distance, Euclidean distance, Minkowski distance, etc.), character-based similarity (e.g., edit distance, simhash, number of common characters, etc.), probability-and-statistics-based similarity (e.g., Jaccard similarity coefficient, etc.), and word-embedding-model-based similarity (e.g., word2vec/doc2vec, etc.).
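Of the similarity options listed above, cosine similarity is among the simplest. A minimal sketch over already-segmented sentences follows; it uses raw term counts rather than trained word vectors, so it only approximates the word-vector variant named in the text.

```python
import math
from collections import Counter

def cosine_similarity(words_a, words_b):
    """Cosine similarity between two segmented sentences represented
    as bags of words (term-count vectors)."""
    va, vb = Counter(words_a), Counter(words_b)
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Sentences sharing all their words score 1.0 and sentences sharing none score 0.0, giving a usable "sufficiently high similarity" threshold test for pairing same-topic articles.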
According to another exemplary but non-limiting embodiment, the training data set may also be obtained from other sources. For example, the training data set may be obtained directly from an external source. After the training data set is generated or obtained, a deep learning model, such as the deep learning model 400 described with reference to FIG. 4, may be trained using one article of each pair in the training data set as the source document and the other article as the target document. The above-described exemplary embodiment trains at article granularity, but the present disclosure is not limited thereto. According to another exemplary but non-limiting embodiment, training may also be performed at sentence or other granularity.
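The pairing step that builds the training dataset can be sketched as follows. This is a minimal illustration under assumptions: `similarity` is any of the metrics named above, and the `0.5` threshold and function names are illustrative, not from the patent:

```python
def make_training_pairs(library_a, library_b, similarity, threshold=0.5):
    """Pair articles that share subject matter but come from material
    libraries with different features; each pair becomes one
    (source document, target document) training sample."""
    pairs = []
    for article_a in library_a:
        for article_b in library_b:
            if similarity(article_a, article_b) >= threshold:
                pairs.append((article_a, article_b))
    return pairs
```

In practice the quadratic scan would be replaced by keyword indexing or approximate nearest-neighbor search, but the pairing logic is the same.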
Fig. 6 illustrates a block diagram of a method 600 of intelligent document rewriting using a trained deep learning model in accordance with an aspect of the disclosure. Method 600 includes, at block 602, inputting a source document, a source style, and a target style. The source style may represent a characteristic of the source document, including, for example, the category or style to which it belongs. The target style may represent a characteristic of the document to be rewritten, including, for example, the category or style to which it belongs. According to an exemplary but non-limiting embodiment, the source style may also be determined directly from the source document without additional input.
At block 604, the method 600 includes outputting the target document with a trained deep learning model. For example, for the deep learning model described with reference to FIG. 4, a source document may be segmented by natural sentence into x1, x2, …, xn. The segmentation may be implemented using, for example, the word segmentation system 300 described in connection with FIG. 3. The segmented words x1, x2, …, xn may be respectively input to the unit modules h1, h2, …, hn of the encoder 410. According to an exemplary but non-limiting embodiment, when the length of a segmented natural sentence is less than n (e.g., only x1, x2, …, xi, where i &lt; n), the remaining inputs (e.g., xi+1, …, xn) may be filled with blanks. On the other hand, when the length of the segmented natural sentence is greater than n, the sentence can be split. The splitting criteria may include, for example, commas, mood words, and/or other overt segmentation words, etc., or any combination thereof. The source style is input as h0. For example, sentence 1 in the source document is segmented into x1, x2, …, xn. The source style h0 and the first word x1 are input to the first unit module h1. The first unit module embeds the first word x1 and outputs vector h1. Vector h1 and the second word x2 are input together to the second unit module h2. The second unit module h2 embeds the second word x2 and outputs vector h2. Vector h2 and the third word x3 are input together to the third unit module h3, and so on. Vector hn-1 and the nth word xn are input together to the nth unit module hn. The nth unit module embeds the nth word xn and outputs vector hn. By cascading the unit modules h1, h2, …, hn, the contextual relation among the words x1, x2, …, xn of the sentence is created.
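The pad-or-split preprocessing above can be sketched in a few lines. This is a simplified illustration: the patent splits over-long sentences at commas or mood words, whereas the sketch below uses fixed-length chunks; the `<blank>` token and function name are assumptions:

```python
PAD = "<blank>"

def fit_sentence(tokens, n):
    """Prepare a segmented sentence for an encoder with n unit modules:
    pad short sentences with blanks; split long ones into chunks of at
    most n tokens, each padded to exactly n."""
    if len(tokens) <= n:
        return [tokens + [PAD] * (n - len(tokens))]
    chunks = []
    for i in range(0, len(tokens), n):
        chunk = tokens[i:i + n]
        chunks.append(chunk + [PAD] * (n - len(chunk)))
    return chunks
```

Each returned chunk is one fixed-length input sequence x1, x2, …, xn for the encoder.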
As can be seen, although the notation hn is used both for the unit modules and for the vectors they output, this is merely for convenience in identifying their correlation; their respective meanings are clear from context and should not be confused.
For the deep learning model described with reference to, for example, FIG. 4, the semantic vector C may be obtained based on the output (i.e., hidden state) hn of the last unit module hn of the encoder. For example, according to an exemplary but non-limiting embodiment, hn can be taken directly as the semantic vector C. According to another exemplary but non-limiting embodiment, hn may be transformed to obtain the semantic vector C. According to yet another exemplary but non-limiting embodiment, all hidden states h1, h2, …, hn of the input sequence x1, x2, …, xn may be aggregated (and optionally transformed) into the semantic vector C. The semantic vector C may be input to the respective unit modules h1', h2', …, hm' of the decoder portion 420, wherein m may be greater than, equal to, or less than n. h0' is input to the unit module h1'; h0' may be a feature of the target document (i.e., the target style), which may differ from the source document feature. Unit modules h1', h2', …, hm-1' embed the semantic vector C and respectively output vectors h1', h2', …, hm-1' to the next unit modules h2', …, hm'. Unit modules h1', h2', …, hm' also respectively output y1, y2, …, ym. y1, y2, …, ym may comprise a sequence of words or characters. y1, y2, …, ym can be a sentence output sequence obtained by rewriting the source-style natural-sentence input sequence x1, x2, …, xn according to the target document feature. According to an exemplary but non-limiting embodiment, similarly, when the output sequence length is less than m (e.g., only y1, y2, …, yj, where j &lt; m), the remaining outputs (e.g., yj+1, …, ym) may be filled with blanks. On the other hand, sentences split at the encoder may be reassembled after being output at the decoder.
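The encoder-decoder flow above can be illustrated with a deliberately tiny scalar sketch. This is not the patent's model: real unit modules are RNN/LSTM cells over vectors, and the weights, tanh activation, and function names below are all illustrative assumptions. It shows only the data flow: the source style enters as h0, the last hidden state hn serves directly as C, and the target style enters the decoder as h0':

```python
import math

def encode(xs, source_style, w_in=0.5, w_rec=0.3):
    """Toy scalar encoder: h0 carries the source style; each cascaded
    unit combines its input token with the previous hidden state; the
    last hidden state h_n is taken directly as the semantic vector C."""
    h = source_style
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h  # semantic vector C

def decode(c, target_style, m, w_c=0.5, w_rec=0.3):
    """Toy scalar decoder: h0' carries the target style; each of the m
    cascaded units embeds the semantic vector C together with the
    previous hidden state and emits one output y_t."""
    h, ys = target_style, []
    for _ in range(m):
        h = math.tanh(w_c * c + w_rec * h)
        ys.append(h)
    return ys
```

Changing only the `target_style` argument changes every decoder output, which is the mechanism by which the same semantic vector C yields differently styled rewrites.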
At optional block 606, method 600 may include optionally performing a manually-assisted rewrite adjustment. If the article automatically rewritten by the deep learning model does not reach the usable standard, it may need to be manually edited and reviewed. Even so, compared with purely manual rewriting, automatic rewriting by the deep learning model can greatly reduce the degree of manual intervention and the workload. For example, according to an exemplary but non-limiting embodiment, an editing interface may be provided to facilitate the addition, deletion, and/or modification of sentences in an editing box by a user. According to another exemplary but non-limiting embodiment, the addition, deletion, and/or replacement of paragraphs and/or sentences, etc. may also be supported. For example, at the time of addition, sentences may be manually entered by the user or automatically retrieved from the library of manuscript materials. As another example, at the time of replacement, sentences may be manually input by the user, or sentences automatically generated by the deep learning model may be provided for selection.
At block 608, the method 600 may include outputting the result. Outputting the result may include, for example, outputting the rewritten sentence/article. After the rewriting is completed, corresponding saving, previewing, reviewing, and/or publishing operations may follow.
Optional block 606 may also be omitted or repositioned. For example, the results may be output at block 608, and manually-assisted rewrite adjustment may be performed later as appropriate or necessary.
FIG. 7 illustrates a diagram of a scenario 700 for intelligent document rewrite using a trained deep learning model in accordance with an exemplary aspect of the present disclosure.
As can be seen, a source document 701 is input into a word segmentation module 702. The word segmentation module 702 may, for example, include the word segmentation system 300 described above in connection with fig. 3, and the like. The word segmentation module 702 may output the segmented document 703 based on the employed word segmentation algorithm.
The segmented manuscript (e.g., segmented natural sentences or other elements) 703 is input into a trained deep learning model 704. The segmented manuscript 703 may be input into the deep learning model 704 sentence by sentence, or in other units. For example, the segmented document 703 may be input to the deep learning model 704 as a whole, or may be streamed to the deep learning model 704 as it is generated.
A source style corresponding to the source document 701 and a target style corresponding to the target document may be input into the deep learning model 704. According to alternative embodiments, the source style may also be automatically identified by the system from the source document. In this case, the system may also include, for example, a feature recognition module (not shown). For example, the source style may be a plain style and the target style may be a youth style.
The deep learning model 704 may include, for example, the deep learning model 400 described above in connection with FIG. 4, or variants thereof, and the like. Based on the source style and the target style, the deep learning model 704 can rewrite the segmented source document 703 accordingly, for example, rewriting it from a plain style into a youth style.
The deep learning model 704 outputs a target document 705 rewritten based on the target style. According to an exemplary embodiment, target document 705 may be output sentence-by-sentence, such as outputting a target sentence every time a source sentence is input, or outputting a target sentence-by-sentence for an entirely input source document. According to another exemplary embodiment, the target document 705 may be output in its entirety. For example, for a source document that is input as a whole, all target sentences may be combined and output after they are obtained. For another example, for a source document input sentence by sentence, all target sentences may be combined and output after obtaining them. Combining the target sentences may include sequentially combining the target sentences to obtain the target document. The target document 705 may be output directly to the outside or may be output after further manual-assisted rewrite adjustment after being provided by the deep learning model 704.
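The sentence-by-sentence variant of scenario 700 reduces to a short pipeline. The sketch below is illustrative only: `rewrite_sentence` stands in for the trained deep learning model 704, and the function names and space-joined merge are assumptions rather than the patent's specification:

```python
def rewrite_document(source_sentences, rewrite_sentence, target_style):
    """Sentence-by-sentence rewriting pipeline: rewrite each source
    sentence under the target style, then merge the target sentences in
    their original order to form the target document."""
    targets = [rewrite_sentence(s, target_style) for s in source_sentences]
    return " ".join(targets)
```

The whole-document variant differs only in when the merge happens: outputs are accumulated and combined once all target sentences have been obtained.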
FIG. 8 illustrates a block diagram of an offline training device 800 of a deep learning model in accordance with an exemplary aspect of the present disclosure. The deep learning model may include, for example, the deep learning model 400 described in connection with fig. 4, and the like.
According to an exemplary, but non-limiting embodiment, the offline training device 800 of the deep learning model may include a module 802 for setting up a feature library. The module 802 for setting up the feature library may, for example, implement the functionality described above in connection with block 502 of fig. 5, etc.
According to an exemplary, but non-limiting embodiment, the offline training device 800 of the deep learning model may further include a module 804 for generating a library of document materials. The module for generating a library of document materials 804 may, for example, implement the functions described above in connection with block 504 of fig. 5, etc.
According to an exemplary, but non-limiting embodiment, the offline training apparatus 800 of the deep learning model may further comprise a module 806 for training the deep learning model. The module for training the deep learning model 806 may, for example, implement the functionality described above in connection with block 506 of fig. 5, etc.
Fig. 9 illustrates a block diagram of an apparatus 900 for intelligent document rewrite using a trained deep learning model in accordance with an aspect of the disclosure.
According to an exemplary, but non-limiting embodiment, intelligent document rewrite apparatus 900 may include a module 902 for inputting a document, a source style, and a target style. The module 902 for entering the document, source style, and target style may, for example, implement the functionality described above in connection with block 602 of fig. 6, and the like. Although a direct input source style is described in the present embodiment, a source style may be extracted from an input source document according to an alternative embodiment.
According to an exemplary, but non-limiting embodiment, intelligent document rewriting apparatus 900 may further include a module 904 for outputting the target document with the trained deep learning model. The module 904 for outputting the target document with the trained deep learning model may, for example, implement the functionality described above in connection with block 604 of fig. 6, etc.
According to an exemplary, but non-limiting embodiment, intelligent document rewriting apparatus 900 may also optionally include a module 906 for making manually-assisted rewrite adjustments. The module for manually assisted overwrite adjustment 906 may, for example, implement the functionality described above in connection with block 606 of fig. 6, etc.
According to an exemplary, but non-limiting embodiment, intelligent document rewriting apparatus 900 may further include a module 908 for outputting a result. The module 908 for outputting the results may, for example, implement the functionality described above in connection with block 608 of fig. 6, etc. The output result may be, for example, a target document or a part thereof rewritten by the deep learning model, or may be a target document or a part thereof adjusted by manual assist rewriting.
Although the present application describes the deep-learning-based intelligent manuscript rewriting method and apparatus by taking the seq2seq model as an example, the present application is not limited thereto, but may be applied to any deep learning model in existing and future technologies.
Those of ordinary skill in the art will appreciate that the benefits of the present disclosure are not all achieved by any single embodiment. Various combinations, modifications, and substitutions will be apparent to those of ordinary skill in the art based on the present disclosure.
Furthermore, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless otherwise indicated or clear from the context, the phrase "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, the phrase "X employs A or B" is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. The terms "connected" and "coupled" may mean the same thing, namely that two devices are electrically connected. In addition, the articles "a" and "an" as used in this disclosure and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
The various aspects or features will be presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc. and/or may not include all of the devices, components, modules etc. discussed in connection with the figures. Combinations of these approaches may also be used.
The various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Further, at least one processor may include one or more modules operable to perform one or more of the steps and/or actions described above. For example, the embodiments described above in connection with the various methods may be implemented by a processor and a memory coupled to the processor, where the processor may be configured to perform any step of any of the methods described above, or any combination thereof.
Furthermore, the steps and/or actions of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, the embodiments described above in connection with the various methods may be implemented by a computer-readable medium storing computer program code which, when executed by a processor/computer, performs any step of any of the methods described above, or any combination thereof.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Furthermore, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.

Claims (16)

1. A method for performing intelligent manuscript style rewriting based on a deep learning model, characterized by comprising:
receiving a source document associated with a source style, and at least one target style;
for each of the one or more natural sentences of the source document:
Generating a semantic vector corresponding to the natural sentence of the source manuscript based on the source style by a deep learning model; and
generating, by the deep learning model, a target natural sentence corresponding to the semantic vector based on the at least one target style; and
sequentially merging one or more target natural sentences corresponding to the one or more natural sentences of the source document to generate at least one target document associated with the at least one target style;
the deep learning model is a seq2seq model and comprises an encoder and a decoder;
further comprises:
performing word segmentation on a natural sentence of the source manuscript, and wherein
The encoder of the deep learning model comprises a plurality of cascaded first unit modules, wherein each word in the segmented natural sentence is sequentially and respectively input to the plurality of cascaded first unit modules;
generating, by the plurality of cascaded first unit modules, an output of the present level based on the output of a first unit of the previous level and words input to the present level in the segmented natural sentences, wherein the first unit of the first level uses the source style as the output of the previous level, and the first unit of the last level outputs semantic vectors corresponding to the natural sentences of the source manuscript, the source style characterizing classification, identification, genre, or applicable crowd of the source manuscript;
The decoder of the deep learning model comprises a plurality of cascaded second unit modules, wherein a first-stage second unit takes the target style as the output of a previous stage;
the method further comprises:
generating, by the plurality of cascaded second unit modules, target words corresponding to the semantic vectors based on the at least one target style, respectively; and
and combining the various generated target words of the plurality of cascaded second unit modules to form a target natural sentence.
2. The method of claim 1, wherein
Semantic vectors corresponding to natural sentences of the source manuscript are generated by an encoder of the deep learning model based on the source style, and
a target natural sentence corresponding to the semantic vector is generated by a decoder of the deep learning model based on the at least one target style.
3. The method of claim 1, further comprising filling the inputs of the redundant first unit modules with blanks when the number of words resulting from the word segmentation of the natural sentence of the source document is less than the number of the plurality of cascaded first unit modules.
4. The method of claim 1, further comprising splitting the natural sentence of the source document when the number of words obtained after the word segmentation of the natural sentence is greater than the number of the plurality of cascaded first unit modules.
5. The method of claim 1, wherein the source style is received externally or extracted directly from the source document.
6. The method of claim 1, further comprising training the deep learning model, wherein training the deep learning model comprises:
setting a feature library, wherein the feature library comprises two or more features related to intelligent manuscript style rewriting;
generating a document material library, the document material library comprising pairs of articles associated with at least two features in the feature library; and
the deep learning model is trained based on the library of manuscript materials.
7. The method of claim 6, wherein generating a library of document materials comprises one or more of:
for a particular feature in the feature library:
(i) Capturing all articles with the specific features from a featured website;
(ii) Retrieving articles with high relevance from a search engine based on the specific features; and
(iii) Machine learning is used to learn a marking model to find articles related to the particular feature in text crawled from the web.
8. An apparatus for performing intelligent manuscript style rewriting based on a deep learning model, comprising:
Means for receiving a source document associated with a source style and at least one target style;
for each of the one or more natural sentences of the source document:
means for generating, by a deep learning model, semantic vectors corresponding to natural sentences of the source document based on the source style; and
means for generating, by the deep learning model, a target natural sentence corresponding to the semantic vector based on the at least one target style; and
means for sequentially merging the target natural sentences to generate at least one target document associated with the at least one target style;
the deep learning model is a seq2seq model and comprises an encoder and a decoder;
further comprises:
a module for word segmentation of natural sentences of the source document, and wherein
The encoder of the deep learning model comprises a plurality of cascaded first unit modules, wherein each word in the segmented natural sentence is sequentially and respectively input to the plurality of cascaded first unit modules;
means for generating, by the plurality of cascaded first-unit modules, an output of the present level based on the output of a first-level first unit that uses the source style as an output of a previous level and a word of the present level input in the segmented natural sentence, and a final-level first unit that outputs a semantic vector corresponding to the natural sentence of the source document, the source style characterizing a classification, identification, genre, or applicable crowd of the source document;
The decoder of the deep learning model comprises a plurality of cascaded second unit modules, wherein a first-stage second unit takes the target style as the output of a previous stage;
the device further comprises:
means for generating, by the second unit means of the plurality of concatenations, target words corresponding to the semantic vectors based on the at least one target style, respectively; and
and a module for combining the various generated target words of the plurality of cascaded second unit modules to form a target natural sentence.
9. The apparatus of claim 8, wherein
Semantic vectors corresponding to natural sentences of the source manuscript are generated by an encoder of the deep learning model based on the source style, and
a target natural sentence corresponding to the semantic vector is generated by a decoder of the deep learning model based on the at least one target style.
10. The apparatus of claim 8, further comprising means for filling the inputs of the redundant first unit modules with blanks when the number of words resulting from the word segmentation of the natural sentence of the source document is less than the number of the plurality of cascaded first unit modules.
11. The apparatus of claim 8, further comprising means for splitting the natural sentence of the source document when the number of words obtained after the word segmentation of the natural sentence is greater than the number of the plurality of cascaded first unit modules.
12. The apparatus of claim 8, wherein the source style is received externally or extracted directly from the source document.
13. The apparatus of claim 8, further comprising means for training the deep learning model, wherein the means for training the deep learning model comprises:
a module for setting a feature library, the feature library comprising two or more features related to intelligent manuscript style overwriting;
a module for generating a document material library comprising pairs of articles associated with at least two features in the feature library; and
means for training the deep learning model based on the library of manuscript materials.
14. The apparatus of claim 13, wherein the means for generating a library of document materials comprises one or more of:
for a particular feature in the feature library:
(i) Means for crawling all articles with the particular feature from a featured website;
(ii) Means for retrieving articles of high relevance from a search engine based on the particular feature; and
(iii) And means for learning a marking model using machine learning to find articles related to the particular feature in text crawled from the web.
15. An apparatus for performing intelligent manuscript style rewriting based on a deep learning model, comprising:
a memory; and
a processor coupled to the memory, the processor configured to:
receiving a source document associated with a source style, and at least one target style, the source document including one or more natural sentences;
for each of the one or more natural sentences of the source document:
generating semantic vectors corresponding to natural sentences of the source manuscript based on the source style by a deep learning model; and
generating, by the deep learning model, a target natural sentence corresponding to the semantic vector based on the at least one target style; and
sequentially merging the target natural sentences to generate at least one target document associated with the at least one target style;
The deep learning model is a seq2seq model and comprises an encoder and a decoder;
further comprises:
performing word segmentation on a natural sentence of the source manuscript, and wherein
The encoder of the deep learning model comprises a plurality of cascaded first unit modules, wherein each word in the segmented natural sentence is sequentially and respectively input to the plurality of cascaded first unit modules;
generating, by the plurality of cascaded first unit modules, an output of the present level based on the output of a first unit of the previous level and words input to the present level in the segmented natural sentences, wherein the first unit of the first level uses the source style as the output of the previous level, and the first unit of the last level outputs semantic vectors corresponding to the natural sentences of the source manuscript, the source style characterizing classification, identification, genre, or applicable crowd of the source manuscript;
the decoder of the deep learning model comprises a plurality of cascaded second unit modules, wherein a first-stage second unit takes the target style as the output of a previous stage;
further comprises:
generating, by the plurality of cascaded second unit modules, target words corresponding to the semantic vectors based on the at least one target style, respectively; and
And combining the various generated target words of the plurality of cascaded second unit modules to form a target natural sentence.
16. A computer readable medium storing processor executable instructions for intelligent manuscript-style overwriting based on a deep learning model, wherein the processor executable instructions when executed by a processor cause the processor to:
receiving a source document associated with a source style, and at least one target style;
for each of the one or more natural sentences of the source document:
generating semantic vectors corresponding to natural sentences of the source manuscript based on the source style by a deep learning model; and
generating, by the deep learning model, a target natural sentence corresponding to the semantic vector based on the at least one target style; and
sequentially merging the target natural sentences to generate at least one target document associated with the at least one target style;
the deep learning model is a seq2seq model and comprises an encoder and a decoder;
further comprises:
performing word segmentation on a natural sentence of the source manuscript, and wherein
The encoder of the deep learning model comprises a plurality of cascaded first unit modules, wherein each word in the segmented natural sentence is sequentially and respectively input to the plurality of cascaded first unit modules;
Generating, by the plurality of cascaded first unit modules, an output of the present level based on the output of a first unit of the previous level and words input to the present level in the segmented natural sentences, wherein the first unit of the first level uses the source style as the output of the previous level, and the first unit of the last level outputs semantic vectors corresponding to the natural sentences of the source manuscript, the source style characterizing classification, identification, genre, or applicable crowd of the source manuscript;
the decoder of the deep learning model comprises a plurality of cascaded second unit modules, wherein a first-stage second unit takes the target style as the output of a previous stage;
further comprises:
generating, by the plurality of cascaded second unit modules, target words corresponding to the semantic vectors based on the at least one target style, respectively; and
and combining the various generated target words of the plurality of cascaded second unit modules to form a target natural sentence.
CN201910780331.4A 2019-08-22 2019-08-22 Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model Active CN110688834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910780331.4A CN110688834B (en) 2019-08-22 2019-08-22 Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910780331.4A CN110688834B (en) 2019-08-22 2019-08-22 Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model

Publications (2)

Publication Number Publication Date
CN110688834A CN110688834A (en) 2020-01-14
CN110688834B true CN110688834B (en) 2023-10-31

Family

ID=69108564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910780331.4A Active CN110688834B (en) 2019-08-22 2019-08-22 Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model

Country Status (1)

Country Link
CN (1) CN110688834B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553146A (en) * 2020-05-09 2020-08-18 杭州中科睿鉴科技有限公司 News writing style modeling method, writing style-influence analysis method and news quality evaluation method
CN111768755A (en) * 2020-06-24 2020-10-13 华人运通(上海)云计算科技有限公司 Information processing method, information processing apparatus, vehicle, and computer storage medium
CN111931496B (en) * 2020-07-08 2022-11-15 广东工业大学 Text style conversion system and method based on recurrent neural network model
CN114519339A (en) * 2020-11-20 2022-05-20 北京搜狗科技发展有限公司 Input method, input device and input device
WO2023115914A1 (en) * 2021-12-20 2023-06-29 山东浪潮科学研究院有限公司 Method and device for generating document having consistent writing style, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997370A (en) * 2015-08-07 2017-08-01 谷歌公司 Text classification and conversion based on author
CN108304436A (en) * 2017-09-12 2018-07-20 深圳市腾讯计算机系统有限公司 The generation method of style sentence, the training method of model, device and equipment
CN109344391A (en) * 2018-08-23 2019-02-15 昆明理工大学 Multiple features fusion Chinese newsletter archive abstraction generating method neural network based
CN109583952A (en) * 2018-11-28 2019-04-05 深圳前海微众银行股份有限公司 Advertising Copy processing method, device, equipment and computer readable storage medium
CN109885811A (en) * 2019-01-10 2019-06-14 平安科技(深圳)有限公司 Written style conversion method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN110688834A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110688834B (en) Method and equipment for carrying out intelligent manuscript style rewriting based on deep learning model
CN109753566B (en) Model training method for cross-domain emotion analysis based on convolutional neural network
CN108009228B (en) Method and device for setting content label and storage medium
CN109844708B (en) Recommending media content through chat robots
WO2020107878A1 (en) Method and apparatus for generating text summary, computer device and storage medium
CN111190997B (en) Question-answering system implementation method using neural network and machine learning ordering algorithm
CN111061862B (en) Method for generating abstract based on attention mechanism
CN109960724B (en) Text summarization method based on TF-IDF
CN111027595B (en) Double-stage semantic word vector generation method
US20070260564A1 (en) Text Segmentation and Topic Annotation for Document Structuring
CN109977220B (en) Method for reversely generating abstract based on key sentence and key word
CN112163092B (en) Entity and relation extraction method, system, device and medium
CN112749253B (en) Multi-text abstract generation method based on text relation graph
CN110807324A (en) Video entity identification method based on IDCNN-crf and knowledge graph
CN111178053B (en) Text generation method for generating abstract extraction by combining semantics and text structure
CN112749274B (en) Chinese text classification method based on attention mechanism and interference word deletion
CN111125333B (en) Generation type knowledge question-answering method based on expression learning and multi-layer covering mechanism
CN115794999A (en) Patent document query method based on diffusion model and computer equipment
CN111723295A (en) Content distribution method, device and storage medium
CN113806554A (en) Knowledge graph construction method for massive conference texts
CN114281982B (en) Book propaganda abstract generation method and system adopting multi-mode fusion technology
CN112183106A (en) Semantic understanding method and device based on phoneme association and deep learning
CN115935975A (en) Controllable-emotion news comment generation method
CN113127604B (en) Comment text-based fine-grained item recommendation method and system
CN111382333B (en) Case element extraction method in news text sentence based on case correlation joint learning and graph convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Advanced New Technologies Co., Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant before: Advantageous New Technologies Co., Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Advantageous New Technologies Co., Ltd.

Address before: Fourth Floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant
GR01 Patent grant