CN106933785A - A kind of abstraction generating method based on recurrent neural network - Google Patents

A kind of abstraction generating method based on recurrent neural network Download PDF

Info

Publication number
CN106933785A
CN106933785A (Application CN201710099638.9A)
Authority
CN
China
Prior art keywords
state vector
recurrent neural
neural network
decoder
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710099638.9A
Other languages
Chinese (zh)
Inventor
贾江龙
刘聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201710099638.9A priority Critical patent/CN106933785A/en
Publication of CN106933785A publication Critical patent/CN106933785A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/126Character encoding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/151Transformation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present invention relates to a summary generation method based on recurrent neural networks. At the current time t, the state vector s_t of the recurrent neural network decoder is compared with the state vectors of the recurrent neural network encoder at every time step, and the encoder state vector H most strongly associated with s_t is found. A state vector c_t is then computed from H and the d state vectors on each side of H, a new state vector d_t is computed from c_t and s_t, and the next character or word of the output sequence is decoded from d_t, where d is an integer greater than or equal to 1.

Description

A kind of abstraction generating method based on recurrent neural network
Technical field
The present invention relates to the field of neural networks, and more particularly to a summary generation method based on recurrent neural networks.
Background technology
Summarization generation is a major issue of natural language processing field, and it mainly has two kinds of different forms:It is a kind of It is the purport for generating source document, another kind is the title for generating source document.The former is general long, potentially include dozens of word or Word, and the latter is relatively short, general only ten words or so.Summary is that the height of source document is summarized, and it must be simple and clear Expression source document the meaning.Traditional abstraction generating method can be divided into three steps:First, according to certain standard (such as participle) Source document is divided into many small pieces;2nd, according to each segment weight (such as tf-idf), therefrom select weight ratio compared with Those big segments;3rd, the larger segment of those weights is combined into by new sentence according to certain algorithm, as plucking for source document Will.
The prior art provides a summary generation method based on a recurrent neural network encoder and decoder. The method is in fact a sequence-to-sequence machine learning process: its input may be a sentence, a paragraph or an article, and its output is the gist or title of that input, so both input and output can be regarded as time series composed of characters or words. Compared with traditional summary generation methods, this is an abstractive summarization process: given an input sequence, the method searches for key words over the whole vocabulary and recursively composes them, from front to back, into a new sentence that serves as the output sequence, i.e. the summary.
The role of the recurrent neural network encoder is to convert, or map, the given input sequence into an intermediate representation, i.e. to convert the input paragraph or article into a vector representation H. Assume the input sequence is X = {x_1, x_2, ..., x_n}, where n is the length of the input sequence. As shown in Fig. 1, the operation of the encoder can be expressed as:
$h_t = f(x_t, h_{t-1})$
where x_t denotes the vector corresponding to the t-th element of the input sequence, h_t denotes the state vector of the encoder at time t, and f denotes a nonlinear mapping function. H denotes the vector representation of the input sequence and is usually taken as H = h_n, i.e. the state vector of the recurrent neural network encoder at the last time step is used as the intermediate vector representation of the input sequence. eos is a special mark that indicates the end of the input sequence, the end of the encoder's work and the beginning of the decoder's work.
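A minimal sketch of the encoder recursion h_t = f(x_t, h_{t-1}) is given below. The tanh cell standing in for the unspecified nonlinear map f, and the parameter shapes, are illustrative assumptions; H is taken as the final state h_n as described above.

```python
import numpy as np

def encode(X, Wx, Wh, b):
    """Run the encoder recursion h_t = f(x_t, h_{t-1}) over an input sequence.

    X          : (n, input_dim) array, one row per input element x_t.
    Wx, Wh, b  : encoder parameters (assumed shapes; a plain tanh cell stands in
                 for the unspecified nonlinear map f).
    Returns all hidden states h_1..h_n and H = h_n, the intermediate representation.
    """
    hidden_dim = Wh.shape[0]
    h = np.zeros(hidden_dim)
    states = []
    for x_t in X:
        h = np.tanh(Wx @ x_t + Wh @ h + b)   # h_t = f(x_t, h_{t-1})
        states.append(h)
    return np.stack(states), states[-1]       # (h_1..h_n), H = h_n
```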
Correspondingly, the role of the recurrent neural network decoder is to generate the output sequence from the intermediate vector representation H produced by the encoder. Assume the output sequence is Y = {y_1, y_2, ..., y_m}, where m is the length of the output sequence. It should be noted that the decoder does not generate the whole output sequence at once; instead, it generates one character or word of the output sequence at each time step, in order from front to back, until the whole output sequence has been generated. As shown in Fig. 2, the operation of the decoder can be expressed as:
$P(y_t \mid y_1, \ldots, y_{t-1}, H) = g(s_t, H)$
$s_t = f(y_{t-1}, s_{t-1})$
where P(Y|X) denotes the probability of obtaining the output Y given the input X, y_t denotes the character or word decoded at time t in the output sequence, and s_t denotes the state vector of the decoder at time t. f and g denote nonlinear transformation functions; here g is the softmax function.
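In the same spirit, one decoder step can be sketched as follows. The specific parameterization of f and the way H enters the output layer are illustrative assumptions, with g realized as the softmax over the vocabulary as stated above.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def decode_step(y_prev_vec, s_prev, H, Wy, Ws, Wo_s, Wo_H, bo):
    """One decoder step: s_t = f(y_{t-1}, s_{t-1}) and P(y_t | ...) = g(s_t, H).

    y_prev_vec : vector for the previously emitted character or word y_{t-1}.
    s_prev     : decoder state s_{t-1}.
    H          : intermediate representation from the encoder.
    Remaining arguments are assumed parameter matrices/bias of the decoder.
    """
    s_t = np.tanh(Wy @ y_prev_vec + Ws @ s_prev)   # s_t = f(y_{t-1}, s_{t-1})
    probs = softmax(Wo_s @ s_t + Wo_H @ H + bo)    # g is softmax over the vocabulary
    return s_t, probs
```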
The summary generation method based on a recurrent neural network encoder and decoder described above has a shortcoming: the generation of the output sequence depends only on the encoder's state vector at the last time step and is unrelated to the encoder's other state vectors. As the length of the recurrent neural network grows, the feature vector it extracts becomes increasingly related to the states near the end of the input sequence and less related to the states near its beginning, which can cause information to decay. Decoding only from the final state of the encoder therefore weakens the association between the output sequence and the input sequence.
The content of the invention
To overcome the information-decay defect of the above prior art during decoding, the present invention provides a summary generation method based on a recurrent neural network.
To achieve the above object of the invention, the technical solution adopted is as follows:
A summary generation method based on a recurrent neural network: at the current time t, the state vector s_t of the recurrent neural network decoder is compared with the state vectors of the recurrent neural network encoder at every time step, and the encoder state vector H most strongly associated with s_t is found; a state vector c_t is then computed from H and the d state vectors on each side of H; a new state vector d_t is obtained from c_t and s_t; and the next character or word of the output sequence is decoded from d_t, where d is an integer greater than or equal to 1.
In the above scheme, when the next character or word of the output sequence is to be generated, the method provided by the present invention does not decode directly from the decoder's current state vector. Instead, the decoder's current state vector s_t is compared with the encoder's state vectors at every time step to find the encoder state vector H most similar to s_t; a state vector c_t is computed from H and the several state vectors on either side of it, and a new state vector d_t is computed from c_t and s_t. d_t reflects the alignment information of the character or word about to be generated, i.e. which part of the input the next output should correspond to. Finally, the next character or word of the output sequence is decoded from d_t. By finding H, generating c_t from H, computing the new state vector d_t from c_t and s_t, and decoding the next character or word of the output sequence from d_t, the method resolves the alignment relation between the output sequence and the input sequence, improves the quality and efficiency of the output sequence, and keeps the association between the output sequence and the input sequence at a higher level.
Preferably, the detailed process of finding the state vector H most strongly associated with the state vector s_t is as follows:

$p_t = n \cdot \mathrm{sigmoid}(v_p^T \tanh(w_p s_t))$

where p_t denotes the position of the state vector H in the encoder's sequence of state vectors, n denotes the length of the input sequence, and v_p and w_p denote parameters to be learned. The sigmoid function is defined as:

$\mathrm{sigmoid}(x) = 1/(1 + e^{-x})$

and the tanh function used here is defined as:

$\tanh(x) = \begin{cases} -1, & x < -1 \\ x, & -1 \le x \le 1 \\ 1, & x \ge 1 \end{cases}$
Preferably, the detailed process of computing the state vector c_t is as follows:

$c_t = \sum_{i = p_t - d}^{p_t + d} \alpha_{ti} h_i$

where α_ti denotes a weight and h_i denotes the i-th state vector of the recurrent neural network encoder.
Preferably, α_ti is obtained as follows:

$\alpha_{ti} = \exp(e_{ti}) / \sum_{k=1}^{n} e_{tk}$

where e_ti denotes the relevance weight between the decoder state vector and the encoder state vector:

$e_{ti} = s_i \cdot h_i$
Preferably, the state vector d_t is computed as follows:

$d_t = \mathrm{sigmoid}(c_t * s_t)$

where the sigmoid function is defined as:

$\mathrm{sigmoid}(x) = 1/(1 + e^{-x})$.
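Putting the preceding formulas together, one attention step of the method might be sketched as follows. The names v_p, w_p, s_t, h_i, c_t, d_t follow the text and the hard tanh matches the definition above; the dot-product form of e_ti (computed here against the current decoder state), the standard softmax normalization of the weights, and all shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_tanh(x):
    # tanh as defined in the description: clipped to [-1, 1].
    return np.clip(x, -1.0, 1.0)

def attention_step(s_t, encoder_states, v_p, w_p, d=2):
    """One attention step: locate H, then build c_t and d_t.

    s_t            : decoder state at time t.
    encoder_states : (n, hidden_dim) array of encoder states h_1..h_n.
    v_p, w_p       : learned position parameters named in the text.
    d              : half-width of the window around H (d >= 1).
    """
    n = encoder_states.shape[0]
    # p_t = n * sigmoid(v_p^T * tanh(w_p * s_t)) gives the position of H.
    p_t = int(round(n * sigmoid(v_p @ hard_tanh(w_p @ s_t))))
    p_t = min(max(p_t, 0), n - 1)

    lo, hi = max(0, p_t - d), min(n, p_t + d + 1)
    window = encoder_states[lo:hi]

    # Relevance scores e_ti and softmax-normalized weights alpha_ti over the window.
    e = window @ s_t
    alpha = np.exp(e - e.max())
    alpha = alpha / alpha.sum()

    c_t = (alpha[:, None] * window).sum(axis=0)   # direct alignment information
    d_t = sigmoid(c_t * s_t)                      # indirect alignment information
    return c_t, d_t, p_t
```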
In the above scheme, the state vector d_t is in fact a locally optimal strategy for handling the alignment relation. The alignment between the output sequence and the input sequence should be globally related rather than only locally related, which means that the alignment information at the current time should influence not only the decoding at the current time but also subsequent decoding. Therefore, the method provided here feeds the alignment information d_t of the current time as extra information into the next state of the decoder.
Preferably, if the recurrent neural network encoder and decoder have a multi-layer structure, the state vector c_t and/or d_t is fed directly into the first layer or the last layer of the decoder and used in the decoder's decoding at the next time step; if the recurrent neural network encoder and decoder have a single-layer structure, the state vector c_t and/or d_t is fed directly into the decoder.
Compared with the prior art, the beneficial effects of the invention are as follows:
When the next character or word of the output sequence is generated, the method provided by the present invention does not decode directly from the decoder's current state vector. Instead, the decoder's current state vector s_t is compared with the encoder's state vectors at every time step to find the encoder state vector H most similar to s_t; a state vector c_t is computed from H and the several state vectors on either side of it, and a new state vector d_t is computed from c_t and s_t. d_t reflects the alignment information of the character or word about to be generated. Finally, the next character or word of the output sequence is decoded from d_t. In this way the method resolves the alignment relation between the output sequence and the input sequence, improves the quality and efficiency of the output sequence, and keeps the association between the output sequence and the input sequence at a higher level.
Brief description of the drawings
Fig. 1 is a schematic diagram of the operation of the encoder.
Fig. 2 is a schematic diagram of the operation of the decoder.
Fig. 3 is a schematic diagram of the operation of the attention mechanism.
Fig. 4 (a), (b), (c) and (d) are schematic diagrams of the four different feed mechanisms.
Specific embodiment
The accompanying drawings are for illustration only and shall not be construed as limiting this patent.
The present invention is further described below in conjunction with the drawings and embodiments.
Embodiment 1
The main improvement of the method provided by the present invention is the added attention mechanism, shown in Fig. 3, which works as follows:
At the current time t, the state vector s_t of the recurrent neural network decoder is compared with the state vectors of the recurrent neural network encoder at every time step, and the encoder state vector H most strongly associated with s_t is found; a state vector c_t is computed from H and the d state vectors on each side of H; a new state vector d_t is obtained from c_t and s_t; and the next character or word of the output sequence is decoded from d_t, where d is an integer greater than or equal to 1.
In this scheme, when the next character or word of the output sequence is to be generated, the method does not decode directly from the decoder's current state vector. Instead, the decoder's current state vector s_t is compared with the encoder's state vectors at every time step to find the encoder state vector H most similar to s_t; a state vector c_t is computed from H and the several state vectors on either side of it, and a new state vector d_t is computed from c_t and s_t. d_t reflects the alignment information of the character or word about to be generated. Finally, the next character or word of the output sequence is decoded from d_t. In this way the method resolves the alignment relation between the output sequence and the input sequence, improves the quality and efficiency of the output sequence, and keeps the association between the output sequence and the input sequence at a higher level.
Embodiment 2
The attention mechanism resolves, to a certain extent, the alignment relation between the output sequence and the input sequence, thereby improving the quality and efficiency of the output sequence. However, the attention mechanism is in fact a locally optimal strategy for handling alignment. To further improve the quality of the output sequence, i.e. the summary of the source document, this embodiment uses a new, globally optimal strategy: the feed mechanism.
There are in fact two kinds of alignment information in the attention mechanism: direct alignment information c_t and indirect alignment information d_t. Both reflect the position and content with which the next character or word of the output sequence should align. The alignment between the output sequence and the input sequence should be globally related rather than only locally related, which means that the alignment information at the current time should influence not only the decoding at the current time but also subsequent decoding. Therefore, the alignment information of the current time can be fed as extra information into the next state of the decoder.
The recurrent neural network encoder and decoder may be single-layer or multi-layer, as shown in Fig. 4. This embodiment combines two choices for the feed: first, which kind of alignment information is fed into the decoder at the next time step, the direct alignment information or the indirect alignment information; second, into which layer of the decoder at the next time step the alignment information is fed, the first layer or the last layer. Combining these two choices yields four different feed mechanisms, as shown in Fig. 4: indirect alignment information fed into the last layer; direct alignment information fed into the last layer; indirect alignment information fed into the first layer; and direct alignment information fed into the first layer.
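Under the assumptions of this embodiment, the four feed mechanisms can be summarized as a choice of (which alignment vector, which layer). The helper below is only an illustrative sketch of that bookkeeping, not a full decoder implementation; the numbering of the variants follows the order listed above and is itself an assumption.

```python
# The four feed mechanisms combine two choices described above:
#   which alignment information is fed back (c_t = direct, d_t = indirect), and
#   which decoder layer of the next time step receives it (first or last layer).
# The feed1..feed4 labels follow the order the variants are listed in the text
# (an assumption), with feed2 = direct alignment fed into the last layer.
FEED_VARIANTS = {
    "feed1": ("indirect", "last"),
    "feed2": ("direct", "last"),
    "feed3": ("indirect", "first"),
    "feed4": ("direct", "first"),
}

def alignment_feed(c_t, d_t, variant, num_layers):
    """Return, per decoder layer, the extra alignment vector fed at time t+1.

    Layers that receive no alignment information get None; the selected layer
    receives the chosen alignment vector, which would typically be concatenated
    onto that layer's normal input (an illustrative convention).
    """
    which, where = FEED_VARIANTS[variant]
    align = c_t if which == "direct" else d_t
    target = 0 if where == "first" else num_layers - 1
    return [align if layer == target else None for layer in range(num_layers)]
```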
Embodiment 3
The sequence-to-sequence learning method used for summary generation is essentially a machine learning method, so it can be trained with general machine learning training procedures. To speed up training, this embodiment trains with mini-batch gradient descent. Because the Chinese vocabulary is very large, decoding would otherwise take a long time; to reduce the decoding time, this embodiment uses a character table consisting of only the 4000 most common Chinese characters, and all other characters are replaced with a special mark; a sketch of this setup is given below. Tables 1 and 2 then report tests and evaluations of the generation method on a test data set.
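A sketch of this setup (a 4000-character table with a special replacement mark for out-of-vocabulary characters, and mini-batch iteration over training pairs) might look as follows. The helper names, the <unk> and <eos> marks and the batch size are assumptions, and the actual loss and parameter update depend on the chosen framework.

```python
from collections import Counter
import random

def build_char_vocab(corpus_texts, vocab_size=4000, unk="<unk>", eos="<eos>"):
    # Keep only the 4000 most frequent Chinese characters; everything else is
    # mapped to a special <unk> mark, as described in this embodiment.
    counts = Counter(ch for text in corpus_texts for ch in text)
    vocab = {unk: 0, eos: 1}
    for ch, _ in counts.most_common(vocab_size):
        vocab.setdefault(ch, len(vocab))
    return vocab

def encode_text(text, vocab, unk="<unk>"):
    # Map each character to its index, replacing out-of-table characters by <unk>.
    return [vocab.get(ch, vocab[unk]) for ch in text]

def minibatches(pairs, batch_size=32):
    # Mini-batch gradient descent: shuffle (source, summary) pairs and yield batches.
    random.shuffle(pairs)
    for i in range(0, len(pairs), batch_size):
        yield pairs[i:i + batch_size]

# Training loop skeleton (loss and parameter update depend on the chosen
# framework and are only indicated here):
# for epoch in range(num_epochs):
#     for batch in minibatches(training_pairs):
#         loss = seq2seq_loss(model, batch)
#         update_parameters(model, loss, learning_rate)
```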
Table 1
Model R-1 R-2 R-L BLEU
Multilayer RNN 30.4 16.0 27.3 9.5
Multilayer RNN+attention 32.2 17.8 28.8 11.4
Multilayer RNN+attention+feed1 32.2 17.7 28.7 11.3
Multilayer RNN+attention+feed2 33.1 18.3 29.5 12.0
Multilayer RNN+attention+feed3 30.1 16.7 27.6 10.3
Multilayer RNN+attention+feed4 31.1 17.1 27.9 10.9
Table 2
Model Informativeness Grammar Conciseness
Multilayer RNN+attention 2.87 3.95 3.28
Multilayer RNN+attention+feed1 2.85 4.01 2.93
Multilayer RNN+attention+feed2 3.02 4.23 3.30
Multilayer RNN+attention+feed3 3.00 3.93 3.10
Multilayer RNN+attention+feed4 2.88 4.03 3.03
Manual 3.88 4.42 3.80
Table 1 evaluates the generation method with two automatic metrics, ROUGE and BLEU. ROUGE evaluates from the perspective of recall, while BLEU evaluates from the perspective of precision. Table 1 shows that the second feed strategy, i.e. feeding the direct alignment information into the last layer of the decoder, performs best in the tests and exceeds the generation method that uses only the attention mechanism, which demonstrates the effectiveness of the generation method.
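As an illustration of the recall-oriented view taken by ROUGE, a minimal ROUGE-1 recall computation might look as follows; tokenization and any stemming or stop-word handling are left out as assumptions.

```python
from collections import Counter

def rouge_1_recall(candidate_tokens, reference_tokens):
    # ROUGE-1 recall: fraction of reference unigrams covered by the candidate.
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)
```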
Table 2 evaluates the generation method with three human-rated metrics. Informativeness reflects how much information the summary covers, Grammar reflects how grammatical the summary is, and Conciseness reflects how concise the summary is; the Manual row represents manually written summaries. Table 2 leads to a conclusion similar to Table 1: the feed mechanism outperforms the attention mechanism, and the strategy of feeding the direct alignment information into the last layer of the decoder again performs best.
Comparing the two tables shows that the summary generation method achieves high efficiency and high accuracy, especially when the second feed mechanism is used.
Obviously, the above embodiments of the present invention are merely examples given to illustrate the present invention clearly and are not intended to limit the embodiments of the present invention. For those of ordinary skill in the art, other changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (7)

1. A summary generation method based on a recurrent neural network, characterized in that: at the current time t, the state vector s_t of the recurrent neural network decoder is compared with the state vectors of the recurrent neural network encoder at every time step, and the encoder state vector H most strongly associated with s_t is found; a state vector c_t is then computed from H and the d state vectors on each side of H; a new state vector d_t is computed from c_t and s_t; and the next character or word of the output sequence is decoded from d_t, where d is an integer greater than or equal to 1.
2. The summary generation method based on a recurrent neural network according to claim 1, characterized in that:
the detailed process of finding the state vector H most strongly associated with the state vector s_t is as follows:

$p_t = n \cdot \mathrm{sigmoid}(v_p^T \tanh(w_p s_t))$

where p_t denotes the position of the state vector H in the encoder's sequence of state vectors, n denotes the length of the input sequence, and v_p and w_p denote parameters to be learned; the sigmoid function is defined as:

$\mathrm{sigmoid}(x) = 1/(1 + e^{-x})$

and the tanh function is defined as:

$\tanh(x) = \begin{cases} -1, & x < -1 \\ x, & -1 \le x \le 1 \\ 1, & x \ge 1 \end{cases}$.
3. The summary generation method based on a recurrent neural network according to claim 2, characterized in that: the detailed process of computing the state vector c_t is as follows:

$c_t = \sum_{i = p_t - d}^{p_t + d} \alpha_{ti} h_i$

where α_ti denotes a weight and h_i denotes a state vector of the recurrent neural network encoder.
4. The summary generation method based on a recurrent neural network according to claim 3, characterized in that: α_ti is obtained as follows:

$\alpha_{ti} = \exp(e_{ti}) / \sum_{k=1}^{n} e_{tk}$

where e_ti denotes the relevance weight between the decoder state vector and the encoder state vector:

$e_{ti} = s_i \cdot h_i$
5. The summary generation method based on a recurrent neural network according to claim 1, characterized in that:
the state vector d_t is computed as follows:

$d_t = \mathrm{sigmoid}(c_t * s_t)$

where the sigmoid function is defined as:

$\mathrm{sigmoid}(x) = 1/(1 + e^{-x})$.
6. The summary generation method based on a recurrent neural network according to claim 1, characterized in that: the state vector c_t and/or d_t is fed into the decoding of the decoder at the next time step.
7. The summary generation method based on a recurrent neural network according to claim 6, characterized in that: if the recurrent neural network encoder and decoder have a multi-layer structure, the state vector c_t and/or d_t is fed directly into the first layer or the last layer of the decoder and used in the decoder's decoding at the next time step; if the recurrent neural network encoder and decoder have a single-layer structure, the state vector c_t and/or d_t is fed directly into the decoder.
CN201710099638.9A 2017-02-23 2017-02-23 A kind of abstraction generating method based on recurrent neural network Pending CN106933785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710099638.9A CN106933785A (en) 2017-02-23 2017-02-23 A kind of abstraction generating method based on recurrent neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710099638.9A CN106933785A (en) 2017-02-23 2017-02-23 A kind of abstraction generating method based on recurrent neural network

Publications (1)

Publication Number Publication Date
CN106933785A true CN106933785A (en) 2017-07-07

Family

ID=59423748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710099638.9A Pending CN106933785A (en) 2017-02-23 2017-02-23 A kind of abstraction generating method based on recurrent neural network

Country Status (1)

Country Link
CN (1) CN106933785A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766506A (en) * 2017-10-20 2018-03-06 哈尔滨工业大学 A kind of more wheel dialog model construction methods based on stratification notice mechanism
CN108153864A (en) * 2017-12-25 2018-06-12 北京牡丹电子集团有限责任公司数字电视技术中心 Method based on neural network generation text snippet
CN108319668A (en) * 2018-01-23 2018-07-24 义语智能科技(上海)有限公司 Generate the method and apparatus of text snippet
CN108984524A (en) * 2018-07-05 2018-12-11 北京理工大学 A kind of title generation method based on variation neural network topic model
CN109325110A (en) * 2018-08-24 2019-02-12 广东外语外贸大学 Indonesian documentation summary generation method, device, storage medium and terminal device
CN109947930A (en) * 2019-03-12 2019-06-28 上海秘塔网络科技有限公司 Abstraction generating method, device, terminal and computer readable storage medium
CN109948162A (en) * 2019-03-25 2019-06-28 北京理工大学 The production text snippet method of fusion sequence grammer annotation framework
CN110032729A (en) * 2019-02-13 2019-07-19 北京航空航天大学 A kind of autoabstract generation method based on neural Turing machine
CN110598779A (en) * 2017-11-30 2019-12-20 腾讯科技(深圳)有限公司 Abstract description generation method and device, computer equipment and storage medium
CN111192576A (en) * 2018-11-14 2020-05-22 三星电子株式会社 Decoding method, speech recognition device and system
CN111386537A (en) * 2017-10-27 2020-07-07 谷歌有限责任公司 Decoder-only attention-based sequence-switched neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868829A (en) * 2015-02-06 2016-08-17 谷歌公司 Recurrent neural networks for data item generation
CN105930314A (en) * 2016-04-14 2016-09-07 清华大学 Text summarization generation system and method based on coding-decoding deep neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868829A (en) * 2015-02-06 2016-08-17 谷歌公司 Recurrent neural networks for data item generation
CN105930314A (en) * 2016-04-14 2016-09-07 清华大学 Text summarization generation system and method based on coding-decoding deep neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ALEX ALIFIMOFF: "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks", 《PROCEEDINGS OF NAACL-HLT 2016》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766506A (en) * 2017-10-20 2018-03-06 哈尔滨工业大学 A kind of more wheel dialog model construction methods based on stratification notice mechanism
CN111386537A (en) * 2017-10-27 2020-07-07 谷歌有限责任公司 Decoder-only attention-based sequence-switched neural network
CN110598779B (en) * 2017-11-30 2022-04-08 腾讯科技(深圳)有限公司 Abstract description generation method and device, computer equipment and storage medium
CN110598779A (en) * 2017-11-30 2019-12-20 腾讯科技(深圳)有限公司 Abstract description generation method and device, computer equipment and storage medium
CN108153864A (en) * 2017-12-25 2018-06-12 北京牡丹电子集团有限责任公司数字电视技术中心 Method based on neural network generation text snippet
CN108319668A (en) * 2018-01-23 2018-07-24 义语智能科技(上海)有限公司 Generate the method and apparatus of text snippet
CN108319668B (en) * 2018-01-23 2021-04-20 义语智能科技(上海)有限公司 Method and equipment for generating text abstract
CN108984524A (en) * 2018-07-05 2018-12-11 北京理工大学 A kind of title generation method based on variation neural network topic model
CN109325110A (en) * 2018-08-24 2019-02-12 广东外语外贸大学 Indonesian documentation summary generation method, device, storage medium and terminal device
CN109325110B (en) * 2018-08-24 2021-06-25 广东外语外贸大学 Indonesia document abstract generation method and device, storage medium and terminal equipment
CN111192576A (en) * 2018-11-14 2020-05-22 三星电子株式会社 Decoding method, speech recognition device and system
CN111192576B (en) * 2018-11-14 2024-08-27 三星电子株式会社 Decoding method, voice recognition device and system
CN110032729A (en) * 2019-02-13 2019-07-19 北京航空航天大学 A kind of autoabstract generation method based on neural Turing machine
CN109947930A (en) * 2019-03-12 2019-06-28 上海秘塔网络科技有限公司 Abstraction generating method, device, terminal and computer readable storage medium
CN109948162A (en) * 2019-03-25 2019-06-28 北京理工大学 The production text snippet method of fusion sequence grammer annotation framework

Similar Documents

Publication Publication Date Title
CN106933785A (en) A kind of abstraction generating method based on recurrent neural network
CN106980683B (en) Blog text abstract generating method based on deep learning
CN111897949B (en) Guided text abstract generation method based on Transformer
CN107038159B (en) A kind of neural network machine interpretation method based on unsupervised domain-adaptive
CN109635124B (en) Remote supervision relation extraction method combined with background knowledge
CN106547735A (en) The structure and using method of the dynamic word or word vector based on the context-aware of deep learning
CN108280112A (en) Abstraction generating method, device and computer equipment
CN106383816B (en) The recognition methods of Chinese minority area place name based on deep learning
CN109840322B (en) Complete shape filling type reading understanding analysis model and method based on reinforcement learning
CN107133211A (en) A kind of composition methods of marking based on notice mechanism
CN109635280A (en) A kind of event extraction method based on mark
CN107273355A (en) A kind of Chinese word vector generation method based on words joint training
CN107066973A (en) A kind of video content description method of utilization spatio-temporal attention model
CN103325061B (en) A kind of community discovery method and system
CN109271644A (en) A kind of translation model training method and device
CN107133224A (en) A kind of language generation method based on descriptor
CN107644014A (en) A kind of name entity recognition method based on two-way LSTM and CRF
CN109977199B (en) Reading understanding method based on attention pooling mechanism
CN108874997A (en) A kind of name name entity recognition method towards film comment
CN109977234A (en) A kind of knowledge mapping complementing method based on subject key words filtering
CN111897957B (en) Capsule neural network integrating multi-scale feature attention and text classification method
CN107451278A (en) Chinese Text Categorization based on more hidden layer extreme learning machines
Shah et al. Image captioning using deep neural architectures
CN109145290A (en) Based on word vector with from the semantic similarity calculation method of attention mechanism
CN107247703A (en) Microblog emotional analysis method based on convolutional neural networks and integrated study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned (Effective date of abandoning: 20201013)