CN108629401A - Character level language model prediction method based on local sensing recurrent neural network - Google Patents

Character level language model prediction method based on local sensing recurrent neural network

Info

Publication number
CN108629401A
Authority
CN
China
Prior art keywords: neural network, recurrent neural, layer, local sensing, training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810398231.0A
Other languages
Chinese (zh)
Inventor
刘惠义
王刚
陶颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201810398231.0A priority Critical patent/CN108629401A/en
Publication of CN108629401A publication Critical patent/CN108629401A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The present invention discloses a character-level language model prediction method based on a local sensing recurrent neural network. Using the processing form of a recurrent neural network, three network layers are combined hierarchically: the lower layers capture local inter-character features, while the high layer captures the semantic features of the text, so that the new model has stronger information-integration ability and handles longer data sequences more easily. The method is trained in a supervised manner with BPTT-RNN (back-propagation through time for recurrent neural networks). Adadelta (adaptive learning-rate adjustment) is first used to optimize training until the BPC on the test set falls below 1.45, allowing fast convergence; training then continues with the SGD (stochastic gradient descent) optimization method using a learning rate of 0.0001 and a momentum of 0.9, which yields good test results.

Description

Character level language model prediction method based on local sensing recurrent neural network
Technical field
The invention belongs to the field of natural language processing, and specifically relates to a character-level language model prediction method based on a local sensing recurrent neural network.
Background technology
A recurrent neural network (RNN) is a dynamic model with great expressive power, because an RNN has a high-dimensional, non-linear hidden internal state that allows it to extract dependency information from previously processed inputs. In theory, an RNN with a sufficiently large hidden state can generate sequences of arbitrary complexity, and it has been proved that, given a sufficient number of hidden neurons, an RNN is Turing complete. In practice, however, a standard RNN cannot store information about long input sequences: although the capability of RNNs is very attractive, the internal hidden state becomes unstable after many recursive steps, and the gradient easily vanishes or explodes. This limits the application of Turing-complete RNNs.
In 2011, Sutskever et al. trained a character-level language model using a variant of the RNN and obtained the best performance at that time; later, Graves demonstrated through extensive experiments the powerful ability of recurrent neural networks to capture sequence-structure information. In 2015, Mikolov compared the effects of feed-forward neural networks, maximum-entropy models and n-gram models at both the word level and the character level. Many research results show that, compared with traditional feed-forward neural network models or probabilistic models, recurrent neural networks are better suited to character-level neural language models, which must handle wider data-sequence windows.
However, in a traditional multi-layer recurrent neural network every layer has a similar form and a similar function, which makes it hard to assign distinct roles to the layers and hard to determine how many layers, and how many neurons per layer, are needed. Moreover, when data enter a conventional multi-layer recurrent neural network, at each time step the data are passed only from the bottom network upward to the next layer; information flows along a single path, which makes it difficult to handle longer data sequences.
Summary of the invention
In response to the problems existing in the prior art, the purpose of the present invention is to provide a character-level language model prediction method based on a local sensing recurrent neural network (LA-RNN). Three recurrent-neural-network (RNN) layers are combined hierarchically so that, from the low layer to the high layer, each layer is responsible for a different level of information processing. This gives the new model stronger information-integration ability and makes it easier to handle longer data sequences.
To achieve the above object, the technical solution adopted by the present invention is a character-level language model prediction method based on a local sensing recurrent neural network, comprising the following steps:
Step A, data preprocessing: the PTB data are divided into training, validation and test data sets; all characters contained in the three data sets are sorted by ASCII code; the characters of each data set are then shifted forward by one step and represented by indices to generate the target set;
Step B, neural network construction: the local sensing recurrent neural network comprises, in sequence, non-linearly connected a hidden layers, b hidden layers and h hidden layers; the local sensing recurrent neural network is followed in sequence by a fully connected layer with 102 neurons, a ReLU layer, a fully connected layer with 102 neurons, a ReLU layer, a fully connected layer with 51 neurons and a SoftMax regression layer;
Step C, neural network training: Adadelta is first used to optimize training until the BPC on the test set is below 1.45; training then continues with the SGD optimization method using a learning rate of 0.0001 and a momentum of 0.9; if the BPC on the test set fails to decrease twice in succession during training, the learning rate is halved;
The value of each node is computed by forward propagation of the training data, and the loss function is computed by comparison with the target (expected) value; the error is then back-propagated, and the output error of each layer of neurons is computed layer by layer starting from the output layer. The neural network is trained with the method of step C to obtain the final model training result; the weights and thresholds of each layer are adjusted according to the error gradient-descent method, so that the final output of the modified network approaches the expected value.
Specifically, the a layer is an input preprocessing layer for preprocessing the input data; the b layer is a short-term information extraction layer, which extracts information from the a layer and the h layer as its input data; the h layer is a long-term information synthesis layer for outputting and feeding back data; the ReLU layers are activation functions used to make the network converge quickly, prevent gradient dispersion and save computation.
Specifically, in step A the representation by indices is performed by a OneHot layer that represents the data as 51-dimensional character vectors; the OneHot layer is connected to the front end of the local sensing recurrent neural network through the fully connected layer with 51 neurons.
Specifically, in step B the local sensing recurrent neural network is constructed by the following formulas:
a_t = Tanh(W_a [x_t, a_{t-1}, b_{t-1}] + b_a),
f1_t = Sigmoid(W_f1 [a_t, b_{t-1}] + b_f1),
i1_t = Sigmoid(W_i1 [a_t, b_{t-1}] + b_i1),
b_t = i1_t ⊙ W_ba a_t + f1_t ⊙ W_bb b_{t-1} + b_b,
f2_t = Sigmoid(W_f2 [x_t, b_t, h_{t-1}] + b_f2),
i2_t = Sigmoid(W_i2 [x_t, b_t, h_{t-1}] + b_i2),
h_t = f2_t ⊙ W_hh h_{t-1} + i2_t ⊙ Tanh(W_h [x_t, b_t] + b_h);
where t denotes each time step of data processing; Tanh is the activation function between layer a, layer b and layer h; W denotes a weight matrix; Sigmoid is the activation function arranged within the local sensing recurrent neural network and is used to make the network converge gradually; x is the preprocessed data; and ⊙ is the element-wise multiplication operator. Compared with a traditional RNN model, the local sensing recurrent neural network (LA-RNN) model adds, in addition to the cell state h, the cell states b and a together with the corresponding element-wise "gate" operations f1, i1, f2, i2. f1 is the forget gate from layer a to layer b, i1 is the input gate from layer a to layer b, f2 is the forget gate from layer b to layer h, and i2 is the input gate from layer b to layer h. f1 and f2 decide, according to the corresponding input information, which state information in b and h is no longer needed and should be forgotten, while i1 and i2 decide, according to the corresponding input information, which input information is stored in the states b and h. The RNN formed around the local cell state a is used to obtain local input-stream information; depending on the length of local information to be collected, additional gates f0 and i0 could also be added to control the cell state a more precisely. The RNN formed around the cell state b is used to obtain long-term input-stream information. The RNN formed around the cell state h introduces a peephole link from the input stream x to h, which allows the stream information preserved in cell state b to be corrected to some degree. The simple element-wise linear operations and "gates" between the cell states h and b make it easier to preserve cell-state information over long periods.
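As a concrete reading of these formulas, the following is a minimal sketch of one LA-RNN time step in PyTorch. The class name LARNNCell, the use of nn.Linear for the weight matrices and the separate bias parameter b_b are illustration choices, not taken from the patent; the gate structure and the layer sizes (x = 51, a = 51, b = 102, h = 512) follow the text.

```python
import torch
import torch.nn as nn

class LARNNCell(nn.Module):
    """One time step of the local sensing (LA-)RNN sketched from the formulas above."""
    def __init__(self, x_size=51, a_size=51, b_size=102, h_size=512):
        super().__init__()
        self.W_a  = nn.Linear(x_size + a_size + b_size, a_size)   # produces a_t
        self.W_f1 = nn.Linear(a_size + b_size, b_size)            # forget gate a -> b
        self.W_i1 = nn.Linear(a_size + b_size, b_size)            # input gate a -> b
        self.W_ba = nn.Linear(a_size, b_size, bias=False)
        self.W_bb = nn.Linear(b_size, b_size, bias=False)
        self.b_b  = nn.Parameter(torch.zeros(b_size))             # bias term of b_t
        self.W_f2 = nn.Linear(x_size + b_size + h_size, h_size)   # forget gate b -> h
        self.W_i2 = nn.Linear(x_size + b_size + h_size, h_size)   # input gate b -> h
        self.W_hh = nn.Linear(h_size, h_size, bias=False)
        self.W_h  = nn.Linear(x_size + b_size, h_size)

    def forward(self, x_t, a_prev, b_prev, h_prev):
        a_t = torch.tanh(self.W_a(torch.cat([x_t, a_prev, b_prev], dim=-1)))
        f1  = torch.sigmoid(self.W_f1(torch.cat([a_t, b_prev], dim=-1)))
        i1  = torch.sigmoid(self.W_i1(torch.cat([a_t, b_prev], dim=-1)))
        b_t = i1 * self.W_ba(a_t) + f1 * self.W_bb(b_prev) + self.b_b
        f2  = torch.sigmoid(self.W_f2(torch.cat([x_t, b_t, h_prev], dim=-1)))
        i2  = torch.sigmoid(self.W_i2(torch.cat([x_t, b_t, h_prev], dim=-1)))
        h_t = f2 * self.W_hh(h_prev) + i2 * torch.tanh(self.W_h(torch.cat([x_t, b_t], dim=-1)))
        return a_t, b_t, h_t
```

In use, the cell would be called once per time step, with the three states a_t, b_t, h_t carried across steps and h_t fed to the prediction head.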
Specifically, Dropout layers are added between the three hidden layers of the local sensing recurrent neural network described in step B to reduce over-fitting; the formulas are as follows:
a_t = Tanh(W_a D([x_t, a_{t-1}, b_{t-1}]) + b_a),
f1_t = Sigmoid(W_f1 D([a_t, b_{t-1}]) + b_f1),
i1_t = Sigmoid(W_i1 D([a_t, b_{t-1}]) + b_i1),
b_t = i1_t ⊙ W_ba D(a_t) + f1_t ⊙ W_bb D(b_{t-1}) + b_b,
f2_t = Sigmoid(W_f2 D([x_t, b_t, h_{t-1}]) + b_f2),
i2_t = Sigmoid(W_i2 D([x_t, b_t, h_{t-1}]) + b_i2),
h_t = f2_t ⊙ W_hh D(h_{t-1}) + i2_t ⊙ Tanh(W_h D([x_t, b_t]) + b_h);
where the operator D denotes applying the Dropout operation to its argument.
Specifically, the drop probability of the Dropout layers is set to 0.25; over-fitting of the network is prevented by discarding a portion of the neurons.
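A minimal reading of the D(·) operator under this setting, assuming standard inverted dropout as provided by torch.nn.functional (the helper name is an illustration, not part of the patent):

```python
import torch.nn.functional as F

def D(x, p=0.25, training=True):
    # Dropout (p = 0.25) applied to the (possibly concatenated) gate inputs,
    # active only during training, as in the formulas above.
    return F.dropout(x, p=p, training=training)
```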
Specifically, in step B the numbers of neurons in the a layers, b layers and h layers are 51, 102 and 512 respectively.
Specifically, the loss function uses the negative log-likelihood function (NLL).
Compared with the prior art, the beneficial effects of the invention are as follows: (1) in the three-layer recurrent neural network of the invention, an upper-layer network accesses data directly from the lower layer, or even from the raw data, in a peephole-like manner, and the weight of the data entering each layer is adjusted automatically, so that longer data sequences can be handled effectively; (2) the three-layer recurrent neural network of the invention clearly separates an input preprocessing layer (the a layer), a short-term information extraction layer (the b layer) and a long-term information synthesis layer (the h layer), with 51, 102 and 512 neurons in the a, b and h hidden layers respectively; the division of labour between the layers is clear, which gives the model stronger information-integration ability and also improves the accuracy of the model's predictions.
Description of the drawings
Fig. 1 is a schematic flow diagram of the character-level language model prediction based on the LA-RNN of the present invention;
Fig. 2 is a schematic block diagram of the layered architecture of the LA-RNN neural network model in the present invention;
Fig. 3 is a schematic diagram of the information-transfer structure of the LA-RNN neural network model unrolled in time in the present invention.
Specific implementation mode
The technical scheme of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the invention.
As shown in Figures 1 to 3, the present embodiment provides a character-level language model prediction method based on a local sensing recurrent neural network, which specifically includes the following steps:
S1: obtain the raw data and preprocess it; divide the data into training data, test data and validation data. The model is trained on the PTB data set: the training set contains 5.2M characters and 900K words; the validation set contains 400K characters and 73K words; the test set contains 464K characters and 82K words. After replacing '<unk>' with '|', the character set contains 51 characters. The batch size is 50 and the sequence length is 250; each character is represented by a 51-dimensional OneHot vector;
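A minimal sketch of this character preprocessing; the helper name and the exact handling of the raw PTB files are assumptions, while the listed steps (replace '<unk>' with '|', sort the vocabulary by ASCII code, shift the targets by one step) follow the text.

```python
def build_char_dataset(text):
    """Turn a raw character string into index-encoded inputs and shifted targets."""
    text = text.replace('<unk>', '|')           # '|' stands in for '<unk>'
    vocab = sorted(set(text))                   # characters sorted by ASCII code
    char2idx = {c: i for i, c in enumerate(vocab)}
    ids = [char2idx[c] for c in text]
    inputs, targets = ids[:-1], ids[1:]         # target = input shifted forward by 1 step
    return inputs, targets, char2idx
```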
S2: build the neural network. In the first LA-RNN layer of the language model, the numbers of neurons in the a, b and h hidden layers are 51, 102 and 512 respectively. These are followed in sequence by a fully connected layer with 102 neurons, a ReLU layer, a fully connected layer with 102 neurons, a ReLU layer and a fully connected layer with 51 neurons, whose output finally enters the regression layer for character-probability prediction. The cost (loss) function uses the NLL (negative log-likelihood) function; the ReLU layers are activation functions;
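A hedged sketch of the prediction head described in S2, assuming a PyTorch implementation; LogSoftmax paired with NLLLoss stands in for the SoftMax regression layer plus NLL loss named in the text, and the input size 512 is the h-layer width.

```python
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(512, 102), nn.ReLU(),   # fully connected layer with 102 neurons + ReLU
    nn.Linear(102, 102), nn.ReLU(),   # fully connected layer with 102 neurons + ReLU
    nn.Linear(102, 51),               # fully connected layer with 51 neurons (one per character)
    nn.LogSoftmax(dim=-1),            # SoftMax regression layer (in log space)
)
criterion = nn.NLLLoss()              # negative log-likelihood loss
```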
S3: train the neural network with the BPTT-RNN (back-propagation through time for recurrent neural networks) method. Adadelta (adaptive learning-rate adjustment) is first used to optimize training until the BPC on the test set is below 1.45; training then continues with the SGD (stochastic gradient descent) optimization method using a learning rate of 0.0001 and a momentum of 0.9. If the test result does not improve during training (the BPC on the test set fails to decrease twice in succession), the learning rate is halved, until the model is trained to a BPC of 1.33 (the theoretical BPC of the character prediction model is 1.2). Because this pre-trained model will be applied to sentiment analysis and fine-tuned afterwards, training does not need to be continued further in this pre-training stage.
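A hedged sketch of this two-stage schedule; train_one_epoch and evaluate_bpc are caller-supplied placeholders (assumptions, not from the patent), while the thresholds 1.45 and 1.33, the SGD hyperparameters and the halving rule follow the text.

```python
import torch.optim as optim

def train_schedule(model, train_one_epoch, evaluate_bpc, target_bpc=1.33):
    # Stage 1: Adadelta until the test-set BPC drops below 1.45 (fast convergence).
    optimizer = optim.Adadelta(model.parameters())
    while evaluate_bpc(model) >= 1.45:
        train_one_epoch(model, optimizer)

    # Stage 2: SGD with lr = 0.0001 and momentum = 0.9; halve the learning rate
    # whenever the test-set BPC fails to decrease twice in succession.
    optimizer = optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    best_bpc, no_improve = float('inf'), 0
    while best_bpc > target_bpc:
        train_one_epoch(model, optimizer)
        bpc = evaluate_bpc(model)
        no_improve = no_improve + 1 if bpc >= best_bpc else 0
        best_bpc = min(best_bpc, bpc)
        if no_improve >= 2:
            for group in optimizer.param_groups:
                group['lr'] /= 2
            no_improve = 0
```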
Specifically, in the data-preprocessing process, each text read from the training data set is used as the data input, and the same text with its time series shifted forward by one step is used as the target text; character vectors are extracted by pre-training in the character-prediction manner, and the output is finally fed to the SoftMax layer for character-probability prediction.
Specifically, during model training, the present embodiment processes batches of 64 samples, with 64-character sequences as the training time steps; the order of the samples is reshuffled at every training epoch. When the parameters are updated, the gradient values are clipped to [-0.3, 0.3] to reduce the probability of gradient explosion.
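A minimal sketch of one parameter update with this gradient clipping, assuming PyTorch; the function name and arguments are illustrative, while the clipping range [-0.3, 0.3] follows the text.

```python
import torch.nn as nn

def training_step(model, optimizer, criterion, inputs, targets):
    """One update: forward, loss, backward, clip gradients to [-0.3, 0.3], step."""
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    nn.utils.clip_grad_value_(model.parameters(), 0.3)  # clip each gradient value
    optimizer.step()
    return loss.item()
```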
Specifically, in step S2 the regression-layer function is the SoftMax function, SoftMax(z)_i = exp(z_i) / Σ_j exp(z_j), and the loss function is the negative log-likelihood, NLL = -Σ_t log p(y_t), where p(y_t) is the predicted probability of the target character at time step t.
Specifically, in step S2 the local sensing recurrent neural network is constructed by the following formulas:
a_t = Tanh(W_a [x_t, a_{t-1}, b_{t-1}] + b_a),
f1_t = Sigmoid(W_f1 [a_t, b_{t-1}] + b_f1),
i1_t = Sigmoid(W_i1 [a_t, b_{t-1}] + b_i1),
b_t = i1_t ⊙ W_ba a_t + f1_t ⊙ W_bb b_{t-1} + b_b,
f2_t = Sigmoid(W_f2 [x_t, b_t, h_{t-1}] + b_f2),
i2_t = Sigmoid(W_i2 [x_t, b_t, h_{t-1}] + b_i2),
h_t = f2_t ⊙ W_hh h_{t-1} + i2_t ⊙ Tanh(W_h [x_t, b_t] + b_h);
where t denotes each time step of data processing, x is the raw data, f1 is the forget gate from layer a to layer b, i1 is the input gate from layer a to layer b, f2 is the forget gate from layer b to layer h, and i2 is the input gate from layer b to layer h;
The input data of the first layer a has three sources: the raw input data x, the output of a at the previous moment and the output of b at the previous moment. The input of f1 is the current output of layer a and the output of layer b at the previous time step, and the input of i1 is identical to that of f1. The input of the second layer b is i1 and the b of the previous moment. The input data of f2 are the raw data x, the current output of layer b and the output of layer h at the previous moment; the input of i2 is identical to that of f2. The input of the third layer h is the output of h at the previous moment, the raw data x and the output of layer b.
To reduce over-fitting, Dropout layers are added between the three hidden layers of the local sensing recurrent neural network; the formulas are as follows:
a_t = Tanh(W_a D([x_t, a_{t-1}, b_{t-1}]) + b_a),
f1_t = Sigmoid(W_f1 D([a_t, b_{t-1}]) + b_f1),
i1_t = Sigmoid(W_i1 D([a_t, b_{t-1}]) + b_i1),
b_t = i1_t ⊙ W_ba D(a_t) + f1_t ⊙ W_bb D(b_{t-1}) + b_b,
f2_t = Sigmoid(W_f2 D([x_t, b_t, h_{t-1}]) + b_f2),
i2_t = Sigmoid(W_i2 D([x_t, b_t, h_{t-1}]) + b_i2),
h_t = f2_t ⊙ W_hh D(h_{t-1}) + i2_t ⊙ Tanh(W_h D([x_t, b_t]) + b_h);
Specifically, in the above formulas the operator D denotes applying the Dropout operation to its argument, and ⊙ is the element-wise multiplication operator.
Specifically, in step S2 a Dropout layer with probability 0.25 is connected at the front end of the LA-RNN layer. The benefit of doing so is that some neurons and their connections are randomly closed during neural network training, so that numerous (exponentially many) small sub-networks composed of subsets of the neurons are trained simultaneously, achieving a model-averaging effect. This prevents certain features from taking effect only under a fixed combination and deliberately lets the network learn common, general regularities (rather than the peculiarities of particular training samples), which improves the robustness of the trained model.
Specifically, in step S3 BPC (bits per character) is calculated as BPC = -(1/T) Σ_{t=1}^{T} log2 p(x_t | x_{<t}), i.e. the average negative log-likelihood of the test characters expressed in bits (the NLL in nats divided by ln 2).
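A short sketch of how this evaluation could be computed from log-probability outputs; the helper name and the use of PyTorch are assumptions.

```python
import math
import torch.nn.functional as F

def bpc(log_probs, targets):
    """Bits per character: mean NLL in nats over the test characters, divided by ln 2."""
    nll_nats = F.nll_loss(log_probs, targets, reduction='mean')
    return nll_nats.item() / math.log(2)
```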
The result of the model of the present invention on the test set is 1.41 BPC, which is better than the result of Bielik et al. in 2016 (1.53 BPC); moreover, the model proposed by the present invention has only 1.1M parameters and trains faster.
Although embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that various changes, modifications, replacements and variations can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. A character-level language model prediction method based on a local sensing recurrent neural network, characterized by comprising the following steps:
Step A, data preprocessing: the PTB data are divided into training, validation and test data sets; all characters contained in the three data sets are sorted by ASCII code; the characters of each data set are then shifted forward by one step and represented by indices to generate the target set;
Step B, neural network construction: the local sensing recurrent neural network comprises, in sequence, non-linearly connected a hidden layers, b hidden layers and h hidden layers; the local sensing recurrent neural network is followed in sequence by a fully connected layer with 102 neurons, a ReLU layer, a fully connected layer with 102 neurons, a ReLU layer, a fully connected layer with 51 neurons and a SoftMax regression layer;
Step C, neural network training: Adadelta is first used to optimize training until the BPC on the test set is below 1.45; training then continues with the SGD optimization method using a learning rate of 0.0001 and a momentum of 0.9; if the BPC on the test set fails to decrease twice in succession during training, the learning rate is halved;
The value of each node is computed by forward propagation of the training data, and the loss function is computed by comparison with the target (expected) value; the error is then back-propagated, and the output error of each layer of neurons is computed layer by layer starting from the output layer; the neural network is trained with the method of step C to obtain the final model training result; the weights of each layer are adjusted according to the error gradient-descent method, so that the final output of the modified network approaches the expected value.
2. The character-level language model prediction method based on a local sensing recurrent neural network according to claim 1, characterized in that the representation by indices described in step A is performed by a OneHot layer that represents the data as 51-dimensional character vectors, and the OneHot layer is connected to the front end of the local sensing recurrent neural network through the fully connected layer with 51 neurons.
3. The character-level language model prediction method based on a local sensing recurrent neural network according to claim 1, characterized in that the local sensing recurrent neural network described in step B is constructed by the following formulas:
a_t = Tanh(W_a [x_t, a_{t-1}, b_{t-1}] + b_a),
f1_t = Sigmoid(W_f1 [a_t, b_{t-1}] + b_f1),
i1_t = Sigmoid(W_i1 [a_t, b_{t-1}] + b_i1),
b_t = i1_t ⊙ W_ba a_t + f1_t ⊙ W_bb b_{t-1} + b_b,
f2_t = Sigmoid(W_f2 [x_t, b_t, h_{t-1}] + b_f2),
i2_t = Sigmoid(W_i2 [x_t, b_t, h_{t-1}] + b_i2),
h_t = f2_t ⊙ W_hh h_{t-1} + i2_t ⊙ Tanh(W_h [x_t, b_t] + b_h).
4. The character-level language model prediction method based on a local sensing recurrent neural network according to claim 1, characterized in that Dropout layers are added between the three hidden layers of the local sensing recurrent neural network described in step B to reduce over-fitting, with the following formulas:
a_t = Tanh(W_a D([x_t, a_{t-1}, b_{t-1}]) + b_a),
f1_t = Sigmoid(W_f1 D([a_t, b_{t-1}]) + b_f1),
i1_t = Sigmoid(W_i1 D([a_t, b_{t-1}]) + b_i1),
b_t = i1_t ⊙ W_ba D(a_t) + f1_t ⊙ W_bb D(b_{t-1}) + b_b,
f2_t = Sigmoid(W_f2 D([x_t, b_t, h_{t-1}]) + b_f2),
i2_t = Sigmoid(W_i2 D([x_t, b_t, h_{t-1}]) + b_i2),
h_t = f2_t ⊙ W_hh D(h_{t-1}) + i2_t ⊙ Tanh(W_h D([x_t, b_t]) + b_h).
5. The character-level language model prediction method based on a local sensing recurrent neural network according to claim 4, characterized in that the drop probability of the Dropout layers is set to 0.25.
6. The character-level language model prediction method based on a local sensing recurrent neural network according to claim 1, characterized in that the numbers of neurons in the a hidden layers, b hidden layers and h hidden layers described in step B are 51, 102 and 512 respectively.
7. The character-level language model prediction method based on a local sensing recurrent neural network according to claim 1, characterized in that the loss function uses the negative log-likelihood function (NLL).
CN201810398231.0A 2018-04-28 2018-04-28 Character level language model prediction method based on local sensing recurrent neural network Pending CN108629401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810398231.0A CN108629401A (en) 2018-04-28 2018-04-28 Character level language model prediction method based on local sensing recurrent neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810398231.0A CN108629401A (en) 2018-04-28 2018-04-28 Character level language model prediction method based on local sensing recurrent neural network

Publications (1)

Publication Number Publication Date
CN108629401A true CN108629401A (en) 2018-10-09

Family

ID=63694827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810398231.0A Pending CN108629401A (en) 2018-04-28 2018-04-28 Character level language model prediction method based on local sensing recurrent neural network

Country Status (1)

Country Link
CN (1) CN108629401A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104459373A (en) * 2014-11-11 2015-03-25 广东电网有限责任公司东莞供电局 Method for calculating node voltage temporary drop magnitudes based on BP neural network
CN106022954A (en) * 2016-05-16 2016-10-12 四川大学 Multiple BP neural network load prediction method based on grey correlation degree
CN107064845A (en) * 2017-06-06 2017-08-18 深圳先进技术研究院 One-dimensional division Fourier's parallel MR imaging method based on depth convolution net

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Gang et al.: "Application of a locally-aware recurrent neural network to language models", Information Technology (《信息技术》) *
XIE Wei: Emerging Technologies and Science & Technology Intelligence (《新兴技术与科技情报》), 30 November 2017 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543888A (en) * 2019-07-16 2019-12-06 浙江工业大学 image classification method based on cluster recurrent neural network
CN110517669A (en) * 2019-07-24 2019-11-29 北京捷通华声科技股份有限公司 A kind of method, apparatus, electronic equipment and storage medium for predicting word pronunciation
CN110517669B (en) * 2019-07-24 2022-04-19 北京捷通华声科技股份有限公司 Method and device for predicting pronunciation of words, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181009