CN103729459A - Method for establishing sentiment classification model - Google Patents
- Publication number
- CN103729459A CN103729459A CN201410012464.4A CN201410012464A CN103729459A CN 103729459 A CN103729459 A CN 103729459A CN 201410012464 A CN201410012464 A CN 201410012464A CN 103729459 A CN103729459 A CN 103729459A
- Authority
- CN
- China
- Prior art keywords
- layer
- training
- input
- network
- coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a sentiment classification method based on the deep belief network, a probabilistic generative model from deep learning. According to the technical scheme of the method, several restricted Boltzmann machine layers are stacked, that is, the output of one layer is used as the input of the next layer. In this way, the input information can be represented hierarchically and abstracted layer by layer. A multilayer perceptron containing several hidden layers is the basic learning structure of the method. More abstract higher layers are formed by combining the features of lower layers and are used to represent attribute categories or features, so that a distributed feature representation of the data can be discovered. The method is a form of unsupervised learning, and the main model used is the deep belief network. The method enables a machine to abstract features more effectively and thereby improves the accuracy of sentiment classification.
Description
Technical field
The present application relates to the field of machine learning for information processing, and in particular to a method of building a specially designed probabilistic generative model.
Background technology
With the rise of the current Internet era, deep learning has been hailed on the front page of The New York Times as a revolutionary new technology for artificial intelligence. There is good reason to study deep learning further and in depth: as a family of sophisticated machine-learning algorithms, it has far surpassed previous techniques in the accuracy of audio and image recognition. But there is also sufficient reason to question this view. It is reported that "deep learning lets machines perform human activities, such as seeing, listening, and thinking, makes pattern recognition possible, and has advanced artificial intelligence technology." Deep learning moves us toward the era of truly intelligent machines, but it is only a small step. Combined with immediate practical applications, deep learning is nevertheless very important work.
Deep learning is a new field in machine learning research. Its motivation is to build neural networks that simulate the analytic learning of the human brain; it imitates the mechanisms of the human brain to interpret data such as images, sound, and text. Deep learning is a form of unsupervised learning, and its concept originates from research on artificial neural networks. A multilayer perceptron containing many hidden layers is precisely a deep learning structure. Deep learning forms more abstract high-level representations of attribute categories or features by combining low-level features, in order to discover distributed feature representations of data. The concept of deep learning was proposed by Hinton et al. in 2006.
Deep learning is rooted in traditional "neural networks", which can be traced back to the late 1950s. At that time, Frank Rosenblatt tried to build a kind of mechanical brain, the perceptron, a machine that could "perceive, recognize, remember, and respond like the human mind". Within certain limits this system could recognize some basic shapes, such as triangles and squares, and people placed high hopes on its potential.
But the experiments ultimately ended in failure. Marvin Minsky and his collaborator Seymour Papert pointed out in a book that the original system Rosenblatt designed was very limited and was, quite literally, blind to some simple logical functions such as exclusive-or. As is well known, the glamour of "neural networks" soon faded.
In the mid-1980s, however, another of Rosenblatt's ideas reappeared when Professor Geoffrey Hinton of Carnegie Mellon University helped to build more complex virtual neural networks that could evade some of the difficulties Minsky had pointed out. Hinton introduced the concept of the "hidden layer"; hidden-layer neurons allow a new generation of networks to learn more complex functions (such as the XOR function that the original perceptron could not handle). But the new models also had serious problems: training times were long, learning was slow and inefficient, and neural networks fell out of favor again.
But Hinton persisted, and in 2006 he made a significant improvement and proposed deep learning, a technique that is now applied by Google, Microsoft, and elsewhere. A typical setting is this: a computer faces a large data set and needs to classify the data, just as a child sorts toys without concrete instructions. The child might sort by color, shape, function, or some other aspect. Machine learning practitioners attempt to do the same, for example learning from millions of handwritten characters at scale, comparing the handwriting samples with one another and "clustering" them on the basis of similarity. The important innovation of deep learning is to build the model and learn step by step: first attempt to decide low-level categories, and then try to learn higher-level categories.
Deep learning is good at this class of problems, known as unsupervised learning. In some cases its performance far exceeds past techniques; for example, it can learn to recognize the syllables of a new language better than earlier systems. But it is still not good enough: when the set of possible categories is very large, as in object recognition or classification, it struggles. The system widely used by Google can still recognize less than one sixth of images outside its training set, and when an element in the image is rotated or moved, the results it gives can be even worse.
In fact, deep learning is only one part of the huge challenge of building intelligent machines. Such techniques lack methods for representing causality and face difficulty acquiring abstract concepts such as "sibling relationship" or "coreference". They have no clear way to perform logical inference, and they are still a long way from integrating abstract knowledge, such as what a piece of information is about, what category it belongs to, and how it should be used.
The unsupervised greedy layer-by-layer training algorithm proposed on the basis of deep belief networks brought hope for solving the difficult optimization problems associated with deep structures, and multilayer autoencoder deep structures were proposed subsequently. In addition, the convolutional neural network is the first truly multilayer structured learning algorithm; it uses spatial relative relationships to reduce the number of parameters and improve training performance.
Summary of the invention
In view of this, the object of the present invention is to provide a method for building a probabilistic generative model that can improve the accuracy of feature abstraction for information extraction.
For achieving the above object, technical scheme provided by the invention is:
A method for building a general sentiment classification model, the method comprising:
Using the properties of an artificial neural network, we assume that its output is identical to its input and then train to adjust its parameters, obtaining the weights in every layer. Naturally, we thereby obtain several different representations of the input, and these representations are the features. An autoencoder is precisely a neural network that reproduces its input signal as closely as possible. To achieve this reproduction, the autoencoder must capture the most important factors that can represent the input data and find the principal components that can represent the original information.
Once the code of the first layer has been obtained, the reconstruction error is minimized; we can then trust that this code is a good representation of the original input signal, and here we assume it is equivalent to the original signal. The training pattern of the second layer is no different from that of the first: we take the code output by the first layer as the input signal of the second layer, again minimize the reconstruction error, obtain the parameters of the second layer, and obtain the code of the second layer's input. The other layers are handled the same way, layer by layer.
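The greedy layer-by-layer scheme above can be sketched in Python (a minimal illustration with tied weights and plain gradient descent; the class and function names are our own, not from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TiedAutoencoder:
    """One autoencoder layer; the decoder reuses the encoder weights (W.T)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def fit(self, X, lr=0.5, epochs=500):
        """Minimize squared reconstruction error by full-batch gradient descent."""
        n = len(X)
        for _ in range(epochs):
            h = self.encode(X)
            r = self.decode(h)
            d2 = (r - X) * r * (1 - r)         # delta at the reconstruction
            d1 = (d2 @ self.W) * h * (1 - h)   # delta back-propagated to the code
            self.W -= lr * (d2.T @ h + X.T @ d1) / n
            self.c -= lr * d2.mean(axis=0)
            self.b -= lr * d1.mean(axis=0)
        return self

def stack_train(X, layer_sizes):
    """Greedy layer-wise training: each trained layer's code becomes
    the input signal of the next layer, as described above."""
    layers, inp = [], X
    for n_hidden in layer_sizes:
        ae = TiedAutoencoder(inp.shape[1], n_hidden).fit(inp)
        layers.append(ae)
        inp = ae.encode(inp)
    return layers, inp
```

Each call to `fit` trains one layer in isolation; the loop in `stack_train` is the "other layers are handled the same way" step.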
Through the method above, a multi-layer code is obtained. The number of layers needed must be tuned according to the concrete effect of one's own experiment. Every layer yields a different representation of the original input, analogous to the human visual system being simulated here.
So far the method has only learned a feature that represents the input well; this feature can represent the original input signal to the fullest extent. Therefore, to achieve classification, we can add a classifier, such as logistic regression, on the topmost coding layer of the autoencoder, and then train it with the standard supervised training method for multilayer neural networks.
That is to say, at this point we feed the feature code of the final layer into a final classifier and fine-tune by supervised learning with labeled samples. The prototype network is "restricted" to one visible layer and one hidden layer; connections exist between the layers, but there are no connections between units within a layer. The hidden units are trained to capture the correlations of the higher-order data exhibited in the visible layer.
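The classifier head on top of the final code layer can be sketched as plain logistic regression (an illustrative sketch; the function names and learning-rate choice are our own assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_head(codes, labels, lr=0.1, epochs=2000):
    """Train a logistic-regression classifier on the top-layer feature
    codes. labels are 0/1; returns (weight vector, bias)."""
    n, d = codes.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(codes @ w + b)
        g = p - labels                  # gradient of the cross-entropy loss
        w -= lr * codes.T @ g / n
        b -= lr * g.mean()
    return w, b

def predict(codes, w, b):
    """Threshold the predicted probability at 0.5."""
    return (sigmoid(codes @ w + b) >= 0.5).astype(int)
```

In the full method this supervised step would also back-propagate into the lower layers (the fine-tuning described above); only the head training is shown here.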
Then, setting aside the top two layers, which form an associative memory, the connections of the deep belief network are guided and determined by top-down generative weights. The restricted Boltzmann machine serves as a building block from which the connection weights can easily be learned. At the very beginning, the weights of the generative model are obtained by pre-training with an unsupervised greedy layer-by-layer method; this method was proved effective by Hinton, who called it contrastive divergence.
In this training stage, a vector v is produced in the visible layer and its values are passed to the hidden layer. Conversely, the input of the visible layer is reconstructed by random selection in an attempt to reproduce the original input signal. Finally, the new visible neuron activations are passed forward to reconstruct the hidden-layer activations, yielding h. In the training process, the visible vector is first mapped to the hidden units; then the visible units are rebuilt from the hidden units; and these new visible units are mapped to the hidden units again, yielding new hidden units. Performing this step repeatedly is called Gibbs sampling. These backward and forward steps are the familiar Gibbs sampling, and the difference in correlation between the hidden-layer activations and the visible-layer input serves as the main basis for the weight update.
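The up, down, up-again loop described above is one step of contrastive divergence (CD-1); a minimal sketch for a binary RBM, with names and hyperparameters of our own choosing:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b, c, lr=0.1, rng=None):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    v0: batch of visible vectors (n, n_vis); W: (n_vis, n_hid);
    b: hidden bias; c: visible bias. Returns updated (W, b, c)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # up: visible -> hidden (sample binary hidden states)
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # down: hidden -> reconstructed visible
    pv1 = sigmoid(h0 @ W.T + c)
    # up again: reconstruction -> hidden
    ph1 = sigmoid(pv1 @ W + b)
    n = len(v0)
    # the correlation difference <v h>_data - <v h>_recon drives the update
    W = W + lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    b = b + lr * (ph0 - ph1).mean(axis=0)
    c = c + lr * (v0 - pv1).mean(axis=0)
    return W, b, c
```

Iterating `cd1_step` over the data plays the role of the repeated Gibbs steps; a single up-down-up pass per update is what makes the training fast.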
The training time can be reduced significantly, because only a single step is needed to approximate maximum-likelihood learning. Every layer added to the network improves the log-probability of the training data, which we can understand as coming closer and closer to the true representation.
Description of the drawings
Fig. 1 is a schematic diagram of a deep belief network (Deep Belief Network);
Fig. 2 is a schematic diagram of a restricted Boltzmann machine (Restricted Boltzmann Machine);
Fig. 3 is a flow chart of the method according to an embodiment of the present invention.
Embodiment
To make the object, technical scheme, and advantages of the present invention clearer, the scheme of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Given a set of review corpora X = {x_1, x_2, ..., x_n}, the task is to perform sentiment classification on these documents, giving each review its attribute (positive or negative). Each review is represented as x_i = {x_i1, x_i2, ..., x_iD}. The target attribute is likewise represented as a vector, y_i ∈ {1, -1}, where 1 denotes positive and -1 denotes negative. Let the set of target-value vectors be Y = {y_1, y_2, ..., y_n}; the object is to find the mapping function from X to Y.
We use a deep belief network as the depth model to pre-train on the reviews. The deep belief network is a multilayer model with one input layer, the visible layer, and multiple hidden layers, as in Fig. 1. We set the following parameters:
v = h_0: input layer;
h_i (i = 1, 2, ..., K-1): the i-th hidden layer;
o = h_K: output layer;
w_i (i = 1, ..., K): the weights between layers h_{i-1} and h_i;
b_i (i = 1, ..., K): the biases between layers h_i and h_{i+1};
c_i (i = 1, ..., K): the biases between layers h_i and h_{i-1}.
The activation functions between two adjacent layers are:
p(h_{i-1,s} = 1 | h_i) = σ(b_{i,s} + Σ_j w_{i,j} h_{i,j})  (1)
p(h_{i,t} = 1 | h_{i-1}) = σ(c_{i,t} + Σ_j w_{i,j} h_{i-1,j})  (2)
where σ(x) is:
σ(x) = 1/(1 + e^{-x})  (3)
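Equations (1) to (3) map directly to code (a minimal sketch; the array shapes and function names are our own choices):

```python
import numpy as np

def sigmoid(x):
    """Equation (3): the logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def p_lower_given_upper(h, W, b):
    """Equation (1): activation probability of each unit in layer h_{i-1}
    given the layer above it. h: (n_hid,), W: (n_vis, n_hid), b: (n_vis,)."""
    return sigmoid(b + W @ h)

def p_upper_given_lower(v, W, c):
    """Equation (2): activation probability of each unit in layer h_i
    given the layer below it. v: (n_vis,), c: (n_hid,)."""
    return sigmoid(c + W.T @ v)
```

The same weight matrix W appears in both directions, which is what makes each adjacent pair of layers behave as one restricted Boltzmann machine.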
Although the gradient descent algorithm can be used to tune the weights of the network, it works well only when the initial weights are already relatively close to a good solution. Here a suitable initial network is obtained through a multilayer pre-training pattern. From bottom to top, the lower of two adjacent layers is regarded as the visible layer and the higher one as the hidden layer, so that every pair of adjacent layers can be regarded as a restricted Boltzmann machine. The energy function here is:
E(v, h) = -Σ_{s,t} v_s w_{s,t} h_t - Σ_s b_s v_s - Σ_t c_t h_t  (4)
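The energy function (4) is a direct sum over units and connections; a minimal sketch:

```python
import numpy as np

def rbm_energy(v, h, W, b, c):
    """Energy E(v, h) of a binary RBM, equation (4):
    E = -sum_{s,t} v_s W_{s,t} h_t - sum_s b_s v_s - sum_t c_t h_t.
    v: (n_vis,), h: (n_hid,), W: (n_vis, n_hid)."""
    return -(v @ W @ h) - b @ v - c @ h
```

Lower energy means higher unnormalized probability for the configuration (v, h), which is what the pre-training below maximizes over the training set.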
The target of pre-training is to maximize the probability of generating the training set. The probability the network assigns to a training vector can be computed by the formula:
p(v) = (1/Z) Σ_h e^{-E(v,h)},  Z = Σ_{v,h} e^{-E(v,h)}  (5)
and the gradient of its log-probability is given by the formula:
∂log p(v)/∂w_{s,t} = <v_s h_t>_0 - <v_s h_t>_∞  (6)
where <v_s h_t>_0 and <v_s h_t>_∞ can be obtained by alternating Gibbs sampling. The weight update can then be calculated by the following formula:
Δw_{s,t} = ε(<v_s h_t>_0 - <v_s h_t>_∞)  (7)
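For an RBM small enough to enumerate, the correlation-difference form of the gradient in (6) can be computed exactly rather than by sampling (an illustrative sketch; the helper names and tiny sizes are our own):

```python
import itertools
import numpy as np

def energy(v, h, W, b, c):
    """RBM energy, equation (4)."""
    return -(v @ W @ h) - b @ v - c @ h

def exact_logp_grad(v0, W, b, c):
    """d log p(v0) / dW for a tiny binary RBM, computed as the
    correlation difference <v_s h_t>_0 - <v_s h_t>_inf of equation (6),
    with both expectations evaluated by exhaustive enumeration."""
    n_vis, n_hid = W.shape
    vs = [np.array(s, float) for s in itertools.product([0, 1], repeat=n_vis)]
    hs = [np.array(s, float) for s in itertools.product([0, 1], repeat=n_hid)]
    # unnormalized weight of every joint configuration (v, h)
    joint = [(v, h, np.exp(-energy(v, h, W, b, c))) for v in vs for h in hs]
    Z = sum(w for _, _, w in joint)
    model_corr = sum(w * np.outer(v, h) for v, h, w in joint) / Z
    # expectation with v clamped to the data vector v0
    data = [(h, np.exp(-energy(v0, h, W, b, c))) for h in hs]
    Zv = sum(w for _, w in data)
    data_corr = sum(w * np.outer(v0, h) for h, w in data) / Zv
    return data_corr - model_corr
```

Gibbs sampling, as in (7), is only needed because this enumeration becomes intractable for realistic layer sizes; the sketch shows what the sampled estimate is approximating.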
To make the deep belief network better solve the sentiment classification problem, the weights must be optimized again discriminatively, so as to minimize the classification error on the training set. Here θ = (w_1, ..., w_K, c_1, ..., c_K), and L(Y, Z, θ) is the loss function; for a given θ, Y is the actual target value of the training data, Z is the predicted target value, and the training-set size is assumed to be N. Generalized with the L2-norm, the objective takes the form:
f(θ) = (1/N) Σ_{i=1}^{N} L(y_i, z_i, θ) + λ‖θ‖₂²  (8)
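An L2-regularized loss of this shape can be written down directly (a sketch; the squared-error per-example loss and the λ value are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def l2_regularized_loss(y_true, y_pred, theta, lam=0.01):
    """Mean per-example loss over the N training examples plus an L2
    penalty on all parameter arrays in theta, as in discriminative
    fine-tuning. Squared error stands in for L(y, z, theta) here."""
    data_term = np.mean((y_true - y_pred) ** 2)
    penalty = lam * sum(np.sum(w ** 2) for w in theta)
    return data_term + penalty
```

The penalty term keeps the fine-tuned weights from drifting too far from the pre-trained initialization.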
The above are only preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (8)
1. A method of building a learning model, characterized in that the method comprises:
The method uses the properties of an artificial neural network, which is itself a system with a hierarchical structure. Given a neural network, we assume that its output is identical to its input and then train to adjust its parameters, obtaining the weights in every layer. Naturally, we thereby obtain several different representations of the input (each layer represents one representation), and these representations are the features. An autoencoder is precisely a neural network that reproduces its input signal as closely as possible. To achieve this reproduction, the autoencoder must capture the most important factors that can represent the input data and find the principal components that can represent the original information.
2. The method of building a learning model according to claim 1, characterized in that the method of obtaining the code from the input is:
In an ordinary neural network, the input samples are labeled: we have inputs and target values, and we change the parameters of the preceding layers according to the difference between the current output and the target value, until convergence. But now we only have unlabeled data. So we feed the input into an encoder, which yields a code, namely a representation of the input. To know whether this code represents the input, we add a decoder, which then outputs a reconstruction. If this output is close to the original input signal, we believe the code is reliable. So by adjusting the parameters of the encoder we minimize the reconstruction error, and at that point we have obtained the first representation of the input signal, namely the code. Because the data are unlabeled, the source of the error is the comparison between the direct reconstruction and the original input.
3. the method for structure learning model according to claim 2, is characterized in that, by scrambler, produces feature, successively training;
According to the coding that just can obtain ground floor above, the minimum let us of the error of reconstruct believes that this coding is exactly the good representation of original input signal, and here we suppose that it and original signal are the same.The training patterns of the second layer and ground floor has not just had difference, we are the input signal as the second layer by the coding of ground floor output, and same minimum reconstructed will obtain the parameter of the second layer, and obtain the coding of second layer input, namely second of former input message expression.Other layers make to use the same method and carry out successively.
4. the method for structure network according to claim 3, is characterized in that, disposes the fine setting of supervision;
Through method above, just can obtain multi-layer coding.The number of layers that experiment needs will be according to the concrete effect debugging of oneself experiment.At present it just study obtained a feature that can well represent input, this feature can represent original input signal to the full extent.So, in order to realize classification, we just can add a sorter at the coding layer on the top of autocoder and return as Rogers is special, and then the supervised training method of the multilayer neural network by standard is gone training.
That is to say, at this time, we need to be input to last sorter by the feature coding of final layer, by there being exemplar, by supervised learning, finely tune.
5. the method for structure network according to claim 4, is characterized in that, adds dark belief network;
Dark belief network is comprised of multiple restriction Boltzmann machine layers.These networks are a visual layers and a hidden layer by " restriction ", and interlayer exists and connects, but layer in unit between there is not connection.Hidden unit is gone to catch the correlativity of the high-order data that show in visual layers by training.
First, the connection at degree of a deeply convinceing networking is instructed definite by top-down generation weights, and restriction Boltzmann machine is compared tradition and the sigmoid belief network of Depth Stratification, and it can be easy to connect the study of weights.
The most at first, by a non-supervisory greediness successively method go pre-training to obtain the weights of generation model, in this training stage, in visual layers, can produce a vector v, by it, value is delivered to hidden layer.Conversely, the input meeting of visual layers is by random selection, to attempt the original input signal of duplicate removal structure.Finally, these new visual neuronal activation unit by forward direction transmit reconstruct hidden layer activate unit, obtain h, in training process, first by visual vector-valued map to hidden unit; Then visual element is rebuild by Hidden unit; These new visual element are shone upon again to hidden unit, so just obtain new hidden unit.Carry out this step repeatedly and be called Gibbs sampling.In this patent, we adopt Gibbs sampling, and hidden layer activates correlation difference between unit and visual layers input just as the Main Basis of right value update.
Such a method can reduce the training time significantly, because only a single step is needed to approximate maximum-likelihood learning. Every layer added to the network improves the log-probability of the training data, which we can understand as coming closer and closer to the true representation.
6. The method according to claim 5, characterized in that the deep belief network is constructed;
In the top two layers, the weights are joined together, so that the output of the lower layers provides a reference or association clue to the top layer, and the top layer thereby associates it with its memory content. After pre-training, the deep belief network can be adjusted for discriminative performance by using labeled data with the BP algorithm. Here, a label set is attached to the top layer (extending the associative memory), and the classification surface of the network is obtained through bottom-up learned recognition weights. This performance is better than a network trained by the BP algorithm alone. The BP algorithm of the deep belief network only needs to perform a local search of the weight parameter space; compared with a feedforward neural network, training is faster and convergence time is shorter.
7. the method for structure according to claim 6, is characterized in that, automatic coding and add dark belief network; Bottom is arrived
In order to reach better effect, stacking autocoder can be added to dark belief network, it is the restriction Bai Ziman machine by replace the dark belief network of tradition the inside with stacking autocoder.This can be trained and be produced degree of depth multilayer neural network framework by same rule with regard to making, but it lacks the parameterized strict demand of layer.Different from DBNs, autocoder uses discrimination model, and this structure is just difficult to sampling input sample space like this, and this just makes its internal representations of the more difficult seizure of network.But noise reduction autocoder but can well be avoided this problem, and more excellent than traditional DBNs.Train the process of single noise reduction autocoder the same with the process of RBMs training generation model.
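A denoising autoencoder corrupts its input but is trained to reconstruct the clean signal (a minimal sketch with tied weights; the masking-noise fraction and other hyperparameters are illustrative choices of our own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_denoising_ae(X, n_hidden, corrupt=0.3, lr=0.5, epochs=500, seed=0):
    """Train one denoising autoencoder: the input is corrupted by masking
    noise, but the reconstruction target is the clean X.
    Returns (W, b, c) with W tied between encoder and decoder."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(d)
    for _ in range(epochs):
        # masking noise: randomly zero out a fraction of the inputs
        Xc = X * (rng.random(X.shape) >= corrupt)
        h = sigmoid(Xc @ W + b)          # encode the CORRUPTED input
        r = sigmoid(h @ W.T + c)         # decode
        d2 = (r - X) * r * (1 - r)       # error measured against the CLEAN input
        d1 = (d2 @ W) * h * (1 - h)
        W -= lr * (d2.T @ h + Xc.T @ d1) / n
        c -= lr * d2.mean(axis=0)
        b -= lr * d1.mean(axis=0)
    return W, b, c
```

Having to undo the corruption forces the code to capture structure in the data rather than copying the input, which is the advantage over a plain autoencoder claimed above.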
8. The method according to any one of claims 1 to 6, characterized in that sentiment classification is performed on text, with the text preprocessed as follows;
First the review documents are segmented, then stop words are removed, and the documents are represented in vector form and divided into two parts, a training set and a test set. The training set can be further divided into two parts, used respectively for pre-training and tuning with the above model. The pre-training is unsupervised, obtaining an initial network with a greedy layer-by-layer algorithm. Then, in the tuning process, the network parameters obtained in the previous step are adjusted by the BP algorithm.
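The preprocessing pipeline in this claim can be sketched as follows (the whitespace tokenizer, stop-word list, and split ratios are illustrative assumptions, not specified by the patent; for Chinese reviews a word-segmentation tool would replace the tokenizer):

```python
import numpy as np

STOP_WORDS = {"the", "a", "is", "and", "of"}   # illustrative stop-word list

def preprocess(reviews, labels, seed=0):
    """Segment, remove stop words, vectorize (bag of words), and split
    into pre-training / tuning / test portions."""
    # 1. segmentation (whitespace tokenization stands in for word cutting)
    docs = [[w for w in r.lower().split() if w not in STOP_WORDS]
            for r in reviews]
    # 2. bag-of-words vector representation
    vocab = sorted({w for d in docs for w in d})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        for w in d:
            X[i, index[w]] += 1
    y = np.array(labels)
    # 3. shuffle and split: 40% pre-training, 40% tuning, 20% test
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(docs))
    n1, n2 = int(0.4 * len(docs)), int(0.8 * len(docs))
    pre, tune, test = order[:n1], order[n1:n2], order[n2:]
    return X[pre], (X[tune], y[tune]), (X[test], y[test]), vocab
```

The unlabeled pre-training portion feeds the greedy layer-by-layer algorithm; the labeled tuning portion drives the BP fine-tuning step.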
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410012464.4A CN103729459A (en) | 2014-01-10 | 2014-01-10 | Method for establishing sentiment classification model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103729459A true CN103729459A (en) | 2014-04-16 |
Family
ID=50453533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410012464.4A Pending CN103729459A (en) | 2014-01-10 | 2014-01-10 | Method for establishing sentiment classification model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103729459A (en) |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123336A (en) * | 2014-05-21 | 2014-10-29 | 深圳北航新兴产业技术研究院 | Deep Boltzmann machine model and short text subject classification system and method |
CN104269169A (en) * | 2014-09-09 | 2015-01-07 | 山东师范大学 | Classifying method for aliasing audio events |
CN104572892A (en) * | 2014-12-24 | 2015-04-29 | 中国科学院自动化研究所 | Text classification method based on cyclic convolution network |
CN104636732A (en) * | 2015-02-12 | 2015-05-20 | 合肥工业大学 | Sequence deeply convinced network-based pedestrian identifying method |
CN105306883A (en) * | 2014-07-22 | 2016-02-03 | 瑞萨电子株式会社 | Image receiving device, image transmission system, and image receiving method |
CN105741832A (en) * | 2016-01-27 | 2016-07-06 | 广东外语外贸大学 | Spoken language evaluation method based on deep learning and spoken language evaluation system |
CN105809186A (en) * | 2016-02-25 | 2016-07-27 | 中国科学院声学研究所 | Emotion classification method and system |
CN106095735A (en) * | 2016-06-06 | 2016-11-09 | 北京中加国道科技有限责任公司 | A kind of method plagiarized based on deep neural network detection academic documents |
CN106095746A (en) * | 2016-06-01 | 2016-11-09 | 竹间智能科技(上海)有限公司 | Word emotion identification system and method |
CN106161209A (en) * | 2016-07-21 | 2016-11-23 | 康佳集团股份有限公司 | A kind of method for filtering spam short messages based on degree of depth self study and system |
CN106453416A (en) * | 2016-12-01 | 2017-02-22 | 广东技术师范学院 | Detection method of distributed attack intrusion based on deep belief network |
CN106557566A (en) * | 2016-11-18 | 2017-04-05 | 杭州费尔斯通科技有限公司 | A kind of text training method and device |
CN106778880A (en) * | 2016-12-23 | 2017-05-31 | 南开大学 | Microblog topic based on multi-modal depth Boltzmann machine is represented and motif discovery method |
CN107229636A (en) * | 2016-03-24 | 2017-10-03 | 腾讯科技(深圳)有限公司 | A kind of method and device of word's kinds |
CN107305574A (en) * | 2016-04-25 | 2017-10-31 | 百度在线网络技术(北京)有限公司 | Object search method and device |
WO2017206936A1 (en) * | 2016-06-02 | 2017-12-07 | 腾讯科技(深圳)有限公司 | Machine learning based network model construction method and apparatus |
CN107632258A (en) * | 2017-09-12 | 2018-01-26 | 重庆大学 | A kind of fan converter method for diagnosing faults based on wavelet transformation and DBN |
CN108038543A (en) * | 2017-10-24 | 2018-05-15 | 华南师范大学 | It is expected and anti-desired depth learning method and nerve network system |
CN108229640A (en) * | 2016-12-22 | 2018-06-29 | 深圳光启合众科技有限公司 | The method, apparatus and robot of emotion expression service |
CN108536838A (en) * | 2018-04-13 | 2018-09-14 | 重庆邮电大学 | Very big unrelated multivariate logistic regression model based on Spark is to text sentiment classification method |
CN108563624A (en) * | 2018-01-03 | 2018-09-21 | 清华大学深圳研究生院 | A kind of spatial term method based on deep learning |
CN108805036A (en) * | 2018-05-22 | 2018-11-13 | 电子科技大学 | A kind of new non-supervisory video semanteme extracting method |
CN109189919A (en) * | 2018-07-27 | 2019-01-11 | 广州市香港科大霍英东研究院 | Method, system, terminal and the storage medium of text multi-angle of view emotional semantic classification |
CN109213860A (en) * | 2018-07-26 | 2019-01-15 | 中国科学院自动化研究所 | Merge the text sentiment classification method and device of user information |
CN109308471A (en) * | 2018-09-29 | 2019-02-05 | 河海大学常州校区 | A kind of EMG Feature Extraction |
CN109323832A (en) * | 2018-09-12 | 2019-02-12 | 温州大学 | A kind of monitoring method of cold header mold impact conditions |
CN109559576A (en) * | 2018-11-16 | 2019-04-02 | 中南大学 | A kind of children companion robot and its early teaching system self-learning method |
CN109690577A (en) * | 2016-09-07 | 2019-04-26 | 皇家飞利浦有限公司 | Classified using the Semi-supervised that stack autocoder carries out |
CN109829499A (en) * | 2019-01-31 | 2019-05-31 | 中国科学院信息工程研究所 | Image, text and data fusion sensibility classification method and device based on same feature space |
CN110390013A (en) * | 2019-06-25 | 2019-10-29 | 厦门美域中央信息科技有限公司 | A kind of file classification method based on cluster with ANN fusion application |
CN110442693A (en) * | 2019-07-27 | 2019-11-12 | 中国科学院自动化研究所 | Generation method, device, server and medium are replied message based on artificial intelligence |
CN111784159A (en) * | 2020-07-01 | 2020-10-16 | 深圳市检验检疫科学研究院 | Food risk tracing information grading method and device |
CN113807527A (en) * | 2020-06-11 | 2021-12-17 | 华硕电脑股份有限公司 | Signal detection method and electronic device using same |
CN116028880A (en) * | 2023-02-07 | 2023-04-28 | 支付宝(杭州)信息技术有限公司 | Method for training behavior intention recognition model, behavior intention recognition method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6556987B1 (en) * | 2000-05-12 | 2003-04-29 | Applied Psychology Research, Ltd. | Automatic text classification system |
CN101127042A (en) * | 2007-09-21 | 2008-02-20 | 浙江大学 | Sensibility classification method based on language model |
CN103473380A (en) * | 2013-09-30 | 2013-12-25 | 南京大学 | Computer text sentiment classification method |
- 2014-01-10: CN CN201410012464.4A patent/CN103729459A/en status: Pending
Non-Patent Citations (1)
Title |
---|
ZOUXY09: "Deep Learning(深度学习)学习笔记整理系列", 《CSDN.NET-博客频道-ZOUXY09的专栏》 * |
Cited By (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123336B (en) * | 2014-05-21 | 2018-04-24 | 深圳北航天汇创业孵化器有限公司 | Depth Boltzmann machine model and short text subject classification system and method |
CN104123336A (en) * | 2014-05-21 | 2014-10-29 | 深圳北航新兴产业技术研究院 | Deep Boltzmann machine model and short text subject classification system and method |
CN105306883A (en) * | 2014-07-22 | 2016-02-03 | 瑞萨电子株式会社 | Image receiving device, image transmission system, and image receiving method |
CN105306883B (en) * | 2014-07-22 | 2020-03-03 | 瑞萨电子株式会社 | Image receiving apparatus, image transmission system, and image receiving method |
CN104269169B (en) * | 2014-09-09 | 2017-04-12 | 山东师范大学 | Classification method for overlapping audio events |
CN104269169A (en) * | 2014-09-09 | 2015-01-07 | 山东师范大学 | Classification method for overlapping audio events |
CN104572892A (en) * | 2014-12-24 | 2015-04-29 | 中国科学院自动化研究所 | Text classification method based on a recurrent convolutional network |
CN104572892B (en) * | 2014-12-24 | 2017-10-03 | 中国科学院自动化研究所 | Text classification method based on a recurrent convolutional network |
CN104636732A (en) * | 2015-02-12 | 2015-05-20 | 合肥工业大学 | Pedestrian recognition method based on a sequential deep belief network |
CN104636732B (en) * | 2015-02-12 | 2017-11-07 | 合肥工业大学 | Pedestrian recognition method based on a sequential deep belief network |
CN105741832A (en) * | 2016-01-27 | 2016-07-06 | 广东外语外贸大学 | Spoken language evaluation method based on deep learning and spoken language evaluation system |
CN105809186A (en) * | 2016-02-25 | 2016-07-27 | 中国科学院声学研究所 | Emotion classification method and system |
CN107229636B (en) * | 2016-03-24 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Word classification method and device |
CN107229636A (en) * | 2016-03-24 | 2017-10-03 | 腾讯科技(深圳)有限公司 | Word classification method and device |
CN107305574A (en) * | 2016-04-25 | 2017-10-31 | 百度在线网络技术(北京)有限公司 | Object search method and device |
CN106095746A (en) * | 2016-06-01 | 2016-11-09 | 竹间智能科技(上海)有限公司 | Word emotion identification system and method |
WO2017206936A1 (en) * | 2016-06-02 | 2017-12-07 | 腾讯科技(深圳)有限公司 | Machine learning based network model construction method and apparatus |
US11741361B2 (en) | 2016-06-02 | 2023-08-29 | Tencent Technology (Shenzhen) Company Limited | Machine learning-based network model building method and apparatus |
CN106095735A (en) * | 2016-06-06 | 2016-11-09 | 北京中加国道科技有限责任公司 | Method for detecting plagiarism in academic documents based on a deep neural network |
CN106161209A (en) * | 2016-07-21 | 2016-11-23 | 康佳集团股份有限公司 | Spam SMS filtering method and system based on deep self-learning |
CN106161209B (en) * | 2016-07-21 | 2019-09-20 | 康佳集团股份有限公司 | Spam SMS filtering method and system based on deep self-learning |
CN109690577A (en) * | 2016-09-07 | 2019-04-26 | 皇家飞利浦有限公司 | Semi-supervised classification using a stacked autoencoder |
CN106557566A (en) * | 2016-11-18 | 2017-04-05 | 杭州费尔斯通科技有限公司 | A kind of text training method and device |
CN106557566B (en) * | 2016-11-18 | 2019-06-07 | 杭州费尔斯通科技有限公司 | A kind of text training method and device |
CN106453416A (en) * | 2016-12-01 | 2017-02-22 | 广东技术师范学院 | Detection method of distributed attack intrusion based on deep belief network |
CN108229640A (en) * | 2016-12-22 | 2018-06-29 | 深圳光启合众科技有限公司 | The method, apparatus and robot of emotion expression service |
CN108229640B (en) * | 2016-12-22 | 2021-08-20 | 山西翼天下智能科技有限公司 | Emotion expression method and device and robot |
CN106778880B (en) * | 2016-12-23 | 2020-04-07 | 南开大学 | Microblog topic representation and topic discovery method based on multi-mode deep Boltzmann machine |
CN106778880A (en) * | 2016-12-23 | 2017-05-31 | 南开大学 | Microblog topic representation and topic discovery method based on a multi-modal deep Boltzmann machine |
CN107632258A (en) * | 2017-09-12 | 2018-01-26 | 重庆大学 | Wind turbine converter fault diagnosis method based on wavelet transform and DBN |
CN108038543A (en) * | 2017-10-24 | 2018-05-15 | 华南师范大学 | It is expected and anti-desired depth learning method and nerve network system |
CN108038543B (en) * | 2017-10-24 | 2021-01-22 | 华南师范大学 | Expectation and anti-expectation deep learning method and neural network system |
CN108563624A (en) * | 2018-01-03 | 2018-09-21 | 清华大学深圳研究生院 | Natural language generation method based on deep learning |
CN108536838B (en) * | 2018-04-13 | 2021-10-19 | 重庆邮电大学 | Method for classifying text emotion through maximum irrelevant multiple logistic regression model based on Spark |
CN108536838A (en) * | 2018-04-13 | 2018-09-14 | 重庆邮电大学 | Text sentiment classification method using a Spark-based maximum irrelevant multiple logistic regression model |
CN108805036B (en) * | 2018-05-22 | 2022-11-22 | 电子科技大学 | Unsupervised video semantic extraction method |
CN108805036A (en) * | 2018-05-22 | 2018-11-13 | 电子科技大学 | Unsupervised video semantic extraction method |
CN109213860A (en) * | 2018-07-26 | 2019-01-15 | 中国科学院自动化研究所 | Text sentiment classification method and device fusing user information |
CN109189919A (en) * | 2018-07-27 | 2019-01-11 | 广州市香港科大霍英东研究院 | Method, system, terminal and the storage medium of text multi-angle of view emotional semantic classification |
CN109189919B (en) * | 2018-07-27 | 2020-11-13 | 广州市香港科大霍英东研究院 | Method, system, terminal and storage medium for text multi-view emotion classification |
CN109323832A (en) * | 2018-09-12 | 2019-02-12 | 温州大学 | A kind of monitoring method of cold header mold impact conditions |
CN109308471A (en) * | 2018-09-29 | 2019-02-05 | 河海大学常州校区 | EMG feature extraction method |
CN109559576A (en) * | 2018-11-16 | 2019-04-02 | 中南大学 | Child accompanying robot and self-learning method of its early education system |
CN109559576B (en) * | 2018-11-16 | 2020-07-28 | 中南大学 | Child accompanying learning robot and early education system self-learning method thereof |
CN109829499A (en) * | 2019-01-31 | 2019-05-31 | 中国科学院信息工程研究所 | Image, text and data fusion sensibility classification method and device based on same feature space |
CN109829499B (en) * | 2019-01-31 | 2020-10-27 | 中国科学院信息工程研究所 | Image-text data fusion emotion classification method and device based on same feature space |
CN110390013A (en) * | 2019-06-25 | 2019-10-29 | 厦门美域中央信息科技有限公司 | Text classification method based on the fused application of clustering and ANN |
CN110442693A (en) * | 2019-07-27 | 2019-11-12 | 中国科学院自动化研究所 | Reply message generation method, device, server and medium based on artificial intelligence |
CN110442693B (en) * | 2019-07-27 | 2022-02-22 | 中国科学院自动化研究所 | Reply message generation method, device, server and medium based on artificial intelligence |
CN113807527A (en) * | 2020-06-11 | 2021-12-17 | 华硕电脑股份有限公司 | Signal detection method and electronic device using same |
CN111784159A (en) * | 2020-07-01 | 2020-10-16 | 深圳市检验检疫科学研究院 | Food risk tracing information grading method and device |
CN111784159B (en) * | 2020-07-01 | 2024-02-02 | 深圳市检验检疫科学研究院 | Food risk traceability information grading method and device |
CN116028880A (en) * | 2023-02-07 | 2023-04-28 | 支付宝(杭州)信息技术有限公司 | Method for training behavior intention recognition model, behavior intention recognition method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103729459A (en) | Method for establishing sentiment classification model | |
CN107992904B (en) | Forestry ecological environment man-machine interaction method based on multi-source information fusion | |
CN112084331A (en) | Text processing method, text processing device, model training method, model training device, computer equipment and storage medium | |
CN107015963A (en) | Natural language semantic parsing system and method based on deep neural network | |
CN111858989A (en) | Image classification method of pulse convolution neural network based on attention mechanism | |
CN111400452B (en) | Text information classification processing method, electronic device and computer readable storage medium | |
Ma et al. | Multi-feature fusion deep networks | |
CN113407660B (en) | Unstructured text event extraction method | |
CN110188195A (en) | Text intent recognition method, device and equipment based on deep learning | |
CN112949647A (en) | Three-dimensional scene description method and device, electronic equipment and storage medium | |
CN104850837A (en) | Handwritten character recognition method | |
CN106959946A (en) | A kind of text semantic feature generation optimization method based on deep learning | |
Pal et al. | Deep learning for network analysis: problems, approaches and challenges | |
CN103136540A (en) | Behavior recognition method based on concealed structure reasoning | |
Wang et al. | A deep clustering via automatic feature embedded learning for human activity recognition | |
Wang et al. | Recurrent spiking neural network with dynamic presynaptic currents based on backpropagation | |
Das et al. | Determining attention mechanism for visual sentiment analysis of an image using svm classifier in deep learning based architecture | |
CN112148997A (en) | Multi-modal confrontation model training method and device for disaster event detection | |
Li | A discriminative learning convolutional neural network for facial expression recognition | |
CN114863572B (en) | Myoelectric gesture recognition method of multi-channel heterogeneous sensor | |
Yu | Analysis of task degree of English learning based on deep learning framework and image target recognition | |
Lee et al. | Ensemble of binary tree structured deep convolutional network for image classification | |
CN109859062A (en) | Community discovery analysis method combining a deep sparse autoencoder and the quasi-Newton method | |
Liu et al. | Li Zhang | |
Phan et al. | Little flower at memotion 2.0 2022: Ensemble of multi-modal model using attention mechanism in memotion analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 2014-04-16 |