CN110851593A - Complex value word vector construction method based on position and semantics - Google Patents

Complex value word vector construction method based on position and semantics

Info

Publication number
CN110851593A
CN110851593A (application CN201910898057.0A)
Authority
CN
China
Prior art keywords
word
neural network
vector
complex
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910898057.0A
Other languages
Chinese (zh)
Other versions
CN110851593B (en)
Inventor
赵东浩 (Zhao Donghao)
张鹏 (Zhang Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201910898057.0A
Publication of CN110851593A
Application granted
Publication of CN110851593B
Legal status: Active
Anticipated expiration

Classifications

    • G06F16/35 Information retrieval of unstructured textual data; clustering; classification
    • G06F16/383 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/387 Retrieval using geographical or spatial information, e.g. location
    • G06F18/214 Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/044 Neural networks; recurrent networks, e.g. Hopfield networks
    • G06N3/045 Neural networks; combinations of networks
    • G06N3/084 Learning methods; backpropagation, e.g. using gradient descent

Abstract

The invention discloses a complex-valued word vector construction method based on position and semantics, comprising the following steps: collect a text classification corpus and divide it into a training set, a validation set, and a test set; preprocess the texts in the corpus (removing stop words); construct sentence representations from relative position information and global semantic information; input the word vectors of the training corpus into a complex-valued neural network and train a semantic classification model; input the validation-set word vectors into the complex-valued neural network model to compute the prediction probability of each sample; and test the model selected on the validation set against the test set. The invention overcomes the current relative shortage of text classification corpora, extracts the feature information of a text (its position information) more fully, fuses the position information and the global semantic information of the text, and applies the complex-valued word vectors in the complex-valued neural network, giving the neural network models stronger discrimination capability.

Description

Complex value word vector construction method based on position and semantics
Technical Field
The invention relates to the technical field of text classification, in particular to a complex-valued word vector construction method based on position and semantics.
Background
With the rapid development of science and technology in recent years, particularly of the internet and social networks, the internet has become rich in information of all kinds, including the comments and opinions that users post on social platforms, and it has become one of the main sources of information in daily life. People can obtain massive amounts of data through the internet, but how to manage such data reasonably and effectively has become a pressing concern. A very common way of managing large amounts of information is classification, so text classification clearly carries great social value. The invention mainly studies the emotion or category of sentences and passages.
The classification task plays an important role in natural language processing. It can be roughly divided into binary classification (e.g., spam detection) and multi-class classification (e.g., the emotional state of a text), and its development has drawn wide attention from industry and academia. The invention not only analyzes the emotion of a sentence but also judges its category, a fine-grained task in the field of text classification. For example, consider the sentence "It seems almost impossible: it makes the serial killer Jeffrey Dahmer boring." The emotion of this sentence is negative, determined mainly by the word "boring".
A neural network classification method based on complex-valued vector representations of position and semantics aims to distinguish the emotional polarity or category of a given sentence. Industry and academia are well aware of the importance of the emotional information of the words in a sentence and have tried to capture it better by designing a series of classification models. However, current methods generally ignore the importance of the position information of the words in a sentence; it is unclear whether a word's contribution lies mainly in its semantic information or its position information, and a model is expected to recognize that the meaning of a sentence changes when the word order changes even though the words themselves do not. Therefore, by focusing on the relation between word order and semantics within a sentence, the invention constructs complex-valued word vectors that encode both the position and the semantic information of words, and extracts classification information through a complex-valued neural network model.
Complex-valued neural network models have already been used by researchers to model some natural language processing tasks, with considerable success. However, current methods only use complex-valued vectors as such, and do not exploit complex-valued word vectors to mine the relative position information of words in sentences.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a method for classifying the emotion or category of a text with a neural network model based on complex-valued word vectors. A corpus is built from a large amount of text; word vectors and relative position information of the texts are constructed separately; a complex-valued neural network model is trained as the text classification model with the Adam optimizer, using backpropagation and stochastic gradient descent; the prediction results of the optimal model on the test set are then obtained, finally yielding a more accurate classification result.
The purpose of the invention is realized by the following technical scheme:
a complex value word vector construction method based on position and semantics comprises the following steps:
(1) adopting the jieba word segmentation tool to segment the chapters and sentences, so as to construct a multi-modal classified corpus;
(2) from the multi-modal classified corpus constructed in step (1), randomly selecting 80% of the N samples as the training set, 10% of the N samples as the validation set, and the remaining 10% of the N samples as the test set, and preprocessing the training, validation, and test sets respectively;
(3) selecting the preprocessed sentences of the multi-modal classified corpus to construct a complex-valued neural network model, and further establishing the loss function of the complex-valued neural network model:
L = −Σ_i y_i·log(ŷ_i)
wherein y_i represents the true category label and ŷ_i represents the predicted result;
(4) training the complex-valued neural network model on the training set to obtain a semantic classification model;
(5) verifying the effect of the trained neural network model on the validation set, and recording and saving the model parameters when they reach their optimum;
(6) testing the samples of the test set with the optimal model parameters saved in the previous step, finally obtaining the prediction result of each test sample, comparing against the test labels, and calculating the classification accuracy.
The construction of the complex-valued neural network model in step (3) comprises the following steps:
3.1, construct an absolute position index for each word according to its position in the chapter or sentence, namely:
p_i = e^(iβ·pos)
wherein p_i is the position vector of the current word, pos is its absolute position, β is its initialized period, and i is the imaginary unit;
3.2, obtain the 300-dimensional word vector w_i of each word in the chapter or sentence with the GloVe tool, meanwhile build a position-information matrix in which each position index corresponds to a 300-dimensional position vector p_i, and further construct the relative position index of each word, namely:
x_i = w_i·e^(iβ·pos)
3.3, input the word vector of each word of the text together with its relative position information vector into the Fasttext, LSTM, and CNN networks in sentence order; the specific calculation formulas are as follows:
Fasttext:
Z_C = σ(Ax − By + b_r) + iσ(Bx + Ay + b_i)
CNN:
Z_C = Ax − By + i(Bx + Ay)
wherein σ denotes the sigmoid activation function, A and B denote the real and imaginary parts of the weights, respectively, and x and y denote the real and imaginary parts of the input features;
RNN:
h_t = RNN(x_t, h_{t−1})
wherein x_t denotes the input for each word and h_t denotes the resulting network hidden-layer state;
3.4, input the final network hidden-layer output h_t from the previous step into a nonlinear fully connected neural network layer to obtain the neural network representation vector x, and then input the representation vector x into the softmax classification layer to output the final class y:
y = softmax(W_s·x + b_s)
the relative position index of each word in step 3.2 is a vector representation of the chapter or sentence obtained by the following formula:
relative position between word dimensions:
Figure BDA0002210909170000035
wherein: x represents a vector representation of a word, wj,kSemantic vectors representing words, n represents word intervals, pos, j, k are as defined above;
relative position between words:
Figure BDA0002210909170000036
wherein: p, k represents the kth dimension of the different position word.
The invention has the beneficial effects that:
(1) an effective entity text corpus is built, overcoming the current shortage of text classification corpora;
(2) a complex-valued neural network model framework based on position information is developed, using the relative position information of words in texts or sentences as features for the classification task.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a 3-dimensional complex word vector distribution diagram;
FIG. 3 is a comparison of classification model run times based on different word vectors;
FIG. 4 is a comparison of similarity scores of different words for different word vectors in the same sentence.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following. FIG. 1 shows the flow of the neural network classification method based on complex-valued vector representations of position and semantics proposed by this method; FIG. 2 shows a possible distribution of 3-dimensional complex word vectors; FIG. 3 shows the run-time comparison of classification models based on different word vectors; FIG. 4 compares the similarity of different words for different word vectors in the same sentence.
the traditional extraction, arrangement and processing of webpage text contents containing keywords are completely manual, labor and time are wasted, the efficiency is low by a simple manual method along with the explosive increase of webpage data, and under the condition, the system uses a web crawler to obtain the webpage contents, and an effective keyword webpage corpus is formed through data processing.
Establishing a multi-modal data set based on the acquired webpage data, and specifically comprising the following steps:
(1): the method comprises the steps of using a jieba word segmentation tool to segment chapters and sentences, removing stop words after word segmentation, useless punctuation and the like, and accordingly constructing a multi-mode classification corpus. The multi-modal classified corpus is a text classified corpus, the total sample number of the corpus is N, and each sample comprises a section of text;
(2): from the multi-modal classified corpus constructed in step (1), randomly select 80% of the texts as the training set, 10% of the texts as the validation set, and the remaining 10% of the texts as the test set, and preprocess the training, validation, and test sets respectively;
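The following is a minimal Python sketch of steps (1)–(2), not the patent's own implementation: the stop-word list, the toy corpus, and the label scheme are hypothetical stand-ins, and only jieba's standard lcut API is assumed.

```python
import random
import jieba

# Hypothetical stop-word list; the patent does not specify one
STOPWORDS = {"的", "了", "，", "。"}

def preprocess(text):
    # Step (1): jieba word segmentation, then removal of stop words and useless punctuation
    return [tok for tok in jieba.lcut(text) if tok.strip() and tok not in STOPWORDS]

# Toy corpus of (text, label) pairs standing in for the N collected samples
corpus = [("这部电影非常精彩。", 1), ("剧情无聊，浪费时间。", 0)] * 50
samples = [(preprocess(text), label) for text, label in corpus]

# Step (2): random 80% / 10% / 10% split into training, validation, and test sets
random.shuffle(samples)
n = len(samples)
train_set = samples[: int(0.8 * n)]
valid_set = samples[int(0.8 * n): int(0.9 * n)]
test_set = samples[int(0.9 * n):]
```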
(3): the preprocessed chapters or sentences, with the absolute position information of their words as features, are input into the complex-valued Fasttext, LSTM, and CNN neural network models respectively; the method is applied as follows:
3.1: construct the absolute position index of each word of a chapter or sentence according to the word's position; if a word appears at the beginning of the sentence, its position index is labeled 1, with period 2π/β, and the position indexes of the other words in the sentence increase successively:
p_i = e^(iβ·pos)
wherein p_i is the position vector of the current word, pos is its absolute position, β is its initialized period, and i is the imaginary unit.
3.2: obtain the 300-dimensional word vector w_i of each word in a chapter or sentence with the GloVe tool; meanwhile, build a position-information matrix in which each position index corresponds to a 300-dimensional position vector p_i. In the model initialization stage this parameter matrix is initialized from a uniform distribution, and it is updated during model training. At this step we obtain the representation of each word in the chapters and sentences: x_i = w_i·e^(iβ·pos).
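As a concrete illustration of the representation x_i = w_i·e^(iβ·pos), the following NumPy sketch builds the complex-valued embeddings under stated assumptions: random vectors stand in for the 300-dimensional GloVe vectors, and each dimension k is given its own uniformly initialized period parameter β_k.

```python
import numpy as np

DIM, VOCAB = 300, 10_000
rng = np.random.default_rng(0)

# Stand-ins for the 300-dimensional GloVe word vectors (the amplitudes w_i)
W = rng.normal(size=(VOCAB, DIM))
# Period parameters beta, one per dimension, uniformly initialized and updated in training
beta = rng.uniform(0.0, 2 * np.pi, size=DIM)

def complex_embed(token_ids):
    """x_{j,k} = w_{j,k} * exp(i * beta_k * pos_j): amplitude carries semantics, phase carries position."""
    pos = np.arange(1, len(token_ids) + 1)[:, None]  # absolute positions 1..T, column vector
    amplitude = W[token_ids]                         # (T, DIM) real word vectors
    phase = np.exp(1j * beta[None, :] * pos)         # (T, DIM) complex position factors
    return amplitude * phase                         # (T, DIM) complex-valued representations

x = complex_embed(rng.integers(0, VOCAB, size=12))
print(x.shape, x.dtype)  # (12, 300) complex128
```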
The relative position index of each word in step 3.2 yields a vector representation of the chapter or sentence. Writing the representation per dimension, the relative position between word dimensions is derived as follows:
x_{j,k} = w_{j,k}·e^(iβ_k·pos_j)
and the relative position between words is derived as follows:
x_{pos+n,k} = w_k·e^(iβ_k·(pos+n)) = x_{pos,k}·e^(iβ_k·n)
i.e., shifting a word by an interval of n positions only rotates the phase of its representation by e^(iβ_k·n).
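The property expressed by the second derivation, that an interval of n positions corresponds to a pure phase rotation e^(iβ_k·n) independent of the absolute position, can be checked numerically; β_k and the amplitudes below are arbitrary illustrative values.

```python
import numpy as np

beta_k = 0.37                      # one period parameter (arbitrary value)
w1, w2 = 1.25, 0.80                # amplitudes of two words in dimension k

def x(amp, pos):
    return amp * np.exp(1j * beta_k * pos)

# The same interval n = 3, taken at two different absolute positions
a = x(w1, pos=2) * np.conj(x(w2, pos=5))
b = x(w1, pos=40) * np.conj(x(w2, pos=43))
print(np.allclose(a, b))           # True: the product depends only on the interval n
```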
3.3: the word vector of each word of the text, together with its position information vector, is input into the Fasttext, LSTM, and CNN networks in sentence order; the specific calculation formulas are as follows:
Fasttext:
Z_C = σ(Ax − By + b_r) + iσ(Bx + Ay + b_i)
CNN:
Z_C = Ax − By + i(Bx + Ay)
RNN:
h_t = RNN(x_t, h_{t−1})
wherein σ denotes the sigmoid activation function, A and B respectively denote the real and imaginary parts of the weights, x and y denote the real and imaginary parts of the input features, and the weights, inputs, and outputs in the RNN formula are all complex-valued. Using the above formulas, the 300-dimensional representation x_t of each word is input, and a 128-dimensional network hidden-layer state representation h_t is obtained for each word.
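A NumPy sketch of the complex-valued feed-forward computation above, with the 300-dimensional input and 128-dimensional hidden state from this step; the weights here are random stand-ins, and the sigmoid applied separately to the real and imaginary parts follows the Fasttext formula (dropping the activation gives the linear CNN variant).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def complex_dense(z, A, B, b_r, b_i, activate=True):
    """Z_C = sigma(Ax - By + b_r) + i*sigma(Bx + Ay + b_i) for complex input z = x + iy."""
    x, y = z.real, z.imag
    real = A @ x - B @ y + b_r     # real part of the complex affine map
    imag = B @ x + A @ y + b_i     # imaginary part of the complex affine map
    if activate:                   # Fasttext variant; the CNN variant omits the activation
        real, imag = sigmoid(real), sigmoid(imag)
    return real + 1j * imag

rng = np.random.default_rng(1)
word = rng.normal(size=300) + 1j * rng.normal(size=300)          # one complex word representation
A, B = rng.normal(size=(128, 300)), rng.normal(size=(128, 300))  # real / imaginary weight parts
hidden = complex_dense(word, A, B, np.zeros(128), np.zeros(128))
print(hidden.shape)  # (128,) complex hidden-layer state
```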
3.4: the network hidden-layer state representation h_t of each word of the text, obtained in the previous step, is input into a nonlinear fully connected neural network layer to obtain the neural network representation vector x; the representation x is then input into the softmax classification layer to output the final class y:
y = softmax(W_s·x + b_s)
Define the complex-valued neural network model loss function as:
L = −Σ_i y_i·log(ŷ_i)
wherein y_i represents the true category label and ŷ_i represents the predicted result. The model is trained by the backpropagation algorithm with the mini-batch (batch size 32) Adam gradient descent method.
Train the model on the training set; every 100 batches, verify the model's effect on the validation set, and record and save the model parameters that achieve the best validation effect.
Test the samples of the test set with the optimal model saved in the previous step, finally obtaining the prediction result of each test sample; compare against the test labels and calculate the classification accuracy. Run times are tabulated in comparison with the Fasttext model, the convolutional neural network model, and the LSTM model, so that the clear improvement of the analysis model can be observed intuitively, as shown in FIG. 3. To further demonstrate the effectiveness of the method, we randomly select a sentence from the data set and calculate similarity scores between its different 3-grams. As can be seen from FIG. 4, when plain word embedding is used directly, the score between "good except" and "except good" is very large (the color is greenish); with the Transformer position construction method, the scores are basically fixed, i.e., the weight of the position vector is large and seriously affects the final result (though it is regular); finally, FIG. 4(c) shows that our complex-valued word vectors yield very small scores (yellow in color), since our word vectors can smooth word vectors and position vectors, thereby reducing the parameters trained in the network.
The technical means disclosed in the solution of the present invention are not limited to those disclosed in the above embodiments, and also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (3)

1. A complex-valued word vector construction method based on position and semantics, characterized by comprising the following steps:
(1) adopting a jieba word segmentation tool to segment the chapters and the sentences so as to construct a multi-modal classified corpus set;
(2) from the multi-modal classified corpus constructed in step (1), randomly selecting 80% of the N samples as the training set, 10% of the N samples as the validation set, and the remaining 10% of the N samples as the test set, and preprocessing the training, validation, and test sets respectively;
(3) selecting the preprocessed sentences of the multi-modal classified corpus to construct a complex-valued neural network model, and further establishing the loss function of the complex-valued neural network model:
L = −Σ_i y_i·log(ŷ_i)
wherein y_i represents the true category label and ŷ_i represents the predicted result;
(4) training the complex-valued neural network model on the training set to obtain a semantic classification model;
(5) verifying the effect of the trained neural network model on the validation set, and recording and saving the model parameters when they reach their optimum;
(6) testing the samples of the test set with the optimal model parameters saved in the previous step, finally obtaining the prediction result of each test sample, comparing against the test labels, and calculating the classification accuracy.
2. The method for constructing complex-valued word vectors based on position and semantics as claimed in claim 1, wherein the construction of the complex-valued neural network model in step (3) comprises the following steps:
3.1, construct an absolute position index for each word according to its position in the chapter or sentence, namely:
p_i = e^(iβ·pos)
wherein p_i is the position vector of the current word, pos is its absolute position, β is its initialized period, and i is the imaginary unit;
3.2, obtain the 300-dimensional word vector w_i of each word in the chapter or sentence with the GloVe tool, meanwhile build a position-information matrix in which each position index corresponds to a 300-dimensional position vector p_i, and further construct the relative position index of each word, namely:
x_i = w_i·e^(iβ·pos)
3.3, input the word vector of each word of the text together with its relative position information vector into the Fasttext, LSTM, and CNN networks in sentence order; the specific calculation formulas are as follows:
Fasttext:
Z_C = σ(Ax − By + b_r) + iσ(Bx + Ay + b_i)
CNN:
Z_C = Ax − By + i(Bx + Ay)
wherein σ denotes the sigmoid activation function, A and B denote the real and imaginary parts of the weights, respectively, and x and y denote the real and imaginary parts of the input features;
RNN:
h_t = RNN(x_t, h_{t−1})
wherein x_t denotes the input for each word and h_t denotes the resulting network hidden-layer state;
3.4, input the final network hidden-layer output h_t from the previous step into a nonlinear fully connected neural network layer to obtain the neural network representation vector x, and then input the representation vector x into the softmax classification layer to output the final class y:
y = softmax(W_s·x + b_s)
3. The neural network method for complex-valued word vector representation based on position and semantics as claimed in claim 2, wherein the relative position index of each word in step 3.2 yields a vector representation of the chapter or sentence through the following formulas:
relative position between word dimensions:
x_{j,k} = w_{j,k}·e^(iβ_k·pos_j)
wherein: x represents the vector representation of a word, w_{j,k} represents the semantic vector of the word, n represents the word interval, and pos, j, k are as defined above;
relative position between words:
x_{pos+n,k} = x_{pos,k}·e^(iβ_k·n)
wherein: k indexes the k-th dimension of words at different positions.
CN201910898057.0A 2019-09-23 2019-09-23 Complex value word vector construction method based on position and semantics Active CN110851593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910898057.0A CN110851593B (en) 2019-09-23 2019-09-23 Complex value word vector construction method based on position and semantics


Publications (2)

Publication Number Publication Date
CN110851593A 2020-02-28
CN110851593B (en) 2024-01-05

Family

ID=69596047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910898057.0A Active CN110851593B (en) 2019-09-23 2019-09-23 Complex value word vector construction method based on position and semantics

Country Status (1)

Country Link
CN (1) CN110851593B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030216919A1 (en) * 2002-05-13 2003-11-20 Roushar Joseph C. Multi-dimensional method and apparatus for automated language interpretation
CN106776581A (en) * 2017-02-21 2017-05-31 浙江工商大学 Subjective texts sentiment analysis method based on deep learning
CN108363714A (en) * 2017-12-21 2018-08-03 北京至信普林科技有限公司 A kind of method and system for the ensemble machine learning for facilitating data analyst to use
CN108363769A (en) * 2018-02-07 2018-08-03 大连大学 The method for building up of semantic-based music retrieval data set
CN109522548A (en) * 2018-10-26 2019-03-26 天津大学 A kind of text emotion analysis method based on two-way interactive neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
胡朝举; 赵晓伟 (Hu Chaoju; Zhao Xiaowei): "Sentiment analysis based on word vector technology and hybrid neural networks" *
郭文姣; 欧阳昭连; 李阳; 郭柯磊; 杜然然; 池慧 (Guo Wenjiao; Ouyang Zhaolian; Li Yang; Guo Kelei; Du Ranran; Chi Hui): "Applying co-word analysis to reveal research topics in the field of biomedical engineering" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111897961A (en) * 2020-07-22 2020-11-06 深圳大学 Text classification method and related components of wide neural network model
CN114970497A (en) * 2022-06-02 2022-08-30 中南大学 Text classification method and word sense disambiguation method based on pre-training feature embedding
CN114970497B (en) * 2022-06-02 2023-05-16 中南大学 Text classification method and word sense disambiguation method based on pre-training feature embedding
CN115934752A (en) * 2022-12-09 2023-04-07 北京中科闻歌科技股份有限公司 Method for constructing retrieval model, electronic equipment and storage medium
CN116112320A (en) * 2023-04-12 2023-05-12 广东致盛技术有限公司 Method and device for constructing edge computing intelligent gateway based on object model

Also Published As

Publication number Publication date
CN110851593B (en) 2024-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant